January 19, 2017

Plotting Shapes with Python Basemap without Shapefiles

In my article on Plotting election (and other county-level) data with Python Basemap, I used ESRI shapefiles for both states and counties.

But one of the election data files I found, OpenDataSoft's USA 2016 Presidential Election by county, had embedded county shapes, available either as CSV or as GeoJSON. (I used the CSV version, but inside the CSV the geo data are encoded as JSON, so you'll need JSON decoding either way. But that's no problem.)

Just about all the documentation I found on coloring shapes in Basemap assumed that the shapes were defined as ESRI shapefiles. How do you draw shapes if you have latitude/longitude data in a more open format?

As it turns out, it's quite easy, but it took a fair amount of poking around inside Basemap to figure out how it worked.

In the loop over counties in the US in the previous article, the end goal was to create a matplotlib Polygon and use that to add a Basemap patch. But matplotlib's Polygon wants map coordinates, not latitude/longitude.

If m is your basemap (i.e. you created the map with m = Basemap( ... )), you can translate coordinates like this:

    (mapx, mapy) = m(longitude, latitude)
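
As far as I can tell from the Basemap documentation, the same call also works in reverse with an inverse flag, in case you ever need to go from map coordinates back to longitude/latitude:

    (longitude, latitude) = m(mapx, mapy, inverse=True)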

So once you have a region as a list of (longitude, latitude) coordinate pairs, you can create a colored, shaped patch like this:

    # Convert each coordinate pair from (lon, lat) to map (x, y) in place:
    for coord_pair in region:
        coord_pair[0], coord_pair[1] = m(coord_pair[0], coord_pair[1])
    poly = Polygon(region, facecolor=color, edgecolor=color)
    ax.add_patch(poly)

Working with the OpenDataSoft data file was actually a little harder than that, because the list of coordinates was JSON-encoded inside the CSV file, so I had to decode it with json.loads(county["Geo Shape"]). Once decoded, it had some counties as a Polygon, a list of lists of coordinates (allowing for discontiguous outlines), and others as a MultiPolygon, a list of lists of lists (I'm not sure why, since the Polygon format already allows for discontiguous boundaries).
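
Here's a minimal sketch of that decoding step, normalizing both cases to a flat list of coordinate-pair lists (the "Geo Shape" key comes from the OpenDataSoft CSV; the helper name is mine):

    import json

    def get_coord_lists(county):
        '''Return a list of coordinate-pair lists for a county row,
           whether its "Geo Shape" is a Polygon or a MultiPolygon.
        '''
        shape = json.loads(county["Geo Shape"])
        if shape["type"] == "Polygon":
            # A Polygon is already a list of coordinate lists.
            return shape["coordinates"]
        if shape["type"] == "MultiPolygon":
            # A MultiPolygon is a list of Polygons: flatten one level.
            return [coords for poly in shape["coordinates"]
                           for coords in poly]
        return []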

[Blue-red-purple 2016 election map]

And a few counties were missing, so there were blanks on the map, which show up as white patches in this screenshot. The counties missing data either have inconsistent formatting in their coordinate lists or have only one coordinate pair. They include Washington, Virginia; Roane, Tennessee; Schley, Georgia; Terrell, Georgia; Marshall, Alabama; Williamsburg, Virginia; and Pike, Georgia; plus Oglala Lakota (which is clearly meant to be Oglala, South Dakota), and all of Alaska.

One thing about crunching data files from the internet is that there are always a few special cases you have to code around. I could have gotten those missing coordinates from the census shapefiles; but if I needed the census shapefiles anyway, why use the CSV shapes at all? In this particular case, it makes more sense to use the shapefiles from the Census.

Still, I'm glad to have learned how to use arbitrary coordinates as shapes, freeing me from the proprietary and annoying ESRI shapefile format.

The code: Blue-red map using CSV with embedded county shapes

January 18, 2017

Comics page…

A little post to tell you that I finally added a page on my website with all my comics. Better late than never.
They were all released previously on my blog, and some of them were missing the license info, which is now on this page. Also, I re-licensed some pages from CC BY-NC-ND to CC BY-SA some time ago in a blog post; this page makes that more obvious.

Link to my comics, enjoy :)

(… yes, I know, I really should update my website…)

January 14, 2017

Plotting election (and other county-level) data with Python Basemap

After my arduous search for open 2016 election data by county, as a first test I wanted one of those red-blue-purple charts of how Democratic or Republican each county's vote was.

I used the Basemap package for plotting. It used to be part of matplotlib, but it's been split off into its own toolkit, grouped under mpl_toolkits: on Debian, it's available as python-mpltoolkits.basemap, or you can find Basemap on GitHub.

It's easiest to start with the fillstates.py example that shows how to draw a US map with different states colored differently. You'll need the three shapefiles (because of ESRI's silly shapefile format): st99_d00.dbf, st99_d00.shp and st99_d00.shx, available in the same examples directory.

Of course, to plot counties, you need county shapefiles as well. The US Census has county shapefiles at several different resolutions (I used the 500k version). Then you can plot state and county outlines like this:

from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

def draw_us_map():
    # Set the lower left and upper right limits of the bounding box:
    lllon = -119
    urlon = -64
    lllat = 22.0
    urlat = 50.5
    # and calculate a centerpoint, needed for the projection:
    centerlon = float(lllon + urlon) / 2.0
    centerlat = float(lllat + urlat) / 2.0

    m = Basemap(resolution='i',  # crude, low, intermediate, high, full
                llcrnrlon = lllon, urcrnrlon = urlon,
                lon_0 = centerlon,
                llcrnrlat = lllat, urcrnrlat = urlat,
                lat_0 = centerlat,
                projection='tmerc')

    # Read state boundaries.
    shp_info = m.readshapefile('st99_d00', 'states',
                               drawbounds=True, color='lightgrey')

    # Read county boundaries
    shp_info = m.readshapefile('cb_2015_us_county_500k',
                               'counties',
                               drawbounds=True)

if __name__ == "__main__":
    draw_us_map()
    plt.title('US Counties')
    # Get rid of some of the extraneous whitespace matplotlib loves to use.
    plt.tight_layout(pad=0, w_pad=0, h_pad=0)
    plt.show()

[Simple map of US county borders]

Accessing the state and county data after reading shapefiles

Great. Now that we've plotted all the states and counties, how do we get a list of them, so that when I read out "Santa Clara, CA" from the data I'm trying to plot, I know which map object to color?

After calling readshapefile('st99_d00', 'states'), m has two new members, both lists: m.states and m.states_info.

m.states_info[] is a list of dicts mirroring what was in the shapefile. For the Census state list, the useful keys are NAME, AREA, and PERIMETER. There's also STATE, which is an integer (not restricted to 1 through 50), but I'll get to that.

If you want the shape for, say, California, iterate through m.states_info[] looking for the one where m.states_info[i]["NAME"] == "California". Note the index i; the shape coordinates will be in m.states[i] (in basemap map coordinates, not latitude/longitude).
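
For example, a quick way to collect all of California's shapes (a state can have several entries, because of islands):

    cal_shapes = [m.states[i] for i, info in enumerate(m.states_info)
                  if info["NAME"] == "California"]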

Correlating states and counties in Census shapefiles

County data is similar, with county names in m.counties_info[i]["NAME"]. Remember that STATE integer? Each county has a STATEFP, m.counties_info[i]["STATEFP"], that matches some state's m.states_info[j]["STATE"].

But doing that search every time would be slow. So right after calling readshapefile for the states, I make a table of states. Empirically, STATE in the state list goes up to 72. Why 72? Shrug.

    MAXSTATEFP = 73
    states = [None] * MAXSTATEFP
    for state in m.states_info:
        statefp = int(state["STATE"])
        # Many states have multiple entries in m.states (because of islands).
        # Only add it once.
        if not states[statefp]:
            states[statefp] = state["NAME"]

That'll make it easy to look up a county's state name quickly when we're looping through all the counties.

Calculating colors for each county

Time to figure out the colors from the Deleetdk election results CSV file. Reading lines from the CSV file into a dictionary is superficially easy enough:

    import csv

    fp = open("tidy_data.csv")
    reader = csv.DictReader(fp)

    # Make a dictionary of all "county, state" and their colors.
    county_colors = {}
    for county in reader:
        # What color is this county? The fraction of the vote for each
        # major candidate becomes the red and blue color components.
        pop = float(county["votes"])
        blue = float(county["results.clintonh"])/pop
        red = float(county["results.trumpd"])/pop
        county_colors["%s, %s" % (county["name"], county["State"])] \
            = (red, 0, blue)

But in practice, that wasn't good enough, because the county names in the Deleetdk data didn't always match the official Census county names.

Fuzzy matches

For instance, the CSV file had no results for Alaska or Puerto Rico, so I had to skip those. Non-ASCII characters were a problem: "Doña Ana" county in the census data was "Dona Ana" in the CSV. I had to strip off " County", " Borough" and similar terms: "St Louis" in the census data was "St. Louis County" in the CSV. Some names were capitalized differently, like PLYMOUTH vs. Plymouth, or Lac Qui Parle vs. Lac qui Parle. And some names were just different, like "Jeff Davis" vs. "Jefferson Davis".
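
Before falling back to fuzzy matching, it helps to normalize the obvious differences first. Something like this sketch works (the suffix list is illustrative, not exhaustive):

    import unicodedata

    def normalize_county_name(name):
        '''Normalize a county name for comparison: strip accents,
           lowercase, drop periods, and remove suffixes like " County".
        '''
        name = unicodedata.normalize('NFKD', name)
        name = name.encode('ascii', 'ignore').decode('ascii')
        name = name.lower().replace(".", "").strip()
        for suffix in (" county", " borough", " parish"):
            if name.endswith(suffix):
                name = name[:-len(suffix)]
        return name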

To get around that I used SequenceMatcher to look for fuzzy matches when I couldn't find an exact match:

from difflib import SequenceMatcher

def fuzzy_find(s, slist):
    '''Try to find a fuzzy match for s in slist.
    '''
    best_ratio = -1
    best_match = None

    ls = s.lower()
    for ss in slist:
        r = SequenceMatcher(None, ls, ss.lower()).ratio()
        if r > best_ratio:
            best_ratio = r
            best_match = ss
    if best_ratio > .75:
        return best_match
    return None

Correlate the county names from the two datasets

It's finally time to loop through the counties in the map to color and plot them.

Remember STATE vs. STATEFP? It turns out there are a few counties in the census county shapefile with a STATEFP that doesn't match any STATE in the state shapefile. Mostly they're in the Virgin Islands and I don't have election data for them anyway, so I skipped them for now. I also skipped Puerto Rico and Alaska (no results in the election data) and counties that had no corresponding state: I'll omit that code here, but you can see it in the final script, linked at the end.

    for i, county in enumerate(m.counties_info):
        countyname = county["NAME"]
        try:
            statename = states[int(county["STATEFP"])]
        except IndexError:
            print countyname, "has out-of-index statefp of", county["STATEFP"]
            continue

        countystate = "%s, %s" % (countyname, statename)
        try:
            ccolor = county_colors[countystate]
        except KeyError:
            # No exact match; try for a fuzzy match
            fuzzyname = fuzzy_find(countystate, county_colors.keys())
            if fuzzyname:
                ccolor = county_colors[fuzzyname]
                county_colors[countystate] = ccolor
            else:
                print "No match for", countystate
                continue

        countyseg = m.counties[i]
        poly = Polygon(countyseg, facecolor=ccolor)  # edgecolor="white"
        ax.add_patch(poly)

Moving Hawaii

Finally, although the CSV didn't have results for Alaska, it did have Hawaii. To display it, you can move it when creating the patches:

    countyseg = m.counties[i]
    if statename == 'Hawaii':
        countyseg = list(map(lambda (x,y): (x + 5750000, y-1400000), countyseg))
    poly = Polygon(countyseg, facecolor=ccolor)
    ax.add_patch(poly)
The offsets are in map coordinates and are empirical; I fiddled with them until Hawaii showed up at a reasonable place.

[Blue-red-purple 2016 election map]

Well, that was a much longer article than I intended. Turns out it takes a fair amount of code to correlate several datasets and turn them into a map. But a lot of the work will be applicable to other datasets.

Full script on GitHub: Blue-red map using Census county shapefile

January 12, 2017

Getting Election Data, and Why Open Data is Important

Back in 2012, I got interested in fiddling around with election data as a way to learn about data analysis in Python. So I went searching for results data on the presidential election. And got a surprise: it wasn't available anywhere in the US. After many hours of searching, the only source I ever found was at the UK newspaper, The Guardian.

Surely in 2016, we're better off, right? But when I went looking, I found otherwise. There's still no official source for US election results data; there isn't even a source as reliable as The Guardian this time.

You might think Data.gov would be the place to go for official election results, but no: searching for "2016 election" on Data.gov yields nothing remotely useful.

The Federal Election Commission has an election results page, but it only goes up to 2014 and only includes the Senate and House, not presidential elections. Archives.gov has popular vote totals for the 2012 election but not the current one. Maybe in four years, they'll have some data.

After striking out on official US government sites, I searched the web. I found a few sources, none of them even remotely official.

Early on I found Simon Rogers, How to Download County-Level Results Data, which leads to GitHub user tonmcg's County Level Election Results 12-16. It's a comparison of Democratic vs. Republican votes in the 2012 and 2016 elections (I assume that means votes for that party's presidential candidate, though the field names don't make that entirely clear), with no information on third-party candidates.

KidPixo's Presidential Election USA 2016 on GitHub is a little better: the fields make it clear that it's recording votes for Trump and Clinton, but still no third party information. It's also scraped from the New York Times, but it includes the scraping code, so you can check it and have some confidence in the source of the data.

Kaggle claims to have election data, but you can't download their datasets or even see what they have without signing up for an account. Ben Hamner has some publicly available Kaggle data on GitHub, but only for the primary. I also found several companies selling election data, and several universities that had datasets available for researchers with accounts at that university.

The most complete dataset I found, and the only open one that included third party candidates, was through OpenDataSoft. Like the other two, this data is scraped from the NYT. It has data for all the minor party candidates as well as the majors, plus lots of demographic data for each county in the lower 48, plus Hawaii, but not the territories, and the election data for all the Alaska counties is missing.

You can get it from a GitHub repo, Deleetdk's USA.county.data (look in inst/ext/tidy_data.csv). If you want a larger version with geographic shape data included, clicking through several other OpenDataSoft pages eventually gets you to an export page, USA 2016 Presidential Election by county, where you can download CSV, JSON, GeoJSON and other formats.

The OpenDataSoft data file was pretty useful, though it had gaps (for instance, there's no data for Alaska). I was able to make my own red-blue-purple plot of county voting results (I'll write separately about how to do that with python-basemap), and to play around with statistics.

Implications of the lack of open data

But the point my search really brought home: By the time I finally found a workable dataset, I was so sick of the search, and so relieved to find anything at all, that I'd stopped being picky about where the data came from. I had long since given up on finding anything from a known source, like a government site or even a newspaper, and was just looking for data, any data.

And that's not good. It means that a lot of the people doing statistics on elections are using data from unverified sources, probably copied from someone else who claimed to have scraped it, using unknown code, from some post-election web page that likely no longer exists. Is it accurate? There's no way of knowing.

What if someone wanted to spread fake news and misinformation? There's a hunger for data, particularly on something as important as a US Presidential election. Looking at Google's suggested results and "Searches related to" made it clear that it wasn't just me: a lot of people are searching for this information and not finding it through official sources.

If I were a foreign power wanting to spread disinformation, providing easily available data files -- to fill the gap left by the US Government's refusal to do so -- would be a great way to mislead people. I could put anything I wanted in those files: there's no way of checking them against official results since there are no official results. Just make sure the totals add up to what people expect to see. You could easily set up an official-looking site and put made-up data there, and it would look a lot more real than all the people scraping from the NYT.

If our government -- or newspapers, or anyone else -- really wanted to combat "fake news", they should take open data seriously. They should make datasets for important issues like the presidential election publicly available, as soon as possible after the election -- not four years later when nobody but historians cares any more. Without that, we're leaving ourselves open to fake news and fake data.

New Year, New Raw Samples Website

A replacement for rawsamples.ch

Happy New Year, and I hope everyone has had a wonderful holiday!

We’ve been busy working on various things ourselves, including migrating RawPedia to a new server as well as building a replacement raw sample database/website to alleviate the problems that rawsamples.ch was having…

rawsamples.ch Replacement

Rawsamples.ch is a website with the goal to:

…provide RAW-Files of nearly all available Digitalcameras mainly to software-developers. [sic]

It was created by Jakob Rohrbach and had been running since March 2007, having amassed over 360 raw files in that time from various manufacturers and cameras. Unfortunately, back in 2016 the site was hit with a SQL-injection that ended up corrupting the database for the Joomla install that hosted the site. To compound the pain, there were no database backups… :(

On the good side, the PIXLS.US community has some dangerous folks with idle hands. Our friendly, neighborhood @andabata (Kees Guequierre) had some time off at the end of the year and a desire to build something. You may know @andabata as the fellow responsible for the super-useful dtstyle website, which is chock full of darktable styles to peruse and download (if you haven’t heard of it before – you’re welcome!). He’s also my go-to for macro photography and is responsible for this awesome image used on a slide for the Libre Graphics Meeting:

PIXLS.US LGM Slide

Luckily, he decided to build a site where contributors could upload sample raw files from their cameras for everyone to use – particularly developers. We downloaded the archive of the raw files kept at rawsamples.ch to include with files that we already had. The biggest difference between the files from rawsamples.ch and raw.pixls.us is the licensing. The existing files, and the preference for any new contributions, are licensed as Creative Commons Zero - Public Domain (as opposed to CC-BY-NC-SA).

After some hacking, with input and guidance from darktable developer Roman Lebedev, the site was finally ready. The repository for it can be found on GitHub: raw.pixls.us repo.

raw.pixls.us

The site is now live at https://raw.pixls.us.

You can look at the submitted files and search/sort through all of them (and download the ones you want).

In addition to browsing the archive, it would be fantastic if you were able to supplement the database by uploading sample images. Many of the files from the rawsamples.ch archive are licensed CC-BY-NC-SA, but we’d rather have the files licensed Creative Commons Zero - Public Domain. CC0 is preferable because if the sample raw files are separated from the database, they can safely be redistributed without attribution. So if you have a camera that is already in the list with the more restrictive license, then please consider uploading a replacement for us!

We are looking for shots that are:

  • Lens mounted on the camera
  • Lens cap off
  • In focus
  • Properly exposed (not over/under)
  • Landscape orientation
  • Licensed under the Creative Commons Zero

We are not looking for:

  • Series of images with different ISO, aperture, shutter, wb, or lighting
    (Even if it’s a shot of a color target)
  • DNG files created with Adobe DNG Converter

Please take a moment and see if you can provide samples to help the developers!

Wed 2017/Jan/11

  • Reproducible font rendering for librsvg's tests

    The official test suite for SVG 1.1 consists of a bunch of SVG test files that use many of the features in the SVG specification. The test suite comes with reference PNGs: your SVG renderer is supposed to produce images that look like those PNGs.

    I've been adding test files from that test suite to librsvg as I convert things to Rust, and also when I refactor code that touches code for a particular kind of SVG element or filter.

    The SVG test suite is not a drop-in solution, however. The spec does not specify pixel-exact rendering, and it doesn't mandate any specific kind of font rendering, either. The test suite is meant for eyeballing whether tests render correctly, and each test has instructions on what to look for; it is not meant for automatic testing.

    The test files include text elements, and the font for those texts is specified in an interesting way. SVG supports referencing "SVG fonts": your image_with_text_in_it.svg can specify that it will reference my_svg_font.svg, and that file will have individual glyphs defined as normal SVG objects. "You draw an a with this path definition", etc.

    Librsvg doesn't support SVG fonts yet. (Patches appreciated!) As a provision for renderers which don't support SVG fonts, the test suite specifies fallbacks with well-known names like "sans-serif" and such.

    In the GNOME world, "sans-serif" resolves to whatever Fontconfig decides. Various things contribute to the way fonts are resolved:

    • The fonts that are installed on a particular machine.

    • The Fontconfig configuration that is on a particular machine: each distro may decide to resolve fonts in slightly different ways.

    • The user's personal ~/.fonts, and whether they are running gnome-settings-daemon and whether it monitors that directory for Fontconfig's perusal.

    • Phase of the moon, checksum of the clouds, polarity of the yak fields, etc.

    For silly reasons, librsvg's "make distcheck" doesn't work when run as a user; I need to run it as root. And as root, my personal ~/.fonts doesn't get picked up, and my particular font rendering configuration is different from the system's default (why? I have no idea — maybe I selected specific hinting/antialiasing at some point?).

    It has taken a few tries to get reproducible font rendering for librsvg's tests. Without reproducible rendering, the images that get rendered from the test suite may not match the reference images, depending on the font renderer's configuration and the available fonts.

    Currently librsvg does two things to get reproducible font rendering for the test suite:

    • We use a specific cairo_font_options_t on our PangoContext. These options specify what antialiasing, hinting, and hint metrics to use, so that the environment's or user's configuration does not affect rendering.

    • We create a specific FcConfig and a PangoFontMap for testing, with a single font file that we ship and use just for the test suite. This will cause any font description, no matter if it is "sans-serif" or whatever, to resolve to that single font file. Special thanks to Christian Hergert for providing the relevant code from Gnome-builder.

    This seems to work fine. I can run "make check" both as my regular user with my private ~/.fonts stash, or as root with the system's configuration, and the test suite passes: the rendered SVGs match the reference PNGs that get shipped with librsvg. That means reproducible font rendering, at least on my machine. I'd love to know if this works on other people's boxes as well.

January 09, 2017

Snowy Winter Days, and an Elk Visit

[Snowy view of the Rio Grande from Overlook]

The snowy days here have been so pretty, the snow contrasting with the darkness of the piñons and junipers and the black basalt. The light fluffy crystals sparkle in a rainbow of colors when they catch the sunlight at the right angle, but I've been unable to catch that effect in a photo.

We've had some unusual holiday visitors, too, culminating in this morning's visit from a huge bull elk.

[bull elk in the yard] Dave came down to make coffee and saw the elk in the garden right next to the window. But by the time I saw him, he was farther out in the yard. And my DSLR batteries were dead, so I grabbed the point-and-shoot and got what I could through the window.

Fortunately for my photography the elk wasn't going anywhere in any hurry. He has an injured leg, and was limping badly. He slowly made his way down the hill and into the neighbors' yard. I hope he returns. Even with a limp that bad, an elk that size has no predators in White Rock, so as long as he stays off the nearby San Ildefonso reservation (where hunting is allowed) and manages to find enough food, he should be all right. I'm tempted to buy some hay to leave out for him.

[Sunset light on the Sangre de Cristos] Some of the sunsets have been pretty nice, too.

A few more photos.

January 08, 2017

Using virtualenv to replace the broken pip install --user

Python's installation tool, pip, has some problems on Debian.

The obvious way to use pip is as root: sudo pip install packagename. If you hang out in Python groups at all, you'll quickly find that this is strongly frowned upon. It can lead to your pip-installed packages intermingling with the ones installed by Debian's apt-get, possibly causing problems during apt system updates.

The second most obvious way, as you'll see if you read pip's man page, is pip install --user packagename. This installs the package with only user permissions, not root, under a directory called ~/.local. Python automatically checks ~/.local as part of its PYTHONPATH, and you can add ~/.local/bin to your PATH, so this makes everything transparent.

Or so I thought until recently, when I discovered that pip install --user ignores system-installed packages when it's calculating its dependencies, so you could end up with a bunch of incompatible versions of packages installed. Plus it takes forever to re-download and re-install dependencies you already had.

Pip has a clear page describing how pip --user is supposed to work, and that isn't what it's doing. So I filed pip bug 4222; but since pip has 687 open bugs filed against it, I'm not terrifically hopeful of that getting fixed any time soon. So I needed a workaround.

Use virtualenv instead of --user

Fortunately, it turned out that pip install works correctly in a virtualenv if you include the --system-site-packages option. I had thought virtualenvs were for testing, but quite a few people on #python said they used virtualenvs all the time, as part of their normal runtime environments. (Maybe due to pip's deficiencies?) I had heard people speak deprecatingly of --user in favor of virtualenvs but was never clear why; maybe this is why.

So, what I needed was to set up a virtualenv that I can keep around all the time and use by default every time I log in. I called it ~/.pythonenv when I created it:

virtualenv --system-site-packages $HOME/.pythonenv

Normally, the next thing you do after creating a virtualenv is to source a script called bin/activate inside the venv. That sets up your PATH, PYTHONPATH and a bunch of other variables so the venv will be used in all the right ways. But activate also changes your prompt, which I didn't want in my normal runtime environment. So I stuck this in my .zlogin file:

VIRTUAL_ENV_DISABLE_PROMPT=1 source $HOME/.pythonenv/bin/activate

Now I'll activate the venv once, when I log in (and once in every xterm window, since I set XTerm*loginShell: true in my .Xdefaults). I see my normal prompt, I can use the normal Debian-installed Python packages, and I can install additional PyPI packages with pip install packagename (no --user, no sudo).
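
A quick sanity check that the venv is really the active one and can still see the Debian packages (numpy here is just a stand-in for any system-installed module):

    import sys, numpy
    print sys.prefix       # should point at ~/.pythonenv
    print numpy.__file__   # should resolve under /usr/lib/python2.7/dist-packages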

January 04, 2017

Firefox "Reader Mode" and NoScript

A couple of days ago I blogged about using Firefox's "Delete Node" to make web pages more readable. In a subsequent Twitter discussion someone pointed out that if the goal is to make a web page's content clearer, Firefox's relatively new "Reader Mode" might be a better way.

I knew about Reader Mode but hadn't used it. It only shows up on some pages, as a little "open book" icon to the right of the URLbar, just left of the Refresh/Stop button. It did show up on the Pogue Yahoo article; but when I clicked it, I just got a big blank page with an icon of a circle with a horizontal dash, and no text.

It turns out that to see Reader Mode content while running NoScript, you must explicitly allow JavaScript for about:reader.

There are some reasons it's not automatically whitelisted: see discussions in bug 1158071 and bug 1166455 -- so enable it at your own risk. But it's nice to be able to use Reader Mode, and I'm glad the Twitter discussion spurred me to figure out why it wasn't working.

January 02, 2017

Firefox's "Delete Node" eliminates pesky content-hiding banners

It's trendy among web designers today -- the kind who care more about showing ads than about the people reading their pages -- to use fixed banner elements that hide part of the page. In other words, you have a header, some content, and maybe a footer; and when you scroll the content to get to the next page, the header and footer stay in place, meaning that you can only read the few lines sandwiched in between them. But at least you can see the name of the site no matter how far you scroll down in the article! Wouldn't want to forget the site name!

Worse, many of these sites don't scroll properly. If you Page Down, the content moves a full page up, which means that the top of the new page is now hidden under that fixed banner and you have to scroll back up a few lines to continue reading where you left off. David Pogue wrote about that problem recently and it got a lot of play when Slashdot picked it up: These 18 big websites fail the space-bar scrolling test.

It's a little too bad he concentrated on the spacebar. Certainly it's good to point out that hitting the spacebar scrolls down -- I was flabbergasted to read the Slashdot discussion and discover that lots of people didn't already know that, since it's been my most common way of paging since browsers were invented. (Shift-space does a Page Up.) But the Slashdot discussion then veered off into a chorus of "I've never used the spacebar to scroll so why should anyone else care?", when the issue has nothing to do with the spacebar: the issue is that Page Down doesn't work right, whichever key you use to trigger that page down.

But never mind that. Fixed headers that don't scroll are bad even if the content scrolls the right amount, because they waste precious vertical screen space on useless cruft you don't need. And I'm here to tell you that you can get rid of those annoying fixed headers, at least in Firefox.

[Article with intrusive Yahoo headers]

Let's take Pogue's article itself, since Yahoo is a perfect example of annoying content that covers the page and doesn't go away. First there's that enormous header -- the bottom row of menus ("Tech Home" and so forth) disappear once you scroll, but the rest stay there forever. Worse, there's that annoying popup on the bottom right ("Privacy | Terms" etc.) which blocks content, and although Yahoo! scrolls the right amount to account for the header, it doesn't account for that privacy bar, which continues to block most of the last line of every page.

The first step is to call up the DOM Inspector. Right-click on the thing you want to get rid of and choose Inspect Element:

[Right-click menu with Inspect Element]


That brings up the DOM Inspector window, which looks like this (click on the image for a full-sized view):

[DOM Inspector]

The upper left area shows the hierarchical structure of the web page.

Don't Panic! You don't have to know HTML or understand any of this for this technique to work.

Hover your mouse over the items in the hierarchy. Notice that as you hover, different parts of the web page are highlighted in translucent blue.

Generally, whatever element you started on will be a small part of the header you're trying to eliminate. Move up one line, to the element's parent; you may see that a bigger part of the header is highlighted. Move up again, and keep moving up, one line at a time, until the whole header is highlighted, as in the screenshot. There's also a dark grey window telling you something about the HTML, if you're interested; if you're not, don't worry about it.

Eventually you'll move up too far, and some other part of the page, or the whole page, will be highlighted. You need to find the element that makes the whole header blue, but nothing else.

Once you've found that element, right-click on it to get a context menu, and look for Delete Node (near the bottom of the menu). Clicking on that will delete the header from the page.

Repeat for any other part of the page you want to remove, like that annoying bar at the bottom right. And you're left with a nice, readable page, which will scroll properly and let you read every line, and will show you more text per page so you don't have to scroll as often.

[Article after removing the intrusive Yahoo headers]

It's a useful trick. You can also use Inspect/Delete Node for many of those popups that cover the screen telling you "subscribe to our content!" It's especially handy if you like to browse with NoScript, so you can't dismiss those popups by clicking on an X. So happy reading!

Addendum on Spacebars

By the way, in case you weren't aware that the spacebar did a page down, here's another tip that might come in useful: the spacebar also advances to the next slide in just about every presentation program, from PowerPoint to LibreOffice to most PDF viewers. I'm amazed at how often I've seen presenters squinting with a flashlight at the keyboard trying to find the right-arrow or down-arrow or page-down or whatever key they're looking for. These are all ways of advancing to the next slide, but they're all much harder to find than that great big spacebar at the bottom of the keyboard.

darktable 2.2.1 released

we're proud to announce the first bugfix release for the 2.2 series of darktable, 2.2.1!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.1.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.2.1.tar.xz
da843190f08e02df19ccbc02b9d1bef6bd242b81499494c7da2cccdc520e24fc  darktable-2.2.1.tar.xz
$ sha256sum darktable-2.2.1.3.dmg
9a86ed2cff453dfc0c979e802d5e467bc4974417ca462d6cbea1c3aa693b08de  darktable-2.2.1.3.dmg

and the changelog as compared to 2.2.0 can be found below.

New features:

  • Show a dialog window that tells when locking the database/library failed
  • Ask before deleting history stack from lighttable.
  • preferences: make features that are not available (greyed out) more obvious

Bugfixes:

  • Always clean up the undo list before entering darkroom view. Fixes a crash when using undo after re-entering the darkroom
  • Darkroom: properly delete module instances. Fixes rare crashes after deleting a second instance of a module.
  • Levels and tonecurve modules now also use 256 bins.
  • Rawoverexposed module: fix visualization when a camera custom white balance preset is used

Base Support:

  • Canon EOS M5

December 31, 2016

The top 30 Blender developers 2016

Let’s salute and applaud the most active developers for Blender of the past year again! The ranking is based on commit total, for Blender itself and all its branches.  

Obviously a commit total doesn’t mean much. Nevertheless, it’s a nice way to put the people who make Blender in the spotlight.

The number ’30’ is also arbitrary. I just had to stop adding more! Names are listed in increasing commit count order.

Special thanks to Miika Hämäläinen for making the stats listing.

Ton Roosendaal, Blender Foundation chairman.
31-12-2016

Joey Ferwerda (28)

Joey (Netherlands) worked in 2016 on adding real-time VR viewing in Blender’s viewport. This works for Oculus, with Vive support coming soon.

He currently works on OpenHMD, an open source library to support all current Head Mounted Displays.

Luca Rood (30)

Luca (Brazil) is to my knowledge the youngest on this list. At 19, he’s impressing everyone with his in-depth knowledge of simulation techniques and the courage to dive into Blender’s ancient cloth code to fix it up.

Luca currently works with a Development Fund grant on improving cloth sim, to make it usable for high quality character animation.

Gaia Clary (32)

Gaia (Germany) is the maintainer of COLLADA in Blender. Her never-ending energy to keep this working in Blender means we can keep it supported for 2.8 as well.

Martijn Berger (40)

Martijn (Netherlands) was active in 2016 as platform manager for Windows and MacOS. He helps make the releases, especially complying with the security standards for downloading binaries on Windows and MacOS.

Antonio Vazquez (41)

Antonio (Spain) joined the team to work on Grease Pencil. Based on feedback and guidance from Daniel Lara (Pepeland), he helped turn this annotation tool in Blender into a full-fledged 2D animation and animatic storyboarding tool.

Ray Molenkamp (46)

Ray (Canada) joined the team in 2016, volunteering to help out maintaining Blender for the Windows platform, supporting Microsoft’s development environment.

Alexander Gavrilov (58)

Alexander (Russia) joined the development team in 2016. He started contributing fixes for Weight Painting, and later his attention moved to Cloth and Physics simulation in general.

He is also active in the bug tracker, providing bug fixes on a regular basis.

Sybren Stüvel (59)

Sybren (Netherlands) works for Blender Institute as Cloud developer (shot management, render manager, libraries, security) and as developer for Blender pipeline features – such as Blender file manipulations, UI previews and the Pose library.

João Araújo (65)

João (Portugal) accepted a Google Summer of Code grant to work on Blender’s 3D Curve object. He added improved extrusion options and tools for Extend, Batch Extend, Trim, Offset, Chamfer and Fillet.

His project is almost ready and will be submitted for review early 2017.

Benoit Bolsee (65)

Benoit (Belgium) is a long-term contributor to Blender’s Game Engine. In 2016 he worked on the “Decklink” branch, supporting one of the industry’s best video capture cards.

Pascal Schön (78)

Pascal (Germany) joined the Cycles team this year, contributing the implementation of the Disney BSDF/BSSRDF.

This new physically based shading model  is able to reproduce a wide range of materials with only a few parameters.

Nathan Vollmer (80)

Nathan (Germany) accepted a GSoC grant to work on vertex painting and weight painting in Blender.

With the new P-BVH vertex painting we now get much improved performance, especially when painting dense meshes.

Philipp Oeser (83)

Philipp (Germany) is active in Blender’s bug tracker, providing fixes for issues in many areas of Blender.

Contributors who work on Blender’s quality this way are super important and can’t be valued enough. Kudos!

Phil Gosch (131)

Phil (Austria) accepted a GSoC grant to work on Blender’s UV Tools, especially the Pack Island tool. While a bit more computation-heavy, the solutions found by the new algorithm give much better results than the old “Pack Islands” in terms of used UV space.

Tianwei Shen (142)

Tianwei (China) accepted a GSoC grant to work on Multiview camera reconstruction. This allows film makers to retrieve more accurate camera position information from footage when one area gets shot from different positions.

His work is ready and close to being added to Blender.

Thomas Dinges (144)

Thomas (Germany) started in the UI team for the 2.5 project, but with the start of Cycles in 2011 he put all his time into helping make it even more awesome.

His main contribution this year was work on the Cycles texture system, increasing the maximum number of textures that can be used on CUDA GPUs, and lowering memory usage in many cases.

Dalai Felinto (192)

Dalai (Brazil, lives in Netherlands) added Multiview and Stereo rendering to Blender in 2015. In 2016 he contributed to making VR rendering possible in Cycles.

Dalai currently works (with Clement “PBR branch” Foucault) for Blender Institute on the Viewport 2.8 project. Check the posts on https://code.blender.org to see what’s coming.

Martin Felke (199)

Martin (Germany) deserves our respect and admiration for maintaining one of the oldest and most popular Blender branches: the “Fracture Modifier” branch.

For technical and quality reasons his work was never deemed fit for a release. But for Blender 2.8 the internal design will be updated to finally get his work released. Stay tuned!

Mai Lavelle (202)

Mai (USA) surprised everyone by falling from the sky with a patch for Cycles to support micro-polygon rendering. The skepticism of the Cycles developers quickly changed. “This is actually really good code,” said one of them, which is a huge compliment coming from coders!

She is currently working for Blender Institute on the Cycles “Split Kernel” project, especially for OpenCL GPU rendering.

Brecht Van Lommel (210)

Brecht (Belgium, lives in Spain) has worked on Blender for a decade. His most memorable contribution was the Cycles render engine (2011).

Aside from working on Cycles, Brecht is active in maintaining the MacOS version and Blender’s UI code.

Joshua Leung (264)

Joshua (New Zealand) is Blender’s animation system coder. He has contributed many new features to Blender in the past decade (including Grease Pencil).

Joshua’s highlight for 2016 was adding the “Bendy Bones”, a project that was started by Jose Molina and Daniel Lara.

Lukas Stockner (277)

Lukas (Germany) has been contributing to Cycles since 2015. He accepted a Google Summer of Code grant to work on Cycles denoising.

Lukas’ specialism is implementing math. One of his last 2016 commits was titled “Replace T-SVD algorithm with new Jacobi Eigen-decomposition solver”. Right on!

Sebastián Barschkis (300)

Sebastián (Germany) is a recurring GSoC student. He is currently working in his branch on “Manta Flow”, an improved fluid simulation library.

Mike Erwin (308)

Mike (USA) has been contracted this year by AMD to help modernize Blender’s OpenGL, and to make sure we’re Vulkan-ready in the future.

He currently works on the Blender 2.8 branch, making Blender work with OpenGL 3.2 or later.

Lukas Toenne (413)

Lukas (Germany) worked for Blender Institute on hair simulation in 2014-2015. In 2016 he went back to experimenting with node systems for objects and particles and wrote a review and proposal for how to add this to Blender.

Most of his commits were in the object-nodes branch, a project which is currently on hold, until we find more people for it.

Kévin Dietrich (516)

Kévin (France) has mainly been working on two topics in 2016. In a branch he is still working on integration of OpenVDB – tools for storage of volumetric data such as smoke.

Released in 2.78 was his work on Alembic input/output. Alembic is essential for mixed application pipelines for film and animation.

Julian Eisel (760)

Julian (Germany) not only finds usability and UI interesting topics, he also manages to untangle Blender’s code for them. He has contributed to many areas already, such as pie menus and node inserting.

His 2016 highlight is the ongoing work on Custom Manipulators – a topic for the 2.8 workflow project. Goal: bring back editing to the viewport!

Bastien Montagne (1008)

Bastien (France) has been working full-time for the Blender Foundation for many years now. He became our #1 bug tracker reviewer in the past years.

His special interest is Asset management though. He’s now an expert in Blender’s file system and works on 2.8 Asset Browsing.

Sergey Sharybin (1143)

Sergey (Russia, living in Netherlands) is on his way to becoming the #1 Blender contributor. He is best known for work on Motion tracking, Cycles rendering, OpenSubdiv and recently the Blender dependency graph.

And of course we shouldn’t forget all of his hundreds of bug fixes and patch reviews. The Blender Institute is happy to have him on board.

Campbell Barton (1156)

Campbell (Australia) surprised everyone in August with his announcement that he was stepping down from his duties at blender.org. He is taking a well-deserved break to renew his energy, and to work on other (own) projects.

He’s still Blender’s #1 committer of 2016, though. Even after his retirement he kept providing code, over 50 commits now. One of this year’s highlights was his addition of a high-quality boolean modifier to Blender.

December 29, 2016

Commercial open-source: Sentry

Commercial open-source software is usually based around some kind of asymmetry: the owner possesses something that you as a user do not, allowing them to make money off of it.

This asymmetry can take on a number of forms. One popular option is to have dual licensing: the product is open-source (usually GPL), but if you want to deviate from that, there’s the option to buy a commercial license. These projects are recognizable by the fact that they generally require you to sign a Contributor License Agreement (CLA) in which you transfer all your rights to the code over to the project owners. A very bad deal for you as a contributor (you work but get nothing in return) so I recommend against participating in those projects. But that’s a subject for a different day.

Another option for creating asymmetry is open core: make a limited version open-source and sell a full-featured version, typically named “the enterprise version”. Where you draw the line between the two versions determines how useful the project is in its open-source form versus how much potential there is to sell it. Most of the time this tends towards a completely useless open-source version, but there are exceptions (e.g. Gitlab).

These models are so prevalent that I was pleasantly surprised to see how Sentry does things: with as little asymmetry as possible. The entire product is open-source and under a very liberal license. The hosted version (the SaaS product that they sell) is claimed to be running exactly the same source code. The created value, for which you’ll want to pay, is in the belief that a) you don’t want to spend time running it yourself and b) they’ll do a better job at it than you would.

This model certainly won’t work in all contexts and it probably won’t lead to a billion dollar exit, but that doesn’t always have to be the goal.

So kudos to Sentry, they’re certainly trying to make money in the nicest way possible, without giving contributors and hobbyists a bad deal. I hope they do well.

More info on their open-source model can be read on their blog: Building an Open Source Service.



December 28, 2016

Last chance for ColorHug(1) users to get upgraded

For the early adopters of the original ColorHug I’ve been offering a service where I send all the newer parts out to people so they can retrofit their device to the latest design. This included an updated LiveCD, the large velcro elasticated strap and the custom-cut foam pad that replaced the old foam feet. In the last two years I’ve sent out over 300 free upgrades, but this has reduced to a dribble recently, as later ColorHug1s and all ColorHug2s had all the improvements and extra bits included by default. I’m going to stop this offer soon, as I need to make things simpler so I can introduce a new thing (+? :) next year. If you do need a HugStrap and gasket still, please fill in the form before the 4th January. Thanks, and Merry Christmas to all.

December 25, 2016

Photographing Farolitos (and other night scenes)

Excellent Xmas to all! We're having a white Xmas here.

Dave and I have been discussing how "Merry Christmas" isn't alliterative like "Happy Holidays". We had trouble coming up with a good C or K adjective to go with Christmas, but then we hit on the answer: Have an Excellent Xmas! It also has the advantage of inclusivity: not everyone celebrates the birth of Christ, but Xmas is a secular holiday of lights, family and gifts, open to people of all belief systems.

Meanwhile: I spent a couple of nights recently learning how to photograph Xmas lights and farolitos.

Farolitos, a New Mexico Christmas tradition, are paper bags, weighted down with sand, with a candle inside. Sounds modest, but put a row of them alongside a roadway or along the top of a typical New Mexican adobe or faux-dobe and you have a beautiful display of lights.

They're also known as luminarias in southern New Mexico, but Northern New Mexicans insist that a luminaria is a bonfire, and the little paper bag lanterns should be called farolitos. They're pretty, whatever you call them.

Locally, residents of several streets in Los Alamos and White Rock set out farolitos along their roadsides for a few nights around Christmas, and the county cooperates by turning off streetlights on those streets. The display on Los Pueblos in Los Alamos is a zoo, a slow exhaust-choked parade of cars that reminds me of the Griffith Park light show in LA. But here in White Rock the farolito displays are a lot less crowded, and this year I wanted to try photographing them.

Canon bugs affecting night photography

I have a little past experience with night photography. I went through a brief astrophotography phase in my teens (in the pre-digital phase, so I was using film and occasionally glass plates). But I haven't done much night photography for years.

That's partly because I've had problems taking night shots with my current digital SLR camera, a Rebel XSi (known outside the US as a Canon 450D). It's old and modest as DSLRs go, but I've resisted upgrading since I don't really need more features.

Except maybe when it comes to night photography. I've tried shooting star trails, lightning shots and other nocturnal time exposures, and keep hitting a snag: the camera refuses to take a photo. I'll be in Manual mode, with my aperture and shutter speed set, the lens in Manual Focus mode, and Image Stabilization turned off. Plug in the remote shutter release, push the button ... and nothing happens except a lot of motorized lens whirring noises. Which shouldn't be happening -- in MF and non-IS mode the lens should be just sitting there inert, not whirring its motors. I couldn't seem to find a way to convince it that the MF switch meant that, yes, I wanted to focus manually.

It seemed to be primarily a problem with the EF-S 18-55mm kit lens; the camera will usually condescend to take a night photo with my other two lenses. I wondered if the MF switch might be broken, but then I noticed that in some modes the camera explicitly told me I was in manual focus mode.

I was almost to the point of ordering another lens just for night shots when I finally hit upon the right search terms and found, if not the reason it's happening, at least an excellent workaround.

Back Button Focus

I'm so sad that I went so many years without knowing about Back Button Focus. It's well hidden in the menus, under Custom Functions #10.

Normally, the shutter button does a bunch of things. When you press it halfway, the camera both autofocuses (sadly, even in manual focus mode) and calculates exposure settings.

But there's a custom function that lets you separate the focus and exposure calculations. In the Custom Functions menu option #10 (the number and exact text will be different on different Canon models, but apparently most or all Canon DSLRs have this somewhere), the heading says: Shutter/AE Lock Button. Following that is a list of four obscure-looking options:

  • AF/AE lock
  • AE lock/AF
  • AF/AF lock, no AE lock
  • AE/AF, no AE lock

The text before the slash indicates what the shutter button, pressed halfway, will do in that mode; the text after the slash is what happens when you press the * or AE lock button on the upper right of the camera back (the same button you use to zoom out when reviewing pictures on the LCD screen).

The first option is the default: press the shutter button halfway to activate autofocus; the AE lock button calculates and locks exposure settings.

The second option is the revelation: pressing the shutter button halfway will calculate exposure settings, but does nothing for focus. To focus, press the * or AE button, after which focus will be locked. Pressing the shutter button won't refocus. This mode is called "Back button focus" all over the web, but not in the manual.

Back button focus is useful in all sorts of cases. For instance, if you want to autofocus once then keep the same focus for subsequent shots, it gives you a way of doing that. It also solves my night focus problem: whatever the bug is (whether in the lens or the camera) that makes the lens try to autofocus even in manual focus mode, in this mode pressing the shutter won't trigger it. The camera assumes it's in focus and goes ahead and takes the picture.

Incidentally, the other two modes in that menu apply to AI SERVO mode when you're letting the focus change constantly as it follows a moving subject. The third mode makes the * button lock focus and stop adjusting it; the fourth lets you toggle focus-adjusting on and off.

Live View Focusing

There's one other thing that's crucial for night shots: live view focusing. Since you can't use autofocus in low light, you have to do the focusing yourself. But most DSLRs' focusing screens aren't good enough that you can look through the viewfinder and get a reliable focus on a star, or even a string of holiday lights or farolitos.

Instead, press the SET button (the one in the middle of the right/left/up/down buttons) to activate Live View (you may have to enable it in the menus first). The mirror locks up and a preview of what the camera is seeing appears on the LCD. Use the zoom button (the one to the right of that */AE lock button) to zoom in; there are two levels of zoom in addition to the un-zoomed view. You can use the right/left/up/down buttons to control which part of the field the zoomed view will show. Zoom all the way in (two clicks of the + button) to fine-tune your manual focus. Press SET again to exit live view.

It's not as good as a fine-grained focusing screen, but at least it gets you close. Consider using relatively small apertures, like f/8, since they give you more latitude for focus errors. You'll be doing time exposures on a tripod anyway, so a narrow aperture just means your exposures have to be a little longer than they otherwise would have been.

After all that, my Xmas Eve farolitos photos turned out mediocre. We had a storm blowing in, so a lot of the candles had blown out. (In the photo below you can see how the light string on the left is blurred, because the tree was blowing around so much during the 30-second exposure.) But I had fun, and maybe I'll go out and try again tonight.


An excellent X-mas to you all!

Stellarium 0.15.1

Version 0.15.1 introduces a few exciting new features.
- The Digital Sky Survey (DSS) can be shown (requires online connection).
- AstroCalc is now available from the main menu and gives interesting new computational insight.
- Stellarium can act as Spout sender (important for multimedia environments; Windows only).
In addition, a lot of bugs have been fixed.
- wait() and waitFor() in the Scripting Engine no longer inhibit performance of moves.
- DE430/431 DeltaT may be OK now. We still want to test a bit more, though.
- ArchaeoLines also offers two arbitrary declination lines.
- Added support for location-dependent time zones.
- Added new skyculture: Sardinian.
- Added updates and improvements in catalogs.
- Added improvements in the GUI.
- Added cross identification data for stars from Bright Star Catalogue, 5th Revised Ed. (Hoffleit+, 1991)

A huge thanks to our community whose contributions help to make Stellarium better!

Full list of changes:
- Added code to display full-sky multi-resolution remote images projected using the TOAST format
- Added new option to the Oculars plugin: automatically set the mount type from the telescope settings, to preserve the horizontal orientation of the CCD frame
- Added new option stars/flag_forced_twinkle=(false|true) for planetariums to enable twinkling of stars without atmosphere (LP: #1616007)
- Added calculations of conjunction between major planets and deep-sky objects (AstroCalc)
- Added calculations of occultations between major planets and DSO/SSO (AstroCalc)
- Added option to toggle visibility of designations of exoplanets (esp. for exoplanetary systems)
- Added support for location-dependent time zones (LP: #505096, #1492941, #1431926, #1106753, #1099547, #1092629, #898710, #510480)
- Added option in GUI to edit colours of the markings
- Added a special case for educational purpose to drawing orbits for the Solar System Observer
- Added a new config option in the sky cultures for managing constellation boundaries
- Added support for synonyms of star names (2nd edition of the sky cultures)
- Added support for reference data for star names (2nd edition of the sky cultures)
- Added support for synonyms of DSO names (2nd edition of the sky cultures)
- Added support for native names of DSO (2nd edition of the sky cultures)
- Added support for reference data for native DSO names (2nd edition of the sky cultures)
- Added support for Spout under Windows (Stellarium is a Spout sender now)
- Added a few trojans and quasi-satellites in ssystem.ini file
- Added a virtual planet 'Earth Observer' for educational purposes (illustration for the Moon phases)
- Added orbit lines for some asteroids and comets
- Added config option to switch between styles of color of orbits (LP: #1586812)
- Added support of separate colors for orbits (LP: #1586812)
- Added function and shortcut for quick reversing direction of time
- Added custom objects to avoid empty results in the Search Tool (SIMBAD mode) and to allow adding custom markers on the sky (LP: #958218, #1485349)
- Added a tool to manage the visibility of grids/lines in the Oculars plugin
- Added the hiding of supernova remnants before their birth (LP: #1623177)
- Added markers for various poles and zenith/nadir (LP: #1629501, #1366643, #1639010)
- Added markers for equinoxes
- Added supergalactic coordinate system
- Added code for management of SkyPoints in Oculars plugin
- Added notes to the Help window
- Added commands and shortcuts for quick move to celestial poles
- Added textures for deep-sky objects
- Added distance in millions of light years to the infostring for galaxies (LP: #1637809)
- Added 2 custom declination lines to ArchaeoLines plugin
- Added a dialog with options to adjust the colors of deep-sky object markers
- Added new options to infobox
- Added calculating and displaying the Moon phases
- Added proper stopping of time dragging (LP: #1640574)
- Added time scrolling
- Added Scottish Gaelic (gd) translations for landscapes and sceneries.
- Added usage of the Qt 5 JSON parser instead of the hand-made one we used so far
- Added synonyms for DSO
- Added names consistency for DSO
- Added support for operational statuses for a few classes of artificial satellites
- Added special zoom level for Telrad to avoid undefined behaviour of zooming (LP: #1643427)
- Added new icons for Bookmarks and AstroCalc tools
- Added missing Bayer designation of 4 Aurigae (LP: #1645087)
- Added calculations of conjunctions/occultations between planets and bright stars (AstroCalc tool)
- Added 'Altitude vs Time' feature for AstroCalc tool
- Added location identification which adds canonical IAU 3-letter code to infostring and PointerCoordinates plugin
- Added elongation and phase angle to Comet and MinorPlanet infostrings
- Added cross identification data for stars from Bright Star Catalogue, 5th Revised Ed. (Hoffleit+, 1991)
- Added AstroCalc icon (LP: #1641256)
- Added new scriptable methods to CustomObjectMgr class
- Added Sardinian sky culture
- Allow changing Milky Way and zodiacal light brightness via GUI while stars are switched off (LP: #1651897)
- Fixed visual issue for AstroCalc tool
- Fixed incorrect escaping of translated strings for some languages in the Remote Control plugin (LP: #1608177)
- Fixed disappearing of star labels when the 'Navigational Stars' plugin is loaded and not activated (LP: #1608288)
- Fixed offset issue for image sensor frame in Oculars plugin (LP: #1610629)
- Fixed crash when the labels and markers box in the Sky and Viewing Options tab is ticked (LP: #1609958)
- Fixed spherical distortion with night mode (LP: #1606969)
- Fixed behaviour of buttons on Oculars on-screen control panel (LP: #1609060)
- Fixed misalignment of Veil nebula image (LP: #1610824)
- Fixed build of planetarium without scripting support (Thanks to Alexey Dokuchaev for the bug report)
- Fixed coordinates of the open cluster Melotte 227
- Fixed typos in Kamilorai skyculture
- Fixed cmake typos for scenery3d model (Thanks to Alexey Dokuchaev for the bug report)
- Fixed crash when turning off custom GRS if GRS Details dialogue has not been opened before (LP: #1620083)
- Fixed a small JD rounding issue for the DeltaT tooltip on the bottom bar
- Fixed rendering of artificial satellite orbits while changing location through the spaceship feature (LP: #1622796)
- Fixed hiding the Moon during a total solar eclipse when option 'limit magnitude' for Solar system objects is enabled
- Fixed small bug for updating catalog in the Bright Novae plugin
- Fixed displaying date and time in AstroCalc Tool (LP: #1630685)
- Fixed a typographical error in a coefficient for the precession expressions by Vondrák et al.
- Fixed crash on exit in the debug mode in StelTexture module (LP: #1632355)
- Fixed crash in the debug mode (LP: #1631910)
- Fixed coverity issues
- Fixed obvious typos
- Fixed issue for wrong calculation of exit pupil for binoculars in Oculars plugin (LP: #1632145)
- Fixed issue for Arabic translation (LP: #1635587)
- Fixed packaging on OS X for Qt 5.6.2+
- Fixed calculations of the value of the Moon secular acceleration when JPL DE43x ephemeris is used
- Fixed distances for objects of Caldwell catalogue
- Fixed calculations for transit phenomena (LP: #1641255)
- Fixed missing translations in Remote Control plug-in (LP: #1641296)
- Fixed displaying of localized projection name in Remote Control (LP: #1641297)
- Fixed searching for localized names of artificial satellites
- Fixed showing translated names of artificial satellites
- Fixed GUI Problem in Satellites plug-in (LP: #1640291)
- Fixed moon size infostring (LP: #1641755)
- Fixed AstroCalc listing transits as occultations (LP: #1641255)
- Fixed resetting of DSO Limit Magnitude by Image Sensor Frame (LP: #1641736)
- Fixed Nova Puppis 1942 coordinates
- Fixed error in light curve model of historical supernovae (LP: #1648978)
- Fixed width of search dialog for Meteor Showers plugin (LP: #1649640)
- Fixed shortcut for "Remove selection of constellations" action (LP: #1649771)
- Fixed retranslation of the names and types of meteor showers in the Meteor Showers plugin when the application language is changed
- Fixed wrong tooltip info (LP: #1650725)
- Fixed Mercury's magnitude formula (LP: #1650757)
- Fixed cross-id error for alpha2 Cen (LP: #1651414)
- Fixed core.moveToAltAzi(90,XX) issue (LP: #1068529)
- Fixed view centered on zenith (restart after save) issue (LP: #1620030)
- Updated DSO Catalog: Added full support of the UGC catalog
- Updated DSO Catalog: Added galaxies from the PGC2003 catalog (mag <=15)
- Updated DSO Catalog: Removed errors and duplicates
- Updated stars catalogues
- Updated stars names (LP: #1641455)
- Updated settings for AstroCalc
- Updated Satellites plugin
- Updated Scenery3d plugin
- Updated Stellarium User Guide
- Updated support for High DPI monitors
- Updated list of star names for Western sky culture
- Updated Search Tool (LP: #958218)
- Updated default catalogues for plugins
- Updated list of locations (2nd version of format - TZ support)
- Updated pulsars catalog & util
- Updated Scripting Engine
- Updated rules for visibility of exoplanets labels
- Updated shortcuts
- Updated info for the landscape actions (LP: #1650727)
- Updated Bengali translation of Sardinian skyculture (LP: #1650733)
- Correctly apply constellation fade duration (LP: #1612967)
- Expanded behaviour of isolated constellation selection (only for sky cultures with IAU generic boundaries) (LP: #1412595)
- Ensure stable up vector for lookZenith, lookEast etc.
- Verified star names against the official IAU star names list
- Minor improvement in ini file writing
- Use the Bortle index only after the year 1825
- Re-implement wait() and waitFor() scripting functions to avoid large delays in main thread
- Restored broken feature (hiding markers for selected planets)
- Removed color profiles from PNG files
- Removed the flips of the CCD frame, because they worked incorrectly and introduced a new bug
- Removed useless misspelled names from the list of stars
- Removed the Time Zone plug-in
- Removed useless translations in ArchaeoLines plug-in

December 24, 2016

darktable 2.2.0 released

we're proud to finally announce the new feature release of darktable, 2.2.0!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.0.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the sha256 checksum is:

3eca193831faae58200bb1cb6ef29e658bce43a81706b54420953a7c33d79377  darktable-2.2.0.tar.xz
75d5f68fec755fefe6ccc82761d379b399f9fba9581c0f4c2173f6c147a0109f  darktable-2.2.0.dmg

and the changelog as compared to 2.0.0 can be found below.

when updating from the currently stable 2.0.x series, please bear in mind that your edits will be preserved during this process, but it will not be possible to downgrade from 2.2 to 2.0.x any more.

  • Well over 2k commits since 2.0.0
  • 298 pull requests handled
  • 360+ issues closed

Gource visualization of git log from 2.0.0 to right before 2.2.0:

https://youtu.be/E2UU5x7sS3g

The Big Ones:

Quite Interesting Changes:

  • Split the database into a library containing images and a general one with styles, presets and tags. That allows access to them when, for example, running with a :memory: library
  • Support running on platforms other than x86 (64bit little-endian, currently ARM64 only) (https://www.darktable.org/2016/04/running-on-non-x86-platforms/)
  • darktable is now happy to use smaller stack sizes (no less than 256Kb). That should allow using musl libc
  • Allow darktable-cli to work on directories
  • Allow importing/exporting tags from Lightroom keyword files
  • Allow using modifier keys to modify the step for sliders and curves. Defaults: Ctrl - x0.1; Shift - x10
  • Allow using the [keyboard] cursor keys to interact with sliders, comboboxes and curves; modifiers apply too
  • Support presets in “more modules” so you can quickly switch between your favorite sets of modules shown in the GUI
  • Add range operator and date compare to the collection module
  • Add basic undo/redo support for the darkroom (masks are not accounted for!)
  • Support the Exif date and time when importing photos from camera
  • Input color profile module, when profile is just matrix (and linear curve), is 1/3 faster now.
  • Rudimentary CYGM and RGBE color filter array support
  • Nicer web gallery exporter -- now touch friendly!
  • OpenCL implementation of VNG/VNG4 demosaicing methods
  • OpenCL implementation of Markesteijn demosaicing method for X-Trans sensors
  • Filter out some useless EXIF tags when exporting; helps keep EXIF size under ~64Kb
  • OpenCL: properly discard CPU-based OpenCL devices. Fixes crashes on startup with some partially-working OpenCL implementations like pocl.
  • darktable-cli: do not even try to open display, we don't need it.
  • Hotpixels module: make it actually work for X-Trans
  • Cmstest tool should now produce correct output in more cases, especially in multi-monitor setups.
  • Darkroom histogram now uses more bins: use all 8-bit of the output, not just 6.

Some More Changes, Probably Not Complete:

  • Drop darktable-viewer tool in favor of slideshow view
  • Remove gnome keyring password backend, use libsecret instead
  • When using libsecret to store passwords, put them into the correct collection
  • Hint via window manager when import/export is done
  • Quick tagging searches anywhere, not just at the start of tags
  • The sidecar XMP schema for history entries is now more consistent and less error prone
  • Rawspeed: fixes for building with libjpeg (as opposed to libjpeg-turbo)
  • Give the choice of equidistant and proportional feathering when using elliptical masks (shift+click)
  • Add geolocation to watermark variables
  • Fix some crashes with missing configured ICC profiles
  • Support greyscale color profiles
  • Lens correction module: switched back to normal Lensfun search mode for lens lookups.
  • Make sure that proper signal handlers are still set after GM initialization...
  • OSX: add trash support (thanks to Michael Kefeder for initial patch)
  • Attach Xmp data to EXR files
  • Several fixes for HighDPI displays
  • Use Pango for text layout, thus supporting RTL languages
  • Feathering size in some mask shapes can be set with shift+scroll
  • Many bugs got fixed and some memory leaks plugged
  • The usermanual was updated to reflect the changes in the 2.2 series
  • Tone curve: new mode “automatic in XYZ” for “scale chroma”
  • Some compilation fixes

Lua specific changes:

  • All asynchronous calls have been rewritten
    • the darktable-specific implementation of yield was removed
    • darktable.control.execute allows executing shell commands without blocking Lua
    • darktable.control.read allows waiting for a file to become readable without blocking Lua
    • darktable.control.sleep allows pausing Lua execution without blocking other Lua threads
  • darktable.gui.libs.metadata_view.register_info allows adding a new field to the metadata widget in the darkroom view
  • The TextView widget can now be created in Lua, allowing input of large chunks of text
  • It is now possible to use a custom widget in the Lua preference window to configure a preference
  • It is now possible to set the precision and step on slider widgets

Changed Dependencies:

  • CMake 3.0 is now required.
  • In order to compile darktable you now need at least gcc-4.7+/clang-3.3+, but better use gcc-5.0+
  • Drop support for OS X 10.6
  • Bump required libexiv2 version up to 0.24
  • Bump GTK+ requirement to gtk-3.14. (because even Debian/stable has it)
  • Bump GLib requirement to glib-2.40.
  • Port to OpenJPEG2
  • SDL is no longer needed.
  • Remove gnome keyring password backend

Base Support:

  • Canon EOS-1D X Mark II
  • Canon EOS 5D Mark IV
  • Canon EOS 80D
  • Canon EOS 1300D
  • Canon EOS Kiss X80
  • Canon EOS Rebel T6
  • Canon EOS M10
  • Canon PowerShot A720 IS (dng)
  • Canon PowerShot G7 X Mark II
  • Canon PowerShot G9 X
  • Canon PowerShot SD450 (dng)
  • Canon PowerShot SX130 IS (dng)
  • Canon PowerShot SX260 HS (dng)
  • Canon PowerShot SX510 HS (dng)
  • Fujifilm FinePix S100FS
  • Fujifilm X-Pro2
  • Fujifilm X-T2
  • Fujifilm X70
  • Fujifilm XQ2
  • GITUP GIT2 (chdk-a, chdk-b)
  • (most nikon cameras here are just fixes, and they were supported before already)
  • Nikon 1 AW1 (12bit-compressed)
  • Nikon 1 J1 (12bit-compressed)
  • Nikon 1 J2 (12bit-compressed)
  • Nikon 1 J3 (12bit-compressed)
  • Nikon 1 J4 (12bit-compressed)
  • Nikon 1 J5 (12bit-compressed, 12bit-uncompressed)
  • Nikon 1 S1 (12bit-compressed)
  • Nikon 1 S2 (12bit-compressed)
  • Nikon 1 V1 (12bit-compressed)
  • Nikon 1 V2 (12bit-compressed)
  • Nikon 1 V3 (12bit-compressed, 12bit-uncompressed)
  • Nikon Coolpix A (14bit-compressed)
  • Nikon Coolpix P330 (12bit-compressed)
  • Nikon Coolpix P340 (12bit-compressed, 12bit-uncompressed)
  • Nikon Coolpix P6000 (12bit-uncompressed)
  • Nikon Coolpix P7000 (12bit-uncompressed)
  • Nikon Coolpix P7100 (12bit-uncompressed)
  • Nikon Coolpix P7700 (12bit-compressed)
  • Nikon Coolpix P7800 (12bit-compressed)
  • Nikon D1 (12bit-uncompressed)
  • Nikon D100 (12bit-compressed, 12bit-uncompressed)
  • Nikon D1H (12bit-compressed, 12bit-uncompressed)
  • Nikon D1X (12bit-compressed, 12bit-uncompressed)
  • Nikon D200 (12bit-compressed, 12bit-uncompressed)
  • Nikon D2H (12bit-compressed, 12bit-uncompressed)
  • Nikon D2Hs (12bit-compressed, 12bit-uncompressed)
  • Nikon D2X (12bit-compressed, 12bit-uncompressed)
  • Nikon D3 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D300 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3000 (12bit-compressed)
  • Nikon D300S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3100 (12bit-compressed)
  • Nikon D3200 (12bit-compressed)
  • Nikon D3300 (12bit-compressed, 12bit-uncompressed)
  • Nikon D3400 (12bit-compressed)
  • Nikon D3S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3X (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D4 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D40 (12bit-compressed, 12bit-uncompressed)
  • Nikon D40X (12bit-compressed, 12bit-uncompressed)
  • Nikon D4S (14bit-compressed)
  • Nikon D5 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D50 (12bit-compressed)
  • Nikon D500 (14bit-compressed, 12bit-compressed)
  • Nikon D5000 (12bit-compressed, 12bit-uncompressed)
  • Nikon D5100 (14bit-compressed, 14bit-uncompressed)
  • Nikon D5200 (14bit-compressed)
  • Nikon D5300 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
  • Nikon D5500 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
  • Nikon D60 (12bit-compressed, 12bit-uncompressed)
  • Nikon D600 (14bit-compressed, 12bit-compressed)
  • Nikon D610 (14bit-compressed, 12bit-compressed)
  • Nikon D70 (12bit-compressed)
  • Nikon D700 (12bit-compressed, 12bit-uncompressed, 14bit-compressed)
  • Nikon D7000 (14bit-compressed, 12bit-compressed)
  • Nikon D70s (12bit-compressed)
  • Nikon D7100 (14bit-compressed, 12bit-compressed)
  • Nikon D80 (12bit-compressed, 12bit-uncompressed)
  • Nikon D800 (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D800E (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D90 (12bit-compressed, 12bit-uncompressed)
  • Nikon Df (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon E5400 (12bit-uncompressed)
  • Nikon E5700 (12bit-uncompressed)
  • Olympus PEN-F
  • OnePlus One (dng)
  • Panasonic DMC-FZ150 (1:1, 16:9)
  • Panasonic DMC-FZ18 (16:9, 3:2)
  • Panasonic DMC-FZ300 (4:3)
  • Panasonic DMC-FZ50 (16:9, 3:2)
  • Panasonic DMC-G8 (4:3)
  • Panasonic DMC-G80 (4:3)
  • Panasonic DMC-G81 (4:3)
  • Panasonic DMC-G85 (4:3)
  • Panasonic DMC-GX80 (4:3)
  • Panasonic DMC-GX85 (4:3)
  • Panasonic DMC-LX3 (1:1)
  • Panasonic DMC-LX10 (3:2)
  • Panasonic DMC-LX15 (3:2)
  • Panasonic DMC-LX9 (3:2)
  • Panasonic DMC-TZ100 (3:2)
  • Panasonic DMC-TZ101 (3:2)
  • Panasonic DMC-TZ110 (3:2)
  • Panasonic DMC-ZS110 (3:2)
  • Pentax K-1
  • Pentax K-70
  • Samsung GX20 (dng)
  • Sony DSC-F828
  • Sony DSC-RX100M5
  • Sony DSC-RX10M3
  • Sony DSLR-A380
  • Sony ILCA-68
  • Sony ILCA-99M2
  • Sony ILCE-6300

We were unable to bring back these 2 cameras, because we have no samples.
If anyone reading this owns such a camera, please do consider providing samples.

  • Nikon E8400
  • Nikon E8800

White Balance Presets:

  • Canon EOS 1200D
  • Canon EOS Kiss X70
  • Canon EOS Rebel T5
  • Canon EOS 1300D
  • Canon EOS Kiss X80
  • Canon EOS Rebel T6
  • Canon EOS 5D Mark IV
  • Canon EOS 5DS
  • Canon EOS 5DS R
  • Canon EOS 750D
  • Canon EOS Kiss X8i
  • Canon EOS Rebel T6i
  • Canon EOS 760D
  • Canon EOS 8000D
  • Canon EOS Rebel T6s
  • Canon EOS 80D
  • Canon EOS M10
  • Canon EOS-1D X Mark II
  • Canon PowerShot G7 X Mark II
  • Fujifilm X-Pro2
  • Fujifilm X-T2
  • Fujifilm X-T10
  • Fujifilm X100T
  • Fujifilm X20
  • Fujifilm X70
  • Nikon 1 V3
  • Nikon D5500
  • Olympus PEN-F
  • Pentax K-1
  • Pentax K-70
  • Pentax K-S1
  • Pentax K-S2
  • Sony ILCA-68
  • Sony ILCE-6300

Noise Profiles:

  • Canon EOS 5DS R
  • Canon EOS 80D
  • Canon PowerShot G15
  • Canon PowerShot S100
  • Canon PowerShot SX100 IS
  • Canon PowerShot SX50 HS
  • Fujifilm X-T10
  • Fujifilm X-T2
  • Fujifilm X100T
  • Fujifilm X20
  • Fujifilm X70
  • Nikon 1 V3
  • Nikon D5
  • Nikon D5500
  • Olympus E-PL6
  • Olympus E-PM2
  • Olympus PEN-F
  • Panasonic DMC-FZ1000
  • Panasonic DMC-GF7
  • Pentax K-1
  • Pentax K-S2
  • Ricoh GR
  • Sony DSLR-A900
  • Sony DSC-RX10
  • Sony ILCE-6300
  • Sony NEX-5
  • Sony SLT-A37

New Translations:

  • Hebrew
  • Slovenian

Updated Translations:

  • Catalan
  • Czech
  • Danish
  • Dutch
  • French
  • German
  • Hungarian
  • Italian
  • Polish
  • Russian
  • Slovak
  • Spanish
  • Swedish
  • Ukrainian

We wish you a merry Christmas, happy Hanukkah or just a good time. Enjoy taking photos and developing them with darktable.

December 23, 2016

Release GCompris 0.70

[Screenshots of the new activities in version 0.70]

Hi,

just in time for Christmas, we are pleased to announce the new GCompris version 0.70.

It is an important release because we officially drop the Gtk+ version for Windows to use the Qt one.
Everyone who bought the full version in the last two years will get a new activation code in a few days.

Also, for people who like numbers, we are beyond 100000 downloads in the Google Play store.

This new version contains 8 new activities, half of them created by last year's Google Summer of Code students:

  • an activity where the child has to move elements to build a given model (crane by Stefan Toncu)
  • an activity to draw the numbers from 0 to 9 (drawnumbers by Nitish Chauhan)
  • an activity to draw the letters (drawletters by Nitish Chauhan)
  • an activity to find which words contain a given letter (letter-in-word by Akshat Tandon)
  • the nine men's morris game against Tux (nine_men_morris by Pulkit Gupta)
  • the nine men's morris game with a friend (nine_men_morris_2_players by Pulkit Gupta)
  • an activity to learn to split a given number of candies amongst children (share by Stefan Toncu)
  • an activity to learn roman numbers (roman_numerals by Bruno Coudoin)
We always have new features and bug fixes:

  • search feature by Rishabh Gupta
  • windows build by Bruno Coudoin and Johnny Jazeix
  • hint icon in the bar (used in photohunter) by Johnny Jazeix
  • neon build by Jonathan Riddell
  • we are now in the openSUSE Tumbleweed repository thanks to Bruno Friedmann
  • archlinux (https://aur.archlinux.org/packages/gcompris-qt/) by Jose Riha
  • package on mageia cauldron by Timothee Giet
  • word list for Slovakia by Jose Riha
  • word list for Belarusian by Antos Vaclauski
  • various updates on Romanian wordlists and voices (probably the most complete one) by Horia Pelle
  • voices added for Portuguese Brazilian by Marcos D.
  • new graphics for crane by Timothee Giet
  • screenshots on gcompris.net updated to the Qt version by Timothee and Johnny
You can find this new version here:

    Android version

    Windows 32bit or Windows 64bit version

    Linux version (64bit)

    source code

    On the translation side, we have 15 languages fully supported: Belarusian, British English, Brazilian Portuguese, Catalan, Catalan (Valencian), Dutch, Estonian, French, Italian, Polish, Portuguese, Romanian, Spanish, Swedish, Ukrainian and some partially: Breton (82%), Chinese Simplified (93%), Chinese Traditional (91%), Finnish (70%), Galician (93%), German (97%), Norwegian Nynorsk (98%), Russian (83%), Slovak (85%), Slovenian (88%), Turkish (77%).

    If you want to help, please make some posts in your community about GCompris.

    December 22, 2016

    Tips on Developing Python Projects for PyPI

    I wrote two recent articles on Python packaging: Distributing Python Packages Part I: Creating a Python Package and Distributing Python Packages Part II: Submitting to PyPI. I was able to get a couple of my programs packaged and submitted.

    Ongoing Development and Testing

    But then I realized all was not quite right. I could install new releases of my package -- but I couldn't run it from the source directory any more. How could I test changes without needing to rebuild the package for every little change I made?

    Fortunately, it turned out to be fairly easy. Set PYTHONPATH to a directory that includes all the modules you normally want to test. For example, inside my bin directory I have a python directory where I can symlink any development modules I might need:

    mkdir ~/bin/python
    ln -s ~/src/metapho/metapho ~/bin/python/
    

    Then add the directory at the beginning of PYTHONPATH:

    export PYTHONPATH=$HOME/bin/python
    

    With that, I could test from the development directory again, without needing to rebuild and install a package every time.
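
    One quick sanity check: ask Python which copy of the module it's actually importing, via the module's __file__ attribute (metapho here is the module symlinked above; use your own module name):

    import metapho
    # Should print a path under ~/bin/python, not an installed copy
    print(metapho.__file__)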

    Cleaning up files used in building

    Building a package leaves some extra files and directories around, and git status will whine at you since they're not version controlled. Of course, you could gitignore them, but it's better to clean them up after you no longer need them.

    To do that, you can add a clean command to setup.py.

    import os
    from setuptools import Command
    
    class CleanCommand(Command):
        """Custom clean command to tidy up the project root."""
        user_options = []
        def initialize_options(self):
            pass
        def finalize_options(self):
            pass
        def run(self):
            os.system('rm -vrf ./build ./dist ./*.pyc ./*.tgz ./*.egg-info ./docs/sphinxdoc/_build')
    
    (Obviously, that includes file types beyond what you need for just cleaning up after package building. Adjust the list as needed.)

    Then in the setup() function, add these lines:

          cmdclass={
              'clean': CleanCommand,
          }
    

    Now you can type

    python setup.py clean
    
    and it will remove all the extra files.
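
    If you'd rather not shell out to rm at all, a pure-Python variant using shutil and glob might look like the following sketch. This isn't the code from my setup.py, just one way to do it; adjust the pattern list for your project.

    import glob
    import os
    import shutil
    
    from setuptools import Command
    
    class PurePythonCleanCommand(Command):
        """Like CleanCommand above, but without os.system()."""
        user_options = []
        def initialize_options(self):
            pass
        def finalize_options(self):
            pass
        def run(self):
            # Same paths as the rm version; adjust for your project.
            for pattern in ['./build', './dist', './*.pyc', './*.tgz',
                            './*.egg-info', './docs/sphinxdoc/_build']:
                for path in glob.glob(pattern):
                    if os.path.isdir(path):
                        shutil.rmtree(path)
                        print("removed directory %s" % path)
                    else:
                        os.remove(path)
                        print("removed %s" % path)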

    Keeping version strings in sync

    It's so easy to update the __version__ string in your module and forget that you also have to do it in setup.py, or vice versa. Much better to make sure they're always in sync.

    I found several versions of that using system("grep..."), but I decided to write my own that doesn't depend on system(). (Yes, I should do the same thing with that CleanCommand, I know.)

    def get_version():
        '''Read the pytopo module version from pytopo/__init__.py'''
        with open("pytopo/__init__.py") as fp:
            for line in fp:
                line = line.strip()
                if line.startswith("__version__"):
                    parts = line.split("=")
                    if len(parts) > 1:
                        # Strip the surrounding quotes, so '"1.0"' becomes '1.0'
                        return parts[1].strip().strip('\'"')
    

    Then in setup():

          version=get_version(),
    

    Much better! Now you only have to update __version__ inside your module and setup.py will automatically use it.
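
    For reference, the function above expects a line like the following near the top of pytopo/__init__.py (pytopo is my module; substitute your own):

    __version__ = "1.0"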

    Using your README for a package long description

    setup has a long_description for the package, but you probably already have some sort of README in your package. You can use it for your long description this way:

    import os
    
    # Utility function to read the README file.
    # Used for the long_description.
    def read(fname):
        return open(os.path.join(os.path.dirname(__file__), fname)).read()
    
        long_description=read('README'),
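
    Putting the pieces together, the relevant part of a setup() call might look roughly like this. It's just a sketch: the name and description are placeholders, and get_version(), read() and CleanCommand are the helpers defined earlier in this article.

    from setuptools import setup
    
    setup(
        name="yourpackage",               # placeholder: your package name
        version=get_version(),            # stays in sync with __version__
        description="One-line summary",   # placeholder
        long_description=read('README'),
        cmdclass={
            'clean': CleanCommand,
        },
    )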
    

    December 21, 2016

    Casa Min-Max

    Esta casa custa R$ 55 000. Completa, com tudo. This house costs BRL 55 000. Complete, with everything. Como chegamos ate aqui? ...

    December 17, 2016

    Distributing Python Packages Part II: Submitting to PyPI

    In Part I, I discussed writing a setup.py to make a package you can submit to PyPI. Today I'll talk about better ways of testing the package, and how to submit it so other people can install it.

    Testing in a VirtualEnv

    You've verified that your package installs. But you still need to test it and make sure it works in a clean environment, without all your developer settings.

    The best way to test is to set up a "virtual environment", where you can install your test packages without messing up your regular runtime environment. I shied away from virtualenvs for a long time, but they're actually very easy to set up:

    virtualenv venv
    source venv/bin/activate
    

    That creates a directory named venv under the current directory, which it will use to install packages. Then you can pip install packagename or pip install /path/to/packagename-version.tar.gz

    Except -- hold on! Nothing in Python packaging is that easy. It turns out there are a lot of packages that won't install inside a virtualenv, and one of them is PyGTK, the library I use for my user interfaces. Attempting to install pygtk inside a venv gets:

    ********************************************************************
    * Building PyGTK using distutils is only supported on windows. *
    * To build PyGTK in a supported way, read the INSTALL file.    *
    ********************************************************************
    

    Windows only? Seriously? PyGTK works fine on both Linux and Mac; it's packaged on every Linux distribution, and on Mac it's packaged with GIMP. But for some reason, whoever maintains the PyPI PyGTK packages hasn't bothered to make it work on anything but Windows, and PyGTK seems to be mostly an orphaned project so that's not likely to change.

    (There's a package called ruamel.venvgtk that's supposed to work around this, but it didn't make any difference for me.)

    The solution is to let the virtualenv use your system-installed packages, so it can find GTK and other non-PyPI packages there:

    virtualenv --system-site-packages venv
    source venv/bin/activate
    

    I also found that if I had a ~/.local directory (where packages normally go if I use pip install --user packagename), sometimes pip would install to .local instead of the venv. I never did track down why this happened some times and not others, but when it happened, a temporary mv ~/.local ~/old.local fixed it.

    Test your Python package in the venv until everything works. When you're finished with your venv, you can run deactivate and then remove it with rm -rf venv.
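
    A minimal smoke test inside the venv, assuming your package exposes a __version__ attribute (the package name below is a placeholder), might be:

    import yourpackage              # placeholder: your package's import name
    print(yourpackage.__version__)  # should match the release you installed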

    Tag it on GitHub

    Is your project ready to publish?

    If your project is hosted on GitHub, you can have pypi download it automatically. In your setup.py, set

    download_url='https://github.com/user/package/tarball/tagname',
    

    Check that in. Then make a tag and push it:

    git tag 0.1 -m "Name for this tag"
    git push --tags origin master
    

    Try to make your tag match the version you've set in setup.py and in your module.

    Push it to pypitest

    Register a new account and password on both pypitest and on pypi.

    Then create a ~/.pypirc that looks like this:

    [distutils]
    index-servers =
      pypi
      pypitest
    
    [pypi]
    repository=https://pypi.python.org/pypi
    username=YOUR_USERNAME
    password=YOUR_PASSWORD
    
    [pypitest]
    repository=https://testpypi.python.org/pypi
    username=YOUR_USERNAME
    password=YOUR_PASSWORD
    

    Yes, those passwords are in cleartext. Incredibly, there doesn't seem to be a way to store an encrypted password or even have it prompt you. There are tons of complaints about that all over the web but nobody seems to have a solution. You can specify a password on the command line, but that's not much better. So use a password you don't use anywhere else and don't mind too much if someone guesses.

    Update: Apparently there's a newer method called twine that solves the password encryption problem. Read about it here: Uploading your project to PyPI. You should probably use twine instead of the setup.py commands discussed in the next paragraph.

    Now register your project and upload it:

    python setup.py register -r pypitest
    python setup.py sdist upload -r pypitest
    

    Wait a few minutes: it takes pypitest a little while before new packages become available. Then go to your venv (to be safe, maybe delete the old venv and create a new one, or at least pip uninstall) and try installing:

    pip install -i https://testpypi.python.org/pypi YourPackageName
    

    If you get "No matching distribution found for packagename", wait a few minutes then try again.

    If it all works, then you're ready to submit to the real pypi:

    python setup.py register -r pypi
    python setup.py sdist upload -r pypi
    

    Congratulations! If you've gone through all these steps, you've uploaded a package to pypi. Pat yourself on the back and go tell everybody they can pip install your package.

    Some useful reading

    Some pages I found useful:

    A great tutorial except that it forgets to mention signing up for an account: Python Packaging with GitHub

    Another good tutorial: First time with PyPI

    Allowed PyPI classifiers -- the categories your project fits into. Unfortunately there aren't very many of those, so you'll probably be stuck with 'Topic :: Utilities' and not much else.

    Python Packages and You: not a tutorial, but a lot of good advice on style and designing good packages.

    Call to translators

    We plan to release Stellarium 0.15.1 in the next week or two.

    This is a bugfix release, and it introduces a few important new features from the upcoming version 1.0. Currently translators can improve the translation of version 0.15.0 and fix some mistakes in translations. If you can assist with translation into any of the 136 languages which Stellarium supports, please go to Launchpad Translations and help us out: https://translations.launchpad.net/stellarium

    If required, we can postpone the release by a few days.

    Thank you!

    December 15, 2016

    Making your own retro keyboard

    We're about a week before Christmas, and I'm going to explain how I created a retro keyboard as a gift to my father, who introduced me to computers when he brought a Thomson TO7 home, all the way back in 1985.

    The original idea was to fit a smaller computer, such as a CHIP or Raspberry Pi, inside a Thomson computer, but the software update support would have been difficult, the use limited to the builtin programs, and it would have required a separate screen. So I restricted myself to only making a keyboard. It was a big enough task, as we'll see.

    How do keyboards work?

    Loads of switches, that's how. I'll point you to Michał Trybus' blog post « How to make a keyboard - the matrix » for details on how this works. You'll just need to remember that most of the keyboards in those older computers have no support for xKRO, and that the micro-controller we'll be using already has the necessary pull-up resistors builtin.
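
    To make the matrix idea concrete, here's a tiny illustrative sketch in Python rather than in real firmware C: drive_row() and read_columns() are hypothetical stand-ins for the actual GPIO operations, and each key ends up identified by its (row, column) pair.

    NUM_ROWS, NUM_COLS = 8, 8
    
    def scan_matrix(drive_row, read_columns):
        """Return the set of (row, col) pairs whose switches are closed."""
        pressed = set()
        for row in range(NUM_ROWS):
            drive_row(row)             # energize one row line at a time
            cols = read_columns()      # read all column lines as a bitmask
            for col in range(NUM_COLS):
                if cols & (1 << col):  # bit set: the key at (row, col) is down
                    pressed.add((row, col))
        return pressed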

    The keyboard hardware

    I chose the smallest Thomson computer available for my project, the MO5. I could have used a stand-alone keyboard, but I would have lost all the charm of it (it just looks like a PC keyboard), and some other computers have much bigger form factors, since they include cartridge, cassette or floppy disk readers.

    The DCMoto emulator's website includes tons of documentation, including technical documentation explaining the inner workings of each one of the chipsets on the mainboard. In one of those manuals, you'll find this page:



    Whoot! The keyboard matrix in detail; no need for us to discover it with a multimeter.

    That needs a wash in soapy water

    After opening up the computer and giving the internals a good clean (especially the keyboard, if it has mechanical keys), we'll need to see how the keyboard is connected.

    Finicky metal covered plastic

    Those keyboards usually are membrane keyboards, with pressure pads, so we'll need to either find replacement connectors at our local electronics store, or desolder the ones on the motherboard. I chose the latter option.

    Desoldered connectors

    After matching the physical connectors to the rows and columns in the matrix, using a multimeter and a few key presses, we now know which connector pin corresponds to which row or column of the matrix. We can start soldering.

    The micro-controller

    The micro-controller in my case is a Teensy 2.0, an Atmel AVR-based micro-controller with a very useful firmware that makes it very very difficult to brick. You can either press the little button on the board itself to upload new firmware, or wire it to an external momentary switch. The funny thing is that the Atmega32U4 is 16 times faster than the original CPU (yeah, we're getting old).

    I chose to wire it to the "Initial. Prog" ("Reset") button on the keyboard, so as to make it easy to upload new firmware. To do this, I needed to cut a few traces coming out of the physical switch on the board, using a tile cutter, to avoid interference from other components on the board. This is completely optional, and if you're only going to use firmware that you already know at least somewhat works, you can define a key combination in the firmware to enter firmware upload mode. We'll get back to that later.

    As far as connecting and soldering to the pins goes, we can use any I/O pins we want, except D6, which is connected to the board's LED. Note that if you deviate from the pinout used in your firmware, you'll need to change the firmware to match. We'll come back to that again in a minute.

    The soldering

    Colorful tinning

    I wanted to keep the external ports in place, so it didn't look like there were holes in the case, and there was enough headroom inside the case to fit the original board, the Teensy and pins on the board. That makes it easy to rewire in case of error. You could also dremel (yes, used as a verb) a hole in the board.

    As always, make sure early on that things fit, especially the cables!

    The unnecessary pollution

    The firmware

    Fairly early on during my research, I found the TMK keyboard firmware, as well as a very well written forum post with detailed explanations on how to modify an existing firmware for your own uses.

    This is what I used to modify the firmware for the gh60 keyboard for my own use. You can see here a step-by-step example, implementing the modifications in the same order as the forum post.

    Once you've followed the steps, you'll need to compile the firmware. Fedora ships with the necessary packages, so it's as simple as:


    sudo dnf install -y avr-libc avr-binutils avr-gcc

    I also compiled and installed in my $PATH the teensy_loader_cli firmware uploader, and fixed up the udev rules. And after a "make teensy" and a button press...

    It worked first time! This is a good time to verify that all the keys work, and you don't see doubled-up letters because of short circuits in your setup. I had 2 wires touching, and one column that just didn't work.

    I also prepared a stand-alone repository, with a firmware that uses the tmk_core from the tmk firmware, instead of modifying an existing one.

    Some advice

    This isn't the first time I've hacked on hardware, but I'll repeat some old adages and advice, because I rarely heed those warnings, and I regret it...
    • Don't forget the size, length and non-flexibility of cables in your design
    • Plan ahead when you're going to cut or otherwise modify hardware, because you might regret it later
    • Use breadboard cables and pins to connect things, if you have the room
    • Don't hotglue until you've tested and retested and are sure you're not going to make more modifications
    That last one explains the slightly funny cabling of my keyboard.

    Finishing touches

    All Sugru'ed up

    To finish things off nicely, I used Sugru to fix the USB cable coming out of the machine in place. As before, this avoids having an opening onto the internals.

    There are a couple more things that I'll need to finish up before delivery. First, the keymap I have chosen in the firmware only works when a US keymap is selected. I'll need to make a keymap for Linux, possibly hard-coding it. I will also need to create a Windows keymap for my father to use (yep, genealogy software on Linux isn't quite up-to-par).

    Prototype and final hardware

    All this will happen in the aforementioned repository. And if you ever make your own keyboard, I'm happy to merge in changes to this repository with documentation for your Speccy, C64, or Amstrad CPC hacks.

    (If somebody wants to buy me a Sega keyboard, I'll gladly work on a non-destructive adapter. Get in touch :)

    December 12, 2016

    security things in Linux v4.9

    Previously: v4.8.

    Here are a bunch of security things I’m excited about in the newly released Linux v4.9:

    Latent Entropy GCC plugin

    Building on her earlier work to bring GCC plugin support to the Linux kernel, Emese Revfy ported PaX’s Latent Entropy GCC plugin to upstream. This plugin is significantly more complex than the others that have already been ported, and performs extensive instrumentation of functions marked with __latent_entropy. These functions have their branches and loops adjusted to mix random values (selected at build time) into a global entropy gathering variable. Since the branch and loop ordering is very specific to boot conditions, CPU quirks, memory layout, etc, this provides some additional uncertainty to the kernel’s entropy pool. Since the entropy actually gathered is hard to measure, no entropy is “credited”, but rather used to mix the existing pool further. Probably the best place to enable this plugin is on small devices without other strong sources of entropy.

    vmapped kernel stack and thread_info relocation on x86

    Normally, kernel stacks are mapped together in memory. This meant that attackers could use forms of stack exhaustion (or stack buffer overflows) to reach past the end of a stack and start writing over another process’s stack. This is bad, and one way to stop it is to provide guard pages between stacks, which is provided by vmalloced memory. Andy Lutomirski did a bunch of work to move to vmapped kernel stack via CONFIG_VMAP_STACK on x86_64. Now when writing past the end of the stack, the kernel will immediately fault instead of just continuing to blindly write.

    Related to this, the kernel was storing thread_info (which contained sensitive values like addr_limit) at the bottom of the kernel stack, which was an easy target for attackers to hit. Between a combination of explicitly moving targets out of thread_info, removing needless fields, and entirely moving thread_info off the stack, Andy Lutomirski and Linus Torvalds created CONFIG_THREAD_INFO_IN_TASK for x86.

    CONFIG_DEBUG_RODATA mandatory on arm64

    As recently done for x86, Mark Rutland made CONFIG_DEBUG_RODATA mandatory on arm64. This feature controls whether the kernel enforces proper memory protections on its own memory regions (code memory is executable and read-only, read-only data is actually read-only and non-executable, and writable data is non-executable). This protection is a fundamental security primitive for kernel self-protection, so there’s no reason to make the protection optional.

    random_page() cleanup

    Cleaning up the code around the userspace ASLR implementations makes them easier to reason about. This has been happening for things like the recent consolidation on arch_mmap_rnd() for ET_DYN and during the addition of the entropy sysctl. Both uncovered some awkward uses of get_random_int() (or similar) in and around arch_mmap_rnd() (which is used for mmap (and therefore shared library) and PIE ASLR), as well as in randomize_stack_top() (which is used for stack ASLR). Jason Cooper cleaned things up further by doing away with randomize_range() entirely and replacing it with the saner random_page(), making the per-architecture arch_randomize_brk() (responsible for brk ASLR) much easier to understand.

    That’s it for now! Let me know if there are other fun things to call attention to in v4.9.

    © 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

    Logitech Unifying Hardware Required

    Does anyone have a spare Logitech Unifying dongle I can borrow? I specifically need the newer Texas Instruments version, rather than the older Nordic version.

    You can tell if it’s the version I need by looking at the etching on the metal USB plug, if it says U0008 above the CE marking then it’s the one I’m looking for. I’m based in London, UK if that matters. Thanks!

    darktable 2.2.0rc3 released

    we're proud to announce the fourth release candidate of darktable 2.2.0, with some fixes over the previous release candidate.

    the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.0rc3.

    as always, please don't use the tarball autogenerated by github, but only our .tar.xz with the following sha256sum:

    f7b9e8f5f56b2a52a4fa51e085b8aefe016ab08daf7b4a6ebf3af3464b1d2c29  darktable-2.2.0~rc3.tar.xz
    86293aded568903eba3b225d680ff06bc29ea2ed678de05a0fd568aed93a0587  darktable-2.2.0.rc3.3.g9af0d4fcb.dmg

    the changelog vs. the stable 2.0.x series is below:

    • Well over 2k commits since 2.0.0

    The Big Ones:

    Quite Interesting Changes:

    • Split the database into a library containing images and a general one with styles, presets and tags. That allows access to them when, for example, running with a :memory: library
    • Support running on platforms other than x86 (64bit little-endian, currently ARM64 only) (https://www.darktable.org/2016/04/running-on-non-x86-platforms/)
    • darktable is now happy to use smaller stack sizes (no less than 256Kb). That should allow using musl libc
    • Allow darktable-cli to work on directories
    • Allow importing/exporting tags from Lightroom keyword files
    • Allow using modifier keys to modify the step for sliders and curves. Defaults: Ctrl - x0.1; Shift - x10
    • Allow using the [keyboard] cursor keys to interact with sliders, comboboxes and curves; modifiers apply too
    • Support presets in "more modules" so you can quickly switch between your favorite sets of modules shown in the GUI
    • Add range operator and date compare to the collection module
    • Add basic undo/redo support for the darkroom (masks are not accounted for!)
    • Support the Exif date and time when importing photos from camera
    • Input color profile module, when profile is just matrix (and linear curve), is 1/3 faster now.
    • Rudimentary CYGM and RGBE color filter array support
    • Nicer web gallery exporter -- now touch friendly!
    • OpenCL implementation of VNG/VNG4 demosaicing methods
    • OpenCL implementation of Markesteijn demosaicing method for X-Trans sensors
    • Filter out some useless EXIF tags when exporting; helps keep EXIF size under ~64Kb
    • OpenCL: properly discard CPU-based OpenCL devices. Fixes crashes on startup with some partially-working OpenCL implementations like pocl.
    • darktable-cli: do not even try to open display, we don't need it.
    • Hotpixels module: make it actually work for X-Trans
    • Cmstest tool should now produce correct output in more cases, especially in multi-monitor setups.
    • Darkroom histogram now uses more bins: use all 8-bit of the output, not just 6.

    Some More Changes, Probably Not Complete:

    • Drop darktable-viewer tool in favor of slideshow view
    • Remove gnome keyring password backend, use libsecret instead
    • When using libsecret to store passwords, put them into the correct collection
    • Hint via window manager when import/export is done
    • Quick tagging searches anywhere, not just at the start of tags
    • The sidecar XMP schema for history entries is now more consistent and less error prone
    • Rawspeed: fixes for building with libjpeg (as opposed to libjpeg-turbo)
    • Give the choice of equidistant and proportional feathering when using elliptical masks (shift+click)
    • Add geolocation to watermark variables
    • Fix some crashes with missing configured ICC profiles
    • Support greyscale color profiles
    • Make sure that proper signal handlers are still set after GM initialization...
    • OSX: add trash support (thanks to Michael Kefeder for initial patch)
    • Attach Xmp data to EXR files
    • Several fixes for HighDPI displays
    • Use Pango for text layout, thus supporting RTL languages
    • Feathering size in some mask shapes can be set with shift+scroll
    • Many bugs got fixed and some memory leaks plugged
    • The usermanual was updated to reflect the changes in the 2.2 series
    • Tone curve: new mode "automatic in XYZ" for "scale chroma"
    • Some compilation fixes

    Lua specific changes:

    • All asynchronous calls have been rewritten
    • The darktable-specific implementation of yield was removed
    • darktable.control.execute allows executing shell commands without blocking Lua
    • darktable.control.read allows waiting for a file to become readable without blocking Lua
    • darktable.control.sleep allows pausing Lua execution without blocking other Lua threads
    • darktable.gui.libs.metadata_view.register_info allows adding a new field to the metadata widget in the darkroom view
    • The TextView widget can now be created in Lua, allowing input of large chunks of text
    • It is now possible to use a custom widget in the Lua preference window to configure a preference
    • It is now possible to set the precision and step on slider widgets

    Changed Dependencies:

    • CMake 3.0 is now required.
    • In order to compile darktable you now need at least gcc-4.7+/clang-3.3+, but better use gcc-5.0+
    • Drop support for OS X 10.6
    • Bump required libexiv2 version up to 0.24
    • Bump GTK+ requirement to gtk-3.14. (because even debian stable has it)
    • Bump GLib requirement to glib-2.40.
    • Port to OpenJPEG2
    • SDL is no longer needed.

    Base Support:

    • Canon EOS-1D X Mark II
    • Canon EOS 5D Mark IV
    • Canon EOS 80D
    • Canon EOS 1300D
    • Canon EOS Kiss X80
    • Canon EOS Rebel T6
    • Canon EOS M10
    • Canon PowerShot A720 IS (dng)
    • Canon PowerShot G7 X Mark II
    • Canon PowerShot G9 X
    • Canon PowerShot SD450 (dng)
    • Canon PowerShot SX130 IS (dng)
    • Canon PowerShot SX260 HS (dng)
    • Canon PowerShot SX510 HS (dng)
    • Fujifilm FinePix S100FS
    • Fujifilm X-Pro2
    • Fujifilm X-T2
    • Fujifilm X70
    • Fujifilm XQ2
    • GITUP GIT2 (chdk-a, chdk-b)
    • (most nikon cameras here are just fixes, and they were supported before already)
    • Nikon 1 AW1 (12bit-compressed)
    • Nikon 1 J1 (12bit-compressed)
    • Nikon 1 J2 (12bit-compressed)
    • Nikon 1 J3 (12bit-compressed)
    • Nikon 1 J4 (12bit-compressed)
    • Nikon 1 J5 (12bit-compressed, 12bit-uncompressed)
    • Nikon 1 S1 (12bit-compressed)
    • Nikon 1 S2 (12bit-compressed)
    • Nikon 1 V1 (12bit-compressed)
    • Nikon 1 V2 (12bit-compressed)
    • Nikon 1 V3 (12bit-compressed, 12bit-uncompressed)
    • Nikon Coolpix A (14bit-compressed)
    • Nikon Coolpix P330 (12bit-compressed)
    • Nikon Coolpix P340 (12bit-compressed, 12bit-uncompressed)
    • Nikon Coolpix P6000 (12bit-uncompressed)
    • Nikon Coolpix P7000 (12bit-uncompressed)
    • Nikon Coolpix P7100 (12bit-uncompressed)
    • Nikon Coolpix P7700 (12bit-compressed)
    • Nikon Coolpix P7800 (12bit-compressed)
    • Nikon D1 (12bit-uncompressed)
    • Nikon D100 (12bit-compressed, 12bit-uncompressed)
    • Nikon D1H (12bit-compressed, 12bit-uncompressed)
    • Nikon D1X (12bit-compressed, 12bit-uncompressed)
    • Nikon D200 (12bit-compressed, 12bit-uncompressed)
    • Nikon D2H (12bit-compressed, 12bit-uncompressed)
    • Nikon D2Hs (12bit-compressed, 12bit-uncompressed)
    • Nikon D2X (12bit-compressed, 12bit-uncompressed)
    • Nikon D3 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D300 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D3000 (12bit-compressed)
    • Nikon D300S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D3100 (12bit-compressed)
    • Nikon D3200 (12bit-compressed)
    • Nikon D3300 (12bit-compressed, 12bit-uncompressed)
    • Nikon D3400 (12bit-compressed)
    • Nikon D3S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D3X (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D4 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D40 (12bit-compressed, 12bit-uncompressed)
    • Nikon D40X (12bit-compressed, 12bit-uncompressed)
    • Nikon D4S (14bit-compressed)
    • Nikon D5 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D50 (12bit-compressed)
    • Nikon D500 (14bit-compressed, 12bit-compressed)
    • Nikon D5000 (12bit-compressed, 12bit-uncompressed)
    • Nikon D5100 (14bit-compressed, 14bit-uncompressed)
    • Nikon D5200 (14bit-compressed)
    • Nikon D5300 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
    • Nikon D5500 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
    • Nikon D60 (12bit-compressed, 12bit-uncompressed)
    • Nikon D600 (14bit-compressed, 12bit-compressed)
    • Nikon D610 (14bit-compressed, 12bit-compressed)
    • Nikon D70 (12bit-compressed)
    • Nikon D700 (12bit-compressed, 12bit-uncompressed, 14bit-compressed)
    • Nikon D7000 (14bit-compressed, 12bit-compressed)
    • Nikon D70s (12bit-compressed)
    • Nikon D7100 (14bit-compressed, 12bit-compressed)
    • Nikon D80 (12bit-compressed, 12bit-uncompressed)
    • Nikon D800 (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D800E (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D90 (12bit-compressed, 12bit-uncompressed)
    • Nikon Df (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon E5400 (12bit-uncompressed)
    • Nikon E5700 (12bit-uncompressed)
    • Olympus PEN-F
    • OnePlus One (dng)
    • Panasonic DMC-FZ150 (1:1, 16:9)
    • Panasonic DMC-FZ18 (16:9, 3:2)
    • Panasonic DMC-FZ300 (4:3)
    • Panasonic DMC-FZ50 (16:9, 3:2)
    • Panasonic DMC-G8 (4:3)
    • Panasonic DMC-G80 (4:3)
    • Panasonic DMC-G81 (4:3)
    • Panasonic DMC-G85 (4:3)
    • Panasonic DMC-GX80 (4:3)
    • Panasonic DMC-GX85 (4:3)
    • Panasonic DMC-LX3 (1:1)
    • Panasonic DMC-LX10 (3:2)
    • Panasonic DMC-LX15 (3:2)
    • Panasonic DMC-LX9 (3:2)
    • Panasonic DMC-TZ100 (3:2)
    • Panasonic DMC-TZ101 (3:2)
    • Panasonic DMC-TZ110 (3:2)
    • Panasonic DMC-ZS110 (3:2)
    • Pentax K-1
    • Pentax K-70
    • Samsung GX20 (dng)
    • Sony DSC-F828
    • Sony DSC-RX100M5
    • Sony DSC-RX10M3
    • Sony DSLR-A380
    • Sony ILCA-68
    • Sony ILCA-99M2
    • Sony ILCE-6300

    We were unable to bring back these 2 cameras, because we have no samples.
    If anyone reading this owns such a camera, please do consider providing samples.

    • Nikon E8400
    • Nikon E8800

    White Balance Presets:

    • Canon EOS 1200D
    • Canon EOS Kiss X70
    • Canon EOS Rebel T5
    • Canon EOS 1300D
    • Canon EOS Kiss X80
    • Canon EOS Rebel T6
    • Canon EOS 5D Mark IV
    • Canon EOS 5DS
    • Canon EOS 5DS R
    • Canon EOS 750D
    • Canon EOS Kiss X8i
    • Canon EOS Rebel T6i
    • Canon EOS 760D
    • Canon EOS 8000D
    • Canon EOS Rebel T6s
    • Canon EOS 80D
    • Canon EOS M10
    • Canon EOS-1D X Mark II
    • Canon PowerShot G7 X Mark II
    • Fujifilm X-Pro2
    • Fujifilm X-T2
    • Fujifilm X-T10
    • Fujifilm X100T
    • Fujifilm X20
    • Fujifilm X70
    • Nikon 1 V3
    • Nikon D5500
    • Olympus PEN-F
    • Pentax K-1
    • Pentax K-70
    • Pentax K-S1
    • Pentax K-S2
    • Sony ILCA-68
    • Sony ILCE-6300

    Noise Profiles:

    • Canon EOS 5DS R
    • Canon EOS 80D
    • Canon PowerShot G15
    • Canon PowerShot S100
    • Canon PowerShot SX100 IS
    • Canon PowerShot SX50 HS
    • Fujifilm X-T10
    • Fujifilm X-T2
    • Fujifilm X100T
    • Fujifilm X20
    • Fujifilm X70
    • Nikon 1 V3
    • Nikon D5
    • Nikon D5500
    • Olympus E-PL6
    • Olympus E-PM2
    • Olympus PEN-F
    • Panasonic DMC-FZ1000
    • Panasonic DMC-GF7
    • Pentax K-1
    • Pentax K-S2
    • Ricoh GR
    • Sony DSLR-A900
    • Sony DSC-RX10
    • Sony ILCE-6300
    • Sony NEX-5
    • Sony SLT-A37

    New Translations:

    • Hebrew
    • Slovenian

    Updated Translations:

    • Catalan
    • Czech
    • Danish
    • Dutch
    • French
    • German
    • Hungarian
    • Polish
    • Russian
    • Slovak
    • Spanish
    • Swedish

    December 11, 2016

    Distributing Python Packages Part I: Creating a Python Package

    I write lots of Python scripts that I think would be useful to other people, but I've put off learning how to submit to the Python Package Index, PyPI, so that my packages can be installed using pip install.

    Now that I've finally done it, I see why I put it off for so long. Unlike programming in Python, packaging is a huge, poorly documented hassle, and it took me days to get a working package. Maybe some of the hints here will help other struggling Pythonistas.

    Create a setup.py

    The setup.py file is the file that describes the files in your project and other installation information. If you've never created a setup.py before, Submitting a Python package with GitHub and PyPI has a decent example, and you can find lots more good examples with a web search for "setup.py", so I'll skip the basics and just mention some of the parts that weren't straightforward.

    Distutils vs. Setuptools

    There's one confusing point that no one seems to mention: setup.py examples all rely on a predefined function called setup, but some examples start with

    from distutils.core import setup
    
    while others start with
    from setuptools import setup
    

    In other words, there are two different versions of setup! What's the difference? I still have no idea. The setuptools version seems to be a bit more advanced, and I found that with distutils.core, I'd sometimes get weird errors when trying to follow suggestions I found on the web. So I ended up using the setuptools version.

    But I didn't initially have setuptools installed (it's not part of the standard Python distribution), so I installed it from the Debian package:

    apt-get install python-setuptools python-wheel
    

    The python-wheel package isn't strictly needed, but I found I got assorted warnings from pip install later in the process ("Cannot build wheel") unless I installed it, so I recommend you install it from the start.
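
    For reference, here is a minimal setuptools-based setup.py, roughly the shape of what I ended up with. Treat it as a sketch: the package name, metadata, and URL are placeholders you'd replace with your own.

    from setuptools import setup

    setup(
        name="yourpackage",        # placeholder: the name users will pip install
        version="0.1",
        description="One-line description of what the package does",
        author="Your Name",
        author_email="you@example.com",
        url="https://example.com/yourpackage",   # placeholder project URL
        packages=["yourpackage"],  # directories containing an __init__.py
        license="GPLv2+",
    )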

    Including scripts

    setup.py has a scripts option to include scripts that are part of your package:

        scripts=['script1', 'script2'],
    

    But when I tried to use it, I had all sorts of problems, starting with scripts not actually being included in the source distribution. There isn't much support for using scripts -- it turns out you're actually supposed to use something called console_scripts, which is more elaborate.

    First, you can't have a separate script file, or even a __main__ inside an existing class file. You must have a function, typically called main(), so your script will look something like this:

    def main():
        # do your script stuff
    
    if __name__ == "__main__":
        main()
    

    Then add something like this to your setup.py:

          entry_points={
              'console_scripts': [
                  'script1=yourpackage.filename:main',
                  'script2=yourpackage.filename2:main'
              ]
          },
    

    There's a secret undocumented alternative that a few people use for scripts with graphical user interfaces: use 'gui_scripts' rather than 'console_scripts'. It seems to work when I try it, but the fact that it's not documented and none of the Python experts even seem to know about it scared me off, and I stuck with 'console_scripts'.

    Including data files

    One of my packages, pytopo, has a couple of files it needs to install, like an icon image. setup.py has a provision for that:

          data_files=[('/usr/share/pixmaps',      ["resources/appname.png"]),
                      ('/usr/share/applications', ["resources/appname.desktop"]),
                      ('/usr/share/appname',      ["resources/pin.png"]),
                     ],
    

    Great -- except it doesn't work. None of the files actually gets added to the source distribution.

    One solution people mention to a "files not getting added" problem is to create an explicit MANIFEST file listing all files that need to be in the distribution. Normally, setup generates the MANIFEST automatically, but apparently it isn't smart enough to notice data_files and include those in its generated MANIFEST.

    I tried creating a MANIFEST listing all the .py files plus the various resources -- but it didn't make any difference. My MANIFEST was ignored.

    The solution turned out to be creating a MANIFEST.in file, which is used to generate a MANIFEST. It's easier than creating the MANIFEST itself: you don't have to list every file, just patterns that describe them:

    include setup.py
    include packagename/*.py
    include resources/*
    
    If you have any scripts that don't use the extension .py, don't forget to include them as well. This may have been why scripts= didn't work for me earlier, but by the time I found out about MANIFEST.in I had already switched to using console_scripts.

    Testing setup.py

    Once you have a setup.py, use it to generate a source distribution with:

    python setup.py sdist
    
    (You can also use bdist to generate a binary distribution, but you'll probably only need that if you're compiling C as part of your package. Source dists are apparently enough for pure Python packages.)

    Your package will end up in dist/packagename-version.tar.gz so you can use tar tf dist/packagename-version.tar.gz to verify what files are in it. Work on your setup.py until you don't get any errors or warnings and the list of files looks right.

    Congratulations -- you've made a Python package! I'll post a followup article in a day or two about more ways of testing, and how to submit your working package to PyPI.

    Update: Part II is up: Distributing Python Packages Part II: Submitting to PyPI.

    December 08, 2016

    Fedora Design Interns Update

    Fedora Design Team Logo

    I wanted to give you an update on the status of the Fedora Design team’s interns. We currently have two interns on our team:

    Flock 2016 Logo

    Mary Shakshober – (IRC: mshakshober) Mary started her internship full time this summer and amongst other things designed the beautiful, Polish folk art-inspired Flock 2016 logo. She’s currently working limited hours as the school year is back in swing at UNH, but she is still working on design team tickets, including new Fedora booth material designs and a template for Fedora’s logic model.

    Suzanne Hillman – (IRC: shillman) Suzanne just started her Outreachy internship with us two days ago. She has been working on UX design research for a new Fedora Hubs feature – Regional Hubs. She’s already had some interviews with Fedora folks who’ve been involved in organizing regional Fedora events, and we’ll be using an affinity mapping exercise along with Matthew Miller to analyze the data she’s collected.

    If you see Mary or Suzanne around, please say hi! 🙂

    December 05, 2016

    Welcome Digital Painters



    You mean there's art outside photography?

    Yes, there really is art outside photography. :)

    The history and evolution of painting have undergone a transformation similar to most things adapting to a digital age. As photographers, we adapted techniques and tools commonly used in the darkroom to software, and found new ways to extend what was possible to help us achieve a vision. Just as we tried to adapt our skills to a new environment, so too did traditional artists, like painters.

    Pat David Painting by Gustavo Deveze My headshot, as painted by Gustavo Deveze

    These artists adapted by not only emulating the results of various techniques, but by pushing forward the boundaries of what was possible through these new (Free Software) tools.

    Impetus

    Digital painting with Free Software lacks a good outlet for collaboration, one that can open the discussion up for others to learn from and participate in. This is similar to the situation the Free Software + photography world was in, which prompted the creation of pixls.us.

    Due to this, both Americo Gobbo and Elle Stone reached out to us to see if we could create a new category in the community about Digital Painting with a focus on promoting serious discussion around techniques, processes, and associated tools.

    Both of them have been working hard on advancing the capabilities and quality of various Free Software tools for years now. Americo brings with him the interest of other painters who want to help accelerate the growth and adoption of Free Software projects for painting (and more) in a high-quality and professional capacity. A little background about them:

    Americo Gobbo studied Fine Arts in Bologna, Italy. Today he lives and works in Brazil, where he continues his studies and experiments with painting and drawing, mainly in the digital medium, in which he tries to replicate traditional effects and techniques from the real world in the virtual one.

    Imaginary Landscape Painting by Americo Gobbo Imaginary Landscape - Wet sketches, experiments on GIMP 2.9.+
    Americo Gobbo, 2016.

    Elle Stone is an amateur photographer with a long-standing interest in the history of photography and print making, and in combining painting and photography. She’s been contributing to GIMP development since 2012, mostly in the areas of color management and proper color mixing and blending.

    Leaves in May Image by Elle Stone Leaves in May, GIMP-2.9 (GIMP-CCE)
    Elle Stone, 2016.

    Artists

    With this introductory post to the new Digital Painting category forum we feature Gustavo Deveze, a visual artist using free software. Deveze's work is characterized by mixing different media and techniques. In future posts we want to continue featuring artists using free software.

    Gustavo Deveze

    Gustavo Deveze is a visual artist and lives in Buenos Aires. He trained as a draftsman at the National School of Fine Arts “Manuel Belgrano”, and filmmaker at IDAC - Instituto de Arte Cinematográfica in Avellaneda, Argentina.

    His works utilize different materials and supports, and he has been published by several publishers, although in recent years he has worked mainly in digital format and with free software. He has participated in national and international shows and exhibitions of graphics and cinema, winning many awards. His latest exposition can be seen on issuu.com: https://issuu.com/gustavodeveze/docs/inadecuado2edicion

    Website: http://www.deveze.com.ar

    Cudgels and Bootlickers: The Emperor's happiness - Gustavo Deveze Cudgels and Bootlickers: The Emperor’s happiness - Gustavo Deveze.
    Let's be clear: the village's idiot is not tall... - Gustavo Deveze Let’s be clear: the village’s idiot is not tall… - Gustavo Deveze.

    Digital Painting Category

    The new Digital Painting category is for discussing painting techniques, processes, and associated tools in a digital environment using Free/Libre software. Some relevant topics might include:

    • Emulating non-digital art, drawing on diverse historical and cultural genres and styles of art.

    • Emulating traditional “wet darkroom” photography, drawing on the rich history of photographic and printmaking techniques.

    • Exploring ways of making images that were difficult or impossible before the advent of new algorithms and fast computers to run them on, including averaging over large collections of images.

    • Discussion of topics that transcend “just photography” or “just painting”, such as composition, creating a sense of volume or distance, depicting or emphasizing light and shadow, color mixing, color management, and so forth.

    • Combining painting and photography: Long before digital image editing artists already used photographs as aids to and part of making paintings and illustrations, and photographers incorporated painting techniques into their photographic processing and printmaking.

    • An important goal is also to encourage artists to submit tutorials and videos about Digital Painting with Free Software and to also submit high-quality finished works.

    Say Hello!

    Please feel free to stop into the new Digital Painting category, introduce yourself, and say hello! I look forward to seeing what our fellow artists are up to.

    All images not otherwise specified are licensed CC-BY-NC-SA

    December 04, 2016

    darktable 2.2.0rc2 released

    we're proud to announce the third release candidate of darktable 2.2.0, with some fixes over the previous release candidate.

    the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.0rc2.

    as always, please don't use the tarball autogenerated by github, but only our .tar.xz with the following sha256sum:

    f3ed739f79858a1ce2b3746bbab11994f5fb38db6e96941d84ba475beab890a6  darktable-2.2.0.rc2.tar.xz
    5d91cfd1622fb82e8f59db912e8b784a36b83f4a06d179e906f437104edc96f1  darktable-2.2.0.rc2.39.g684e8af41.dmg
    

    the changelog vs. the stable 2.0.x series is below:

    • Well over 2k commits since 2.0.0

    The Big Ones:

    Quite Interesting Changes:

    • Split the database into a library containing images and a general one with styles, presets and tags. That allows having access to those when, for example, running with a :memory: library
    • Support running on platforms other than x86 (64bit little-endian, currently ARM64 only) (https://www.darktable.org/2016/04/running-on-non-x86-platforms/)
    • darktable is now happy to use smaller stack sizes (no less than 256Kb). That should allow using musl libc
    • Allow darktable-cli to work on directories
    • Allow importing/exporting tags from Lightroom keyword files
    • Allow using modifier keys to modify the step for sliders and curves. Defaults: Ctrl - x0.1; Shift - x10
    • Allow using the [keyboard] cursor keys to interact with sliders, comboboxes and curves; modifiers apply too
    • Support presets in "more modules" so you can quickly switch between your favorite sets of modules shown in the GUI
    • Add range operator and date compare to the collection module
    • Add basic undo/redo support for the darkroom (masks are not accounted for!)
    • Support the Exif date and time when importing photos from camera
    • Input color profile module, when profile is just matrix (and linear curve), is 1/3 faster now.
    • Rudimentary CYGM and RGBE color filter array support
    • Nicer web gallery exporter -- now touch friendly!
    • OpenCL implementation of VNG/VNG4 demosaicing methods
    • OpenCL implementation of Markesteijn demosaicing method for X-Trans sensors
    • Filter out some useless EXIF tags when exporting; helps keep EXIF size under ~64Kb
    • OpenCL: properly discard CPU-based OpenCL devices. Fixes crashes on startup with some partially-working OpenCL implementations like pocl.
    • darktable-cli: do not even try to open display, we don't need it.
    • Hotpixels module: make it actually work for X-Trans
    • Cmstest tool should now produce correct output in more cases, especially in multi-monitor setups.
    • Darkroom histogram now uses more bins: all 8 bits of the output, not just 6

    Some More Changes, Probably Not Complete:

    • Drop darktable-viewer tool in favor of slideshow view
    • Remove gnome keyring password backend, use libsecret instead
    • When using libsecret to store passwords then put them into the correct collection
    • Hint via window manager when import/export is done
    • Quick tagging searches anywhere, not just at the start of tags
    • The sidecar XMP schema for history entries is now more consistent and less error prone
    • Rawspeed: fixes for building with libjpeg (as opposed to libjpeg-turbo)
    • Give the choice of equidistant and proportional feathering when using elliptical masks (shift+click)
    • Add geolocation to watermark variables
    • Fix some crashes with missing configured ICC profiles
    • Support greyscale color profiles
    • Make sure that proper signal handlers are still set after GM initialization...
    • OSX: add trash support (thanks to Michael Kefeder for initial patch)
    • Attach Xmp data to EXR files
    • Several fixes for HighDPI displays
    • Use Pango for text layout, thus supporting RTL languages
    • Feathering size in some mask shapes can be set with shift+scroll
    • Many bugs got fixed and some memory leaks plugged
    • The usermanual was updated to reflect the changes in the 2.2 series
    • Tone curve: new "automatic in XYZ" mode for "scale chroma"
    • Some compilation fixes

    Lua specific changes:

    • All asynchronous calls have been rewritten
    • The darktable-specific implementation of yield was removed
    • darktable.control.execute allows executing shell commands without blocking Lua
    • darktable.control.read allows waiting for a file to be readable without blocking Lua
    • darktable.control.sleep allows pausing Lua execution without blocking other Lua threads
    • darktable.gui.libs.metadata_view.register_info allows adding new fields to the metadata widget in the darkroom view
    • The TextView widget can now be created in Lua, allowing input of large chunks of text
    • It is now possible to use a custom widget in the Lua preference window to configure a preference
    • It is now possible to set the precision and step on slider widgets

    Changed Dependencies:

    • CMake 3.0 is now required.
    • In order to compile darktable you now need at least gcc-4.7+/clang-3.3+, but better use gcc-5.0+
    • Drop support for OS X 10.6
    • Bump required libexiv2 version up to 0.24
    • Bump GTK+ requirement to gtk-3.14. (because even Debian stable has it)
    • Bump GLib requirement to glib-2.40.
    • Port to OpenJPEG2
    • SDL is no longer needed.

    Base Support:

    • Canon EOS-1D X Mark II
    • Canon EOS 5D Mark IV
    • Canon EOS 80D
    • Canon EOS 1300D
    • Canon EOS Kiss X80
    • Canon EOS Rebel T6
    • Canon EOS M10
    • Canon PowerShot A720 IS (dng)
    • Canon PowerShot G7 X Mark II
    • Canon PowerShot G9 X
    • Canon PowerShot SD450 (dng)
    • Canon PowerShot SX130 IS (dng)
    • Canon PowerShot SX260 HS (dng)
    • Canon PowerShot SX510 HS (dng)
    • Fujifilm FinePix S100FS
    • Fujifilm X-Pro2
    • Fujifilm X-T2
    • Fujifilm X70
    • Fujifilm XQ2
    • GITUP GIT2 (chdk-a, chdk-b)
    • (most Nikon cameras listed here are just fixes; they were already supported)
    • Nikon 1 AW1 (12bit-compressed)
    • Nikon 1 J1 (12bit-compressed)
    • Nikon 1 J2 (12bit-compressed)
    • Nikon 1 J3 (12bit-compressed)
    • Nikon 1 J4 (12bit-compressed)
    • Nikon 1 J5 (12bit-compressed, 12bit-uncompressed)
    • Nikon 1 S1 (12bit-compressed)
    • Nikon 1 S2 (12bit-compressed)
    • Nikon 1 V1 (12bit-compressed)
    • Nikon 1 V2 (12bit-compressed)
    • Nikon 1 V3 (12bit-compressed, 12bit-uncompressed)
    • Nikon Coolpix A (14bit-compressed)
    • Nikon Coolpix P330 (12bit-compressed)
    • Nikon Coolpix P340 (12bit-compressed, 12bit-uncompressed)
    • Nikon Coolpix P6000 (12bit-uncompressed)
    • Nikon Coolpix P7000 (12bit-uncompressed)
    • Nikon Coolpix P7100 (12bit-uncompressed)
    • Nikon Coolpix P7700 (12bit-compressed)
    • Nikon Coolpix P7800 (12bit-compressed)
    • Nikon D1 (12bit-uncompressed)
    • Nikon D100 (12bit-compressed, 12bit-uncompressed)
    • Nikon D1H (12bit-compressed, 12bit-uncompressed)
    • Nikon D1X (12bit-compressed, 12bit-uncompressed)
    • Nikon D200 (12bit-compressed, 12bit-uncompressed)
    • Nikon D2H (12bit-compressed, 12bit-uncompressed)
    • Nikon D2Hs (12bit-compressed, 12bit-uncompressed)
    • Nikon D2X (12bit-compressed, 12bit-uncompressed)
    • Nikon D3 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D300 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D3000 (12bit-compressed)
    • Nikon D300S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D3100 (12bit-compressed)
    • Nikon D3200 (12bit-compressed)
    • Nikon D3300 (12bit-compressed, 12bit-uncompressed)
    • Nikon D3400 (12bit-compressed)
    • Nikon D3S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D3X (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D4 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D40 (12bit-compressed, 12bit-uncompressed)
    • Nikon D40X (12bit-compressed, 12bit-uncompressed)
    • Nikon D4S (14bit-compressed)
    • Nikon D5 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D50 (12bit-compressed)
    • Nikon D500 (14bit-compressed, 12bit-compressed)
    • Nikon D5000 (12bit-compressed, 12bit-uncompressed)
    • Nikon D5100 (14bit-compressed, 14bit-uncompressed)
    • Nikon D5200 (14bit-compressed)
    • Nikon D5300 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
    • Nikon D5500 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
    • Nikon D60 (12bit-compressed, 12bit-uncompressed)
    • Nikon D600 (14bit-compressed, 12bit-compressed)
    • Nikon D610 (14bit-compressed, 12bit-compressed)
    • Nikon D70 (12bit-compressed)
    • Nikon D700 (12bit-compressed, 12bit-uncompressed, 14bit-compressed)
    • Nikon D7000 (14bit-compressed, 12bit-compressed)
    • Nikon D70s (12bit-compressed)
    • Nikon D7100 (14bit-compressed, 12bit-compressed)
    • Nikon D80 (12bit-compressed, 12bit-uncompressed)
    • Nikon D800 (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D800E (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D90 (12bit-compressed, 12bit-uncompressed)
    • Nikon Df (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon E5400 (12bit-uncompressed)
    • Nikon E5700 (12bit-uncompressed)
    • Olympus PEN-F
    • OnePlus One (dng)
    • Panasonic DMC-FZ150 (1:1, 16:9)
    • Panasonic DMC-FZ18 (16:9, 3:2)
    • Panasonic DMC-FZ300 (4:3)
    • Panasonic DMC-FZ50 (16:9, 3:2)
    • Panasonic DMC-G8 (4:3)
    • Panasonic DMC-G80 (4:3)
    • Panasonic DMC-GX80 (4:3)
    • Panasonic DMC-GX85 (4:3)
    • Panasonic DMC-LX3 (1:1)
    • Panasonic DMC-LX10 (3:2)
    • Panasonic DMC-LX15 (3:2)
    • Panasonic DMC-LX9 (3:2)
    • Pentax K-1
    • Pentax K-70
    • Samsung GX20 (dng)
    • Sony DSC-F828
    • Sony DSC-RX10M3
    • Sony DSLR-A380
    • Sony ILCA-68
    • Sony ILCE-6300

    We were unable to bring back these 2 cameras, because we have no samples.
    If anyone reading this owns such a camera, please do consider providing samples.

    • Nikon E8400
    • Nikon E8800

    White Balance Presets:

    • Canon EOS 1200D
    • Canon EOS Kiss X70
    • Canon EOS Rebel T5
    • Canon EOS 1300D
    • Canon EOS Kiss X80
    • Canon EOS Rebel T6
    • Canon EOS 5D Mark IV
    • Canon EOS 5DS
    • Canon EOS 5DS R
    • Canon EOS 750D
    • Canon EOS Kiss X8i
    • Canon EOS Rebel T6i
    • Canon EOS 760D
    • Canon EOS 8000D
    • Canon EOS Rebel T6s
    • Canon EOS 80D
    • Canon EOS M10
    • Canon EOS-1D X Mark II
    • Canon PowerShot G7 X Mark II
    • Fujifilm X-Pro2
    • Fujifilm X-T2
    • Fujifilm X-T10
    • Fujifilm X100T
    • Fujifilm X20
    • Fujifilm X70
    • Nikon 1 V3
    • Nikon D5500
    • Olympus PEN-F
    • Pentax K-1
    • Pentax K-70
    • Pentax K-S1
    • Pentax K-S2
    • Sony ILCA-68
    • Sony ILCE-6300

    Noise Profiles:

    • Canon EOS 5DS R
    • Canon EOS 80D
    • Canon PowerShot G15
    • Canon PowerShot S100
    • Canon PowerShot SX100 IS
    • Canon PowerShot SX50 HS
    • Fujifilm X-T10
    • Fujifilm X-T2
    • Fujifilm X100T
    • Fujifilm X20
    • Fujifilm X70
    • Nikon 1 V3
    • Nikon D5
    • Nikon D5500
    • Olympus E-PL6
    • Olympus E-PM2
    • Olympus PEN-F
    • Panasonic DMC-FZ1000
    • Panasonic DMC-GF7
    • Pentax K-1
    • Pentax K-S2
    • Ricoh GR
    • Sony DSLR-A900
    • Sony DSC-RX10
    • Sony ILCE-6300
    • Sony NEX-5
    • Sony SLT-A37

    New Translations:

    • Hebrew
    • Slovenian

    Updated Translations:

    • Catalan
    • Czech
    • Danish
    • Dutch
    • French
    • German
    • Hungarian
    • Polish
    • Russian
    • Slovak
    • Spanish
    • Swedish

    November 28, 2016

    A Masashi Wakui look with GIMP



    A color bloom fit for night urban landscapes

    This tutorial explains how to achieve an effect based on the post processing of photographer Masashi Wakui. His primary subjects are urban landscape views of Japan, where he uses some pretty aggressive color toning to complement his scenes, along with a soft ‘bloom’ effect on the highlights. The results evoke a strong feeling of an almost cyberpunk or futuristic aesthetic (particularly for fans of Blade Runner or Akira!).

    Untitled Untitled Untitled

    This tutorial started its life in the pixls.us forum, inspired by a post seeking assistance in replicating the color grading and overall look and feel of Masashi's photography.

    Prerequisites

    Following along will require a couple of plugins for GIMP.

    The Luminosity Mask filter will be used to target color grading to specific tones. You can find out more about luminosity masks in GIMP at Pat David’s blog post and his follow-up blog post. If you need to install the script, directions can be found (along with the scripts) at the PIXLS.US GIMP scripts git repository.

    You will also need the Wavelet decompose plugin. The easiest way to get this plugin is to use the one available in G’MIC. As a bonus you’ll get access to many other incredible filters as well! Once you’ve installed G’MIC the filter can be found under
    Details → Split details [wavelets].

    We will do some basic toning and then apply GIMP's wavelet decompose filter to do some magic. Two things will be used from the wavelet decompose results:

    • the residual
    • the coarsest wavelet scale (number 8 in this case)

    The basic idea is to use the residual of the wavelet decompose filter to color the image. What this does is average and blur the colors, which strengthens the effect of the surroundings being colored by the lights. The number of wavelet scales to use depends on the pixel size of the picture; the relative size of the coarsest wavelet scale compared to the picture is the defining parameter. Wavelet scale 8 will then produce overemphasized local contrasts, which accentuate the lights further. This works nicely in pictures with lights, since the brightest areas will be around the lights. Used on daytime pictures, this effect will also accentuate brighter areas, which leads to a kind of “glow” effect. I tried this as well, and it looks good on some pictures while on others it just looks wrong. Try it!
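
    If you'd rather script the toning than click through it, here is a rough Python-Fu console sketch of the duplicate-tone-mask part (steps 2-4 below). It's only a sketch: it assumes GIMP 2.8+, an already-open image, and that the luminosity masks script has created a channel named "DD"; adjust the curve points and channel name to your setup.

    from gimpfu import *   # already implicit in the Python-Fu console

    image = gimp.image_list()[0]       # the picture you are working on
    base = image.active_layer

    # step 2: duplicate the base picture
    toned = base.copy()
    image.add_layer(toned, 0)

    # step 3: lower the reds in the shadows with a spline curve
    # (control points are x,y byte pairs: shadows left, highlights right)
    pdb.gimp_curves_spline(toned, HISTOGRAM_RED, 6, [0, 0, 64, 48, 255, 255])

    # step 4: add a layer mask from the "DD" luminosity mask channel
    dd = pdb.gimp_image_get_channel_by_name(image, "DD")
    image.active_channel = dd
    mask = toned.create_mask(ADD_CHANNEL_MASK)
    toned.add_mask(mask)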

    We will be applying all the following steps to this picture, taken in Akihabara, Tokyo.

    The unaltered photograph The starting image (download full resolution).
    1. Apply the luminosity mask filter to the base picture. We will use this later.

      Filters → Generic → Luminosity Masks

    2. Duplicate the base picture (Ctrl+Shift+D).

      Layer → Duplicate Layer

    3. Tone the shadows of the duplicated picture by lowering the reds with the tone curve. If you want your shadows to be less green, slightly raise the blues in the shadows as well.

      Colors → Curves

      The toning curves
      The photograph with the toning curve applied
    4. Apply a layer mask to the duplicated and toned picture, initializing it from a channel: choose the DD luminosity mask.

      Layer → Mask → Add Layer Mask

      Luminosity Mask Added
    5. With both layers visible, create a new layer from what is visible. Call this layer the “blended” layer.

      Layer → New from Visible

      The photograph after the blended layer
    6. Apply the wavelet decompose filter to the “blended” layer and choose 9 as the number of detail scales. Set the G’MIC output mode to “New layer(s)” (see below).

      Filters → G’MIC
      Details → Split Details [wavelets]

      G'MIC Split Details Wavelet Decompose dialog Remember to set G’MIC to output the results on New Layer(s).
    7. Make the blended and blended [residual] layers visible. Then set the mode of the blended [residual] layer to color. This will give you a picture with averaged, blurred colors.

      The fully colored photograph
    8. Turn the opacity of the blended [residual] down to 70%, or any other value to your taste, to bring back some color detail.

      The partially colored photograph
    9. Turn on the blended [scale #8] layer, set the mode to grain merge, and see how the lights start shining. Adjust opacity to taste.

      The augmented contrast layer
    10. Optional: Turn on wavelet scale 3 (or any other) to sharpen the picture, and blend to taste.

    11. Make sure the following layers are visible:

      • blended
      • blended [residual]
      • blended [scale #8]
      • any other wavelet scale you want to use for sharpening
    12. Make a new layer from visible

      Layer → New from Visible

    13. Raise and slightly crush the shadows using the tone curve.

      Raise the shadow curve
    14. Optional: Adjust saturation to taste. If there are predominantly white lights and the colors come mainly from other objects, the residual will be washed out, as is the case with this picture.

      I noticed that the reds and yellows were very dominant compared to the greens and blues. So, using the Hue-Saturation dialog, I raised the master saturation by +70, lowered the yellow saturation by -50, and lowered the red saturation by -40, all using an overlap of 60.

    The final result:

    The final image! The final result. (Click to compare to original.)
    Download the full size result.

    Linux communities, we need your help!

    There are a lot of Linux communities all over the globe filled with really nice people who just want to help others. Typically these people either can't code or don't feel comfortable doing so, and I'd love to harness some of that potential by adding a huge number of new application reviews to the ODRS. At the moment we have about 1100 reviews, mostly covering the more popular applications, and also mostly written in English.

    What I would love is for a few groups of people to come together for their next LUG/outreach/InstallFest and sit down together somewhere cozy and write a few reviews. Bonus points if you use a less-well-known application, and even more points if you can write in a language other than English. Submitting a review is easy; just open up GNOME Software, find the application, and click ‘Write a Review‘ at the bottom of the page.

    Application reviews help new users decide what to install, and the star ratings you give mean we can return useful search results full of great applications. Please write an email, ask about helping the ODRS, and perhaps you can help a lot of new users the next time you meet with your Linuxy friends.

    Thanks!

    November 26, 2016

    FreeCAD Arch development news

    It's been quite some time since I last wrote about Arch development, so here goes a little overview of what's been going on during the last weeks. As always, I'll be describing mostly what I've been doing myself, but many other people are working very actively on FreeCAD too, so much more is going on. The best...

    November 24, 2016

    Watching org.libelektra with Qt

    libelektra is a configuration library and tool set. It provides a great many capabilities. Here I’d like to show how to observe data model changes from key/value manipulations outside of the actual application, inside a user desktop. libelektra broadcasts changes as D-Bus messages. The Oyranos projects will use this method to sync the settings views of GUIs like qcmsevents, Synnefo and KDE’s KolorManager with libOyranos and its CLI tools in the next release.

    Here is a small example that connects the org.libelektra interface, via the QDBusConnection class, to a class callback function:

    Declare a callback function in your Qt class header:

    public slots:
     void configChanged( QString msg );

    Add the QtDBus API in your sources:

    #include <QtDBus/QtDBus>

    Wire the org.libelektra interface to your callback in, e.g., your Qt class's constructor:

    if( QDBusConnection::sessionBus().connect( QString(), "/org/libelektra/configuration", "org.libelektra", QString(),
     this, SLOT( configChanged( QString ) )) )
     fprintf(stderr, "=================== Done connect\n" );

    The org.libelektra signals then arrive in your callback:

    void Synnefo::configChanged( QString msg )
    {
     fprintf( stdout, "config changed: %s\n", msg.toLocal8Bit().data() );
    };

    As the number of messages is not always known, it is useful to take the first message as a ping and update after a small timeout. Here is a more practical example:

    // init a gate keeper in the class constructor:
    acceptDBusUpdate = true;
    
    void Synnefo::configChanged( QString msg )
    {
      // allow the first message to ping
      if(acceptDBusUpdate == false) return;
      // block more messages
      acceptDBusUpdate = false;
    
      // update the view slightly later and avoid trouble
      QTimer::singleShot(250, this, SLOT( update() ));
    };
    
    void Synnefo::update()
    {
      // clear the Oyranos settings cache (Oyranos CMS specific)
      oyGetPersistentStrings( NULL );
    
      // the data model reading from libelektra and GUI update
      // code ...
    
      // open the door for more messages to come
      acceptDBusUpdate = true;
    }

    The above code works for both Qt4 and Qt5.
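
    For comparison, the same signal can be watched from Python. This is a minimal sketch assuming the dbus-python and PyGObject packages are installed; it listens on the same path and interface as the Qt code above:

    import dbus
    from dbus.mainloop.glib import DBusGMainLoop
    from gi.repository import GLib

    def config_changed(msg):
        # all org.libelektra signals on this path arrive here
        print("config changed: %s" % msg)

    # hook D-Bus into the GLib main loop before connecting
    DBusGMainLoop(set_as_default=True)
    bus = dbus.SessionBus()
    bus.add_signal_receiver(config_changed,
                            dbus_interface="org.libelektra",
                            path="/org/libelektra/configuration")

    GLib.MainLoop().run()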

    String freeze for the upcoming 2.2 series

    This is a call to all our translators: now is the time to bring your .po file in the master branch up to date. We will not ship any translation that is not relatively complete; the exact threshold is still to be determined.

    As a quick reminder, these are the steps to update the translation if you are working from git. language_code is not the whole filename of the .po file but just the first part of it. For example, for Italian the language code is it while the filename is it.po. You also have to compile darktable before updating your .po file, as some of the translated files are auto-generated.

    cd /path/to/your/darktable/checkout/
    git checkout master
    git pull
    ./build.sh
    cd po/
    intltool-update <language_code>
    <edit language_code.po>

    If you don't have a build environment set up to compile darktable you can also use this .pot file.

    November 23, 2016

    darktable 2.2.0rc1 released

    we're proud to announce the second release candidate of darktable 2.2.0, with some fixes over the previous release candidate. the most important one might be bringing back read support for very old xmp files (~4 years).

    the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.0rc1.

    as always, please don't use the tarball autogenerated by github, but only our .tar.xz with the following sha256sum:

    0612163b0020bc3326909f6d7f7cbd8cfb5cff59b8e0ed1a9e2a2aa17d8f308e  darktable-2.2.0~rc1.tar.xz
    

    the changelog vs. the stable 2.0.x series is below:

    • Well over 2k commits since 2.0.0

    The Big Ones:

    Quite Interesting Changes:

    • Split the database into a library containing images and a general one with styles, presets and tags. That allows having access to those when, for example, running with a :memory: library
    • Support running on platforms other than x86 (64bit little-endian, currently ARM64 only) (https://www.darktable.org/2016/04/running-on-non-x86-platforms/)
    • darktable is now happy to use smaller stack sizes (no less than 256Kb). That should allow using musl libc
    • Allow darktable-cli to work on directories
    • Allow importing/exporting tags from Lightroom keyword files
    • Allow using modifier keys to modify the step for sliders and curves. Defaults: Ctrl - x0.1; Shift - x10
    • Allow using the [keyboard] cursor keys to interact with sliders, comboboxes and curves; modifiers apply too
    • Support presets in "more modules" so you can quickly switch between your favorite sets of modules shown in the GUI
    • Add range operator and date compare to the collection module
    • Add basic undo/redo support for the darkroom (masks are not accounted for!)
    • Support the Exif date and time when importing photos from camera
    • Input color profile module, when profile is just matrix (and linear curve), is 1/3 faster now.
    • Rudimentary CYGM and RGBE color filter array support
    • Nicer web gallery exporter -- now touch friendly!
    • OpenCL implementation of VNG/VNG4 demosaicing methods
    • OpenCL implementation of Markesteijn demosaicing method for X-Trans sensors
    • Filter out some useless EXIF tags when exporting; helps keep EXIF size under ~64Kb
    • OpenCL: properly discard CPU-based OpenCL devices. Fixes crashes on startup with some partially-working OpenCL implementations like pocl.
    • darktable-cli: do not even try to open display, we don't need it.
    • Hotpixels module: make it actually work for X-Trans

    Some More Changes, Probably Not Complete:

    • Drop darktable-viewer tool in favor of slideshow view
    • Remove gnome keyring password backend, use libsecret instead
    • When using libsecret to store passwords then put them into the correct collection
    • Hint via window manager when import/export is done
    • Quick tagging searches anywhere, not just at the start of tags
    • The sidecar XMP schema for history entries is now more consistent and less error prone
    • Rawspeed: fixes for building with libjpeg (as opposed to libjpeg-turbo)
    • Give the choice of equidistant and proportional feathering when using elliptical masks (shift+click)
    • Add geolocation to watermark variables
    • Fix some crashes with missing configured ICC profiles
    • Support greyscale color profiles
    • OSX: add trash support (thanks to Michael Kefeder for initial patch)
    • Attach Xmp data to EXR files
    • Several fixes for HighDPI displays
    • Use Pango for text layout, thus supporting RTL languages
    • Feathering size in some mask shapes can be set with shift+scroll
    • Many bugs got fixed and some memory leaks plugged
    • The usermanual was updated to reflect the changes in the 2.2 series

    Changed Dependencies:

    • CMake 3.0 is now required.
    • In order to compile darktable you now need at least gcc-4.7+/clang-3.3+, but better use gcc-5.0+
    • Drop support for OS X 10.6
    • Bump required libexiv2 version up to 0.24
    • Bump GTK+ requirement to gtk-3.14. (because even Debian stable has it)
    • Bump GLib requirement to glib-2.40.
    • Port to OpenJPEG2
    • SDL is no longer needed.

    A special note to all the darktable Fedora users: Fedora-provided darktable packages are intentionally built with Lua disabled. Thus, Lua scripting will not work. This breaks e.g. darktable-gimp integration. Please bug Fedora. In the meantime you could fix that by self-compiling darktable (pass -DDONT_USE_INTERNAL_LUA=OFF to cmake in order to enable use of the bundled Lua 5.2.4).

    Base Support:

    • Canon EOS-1D X Mark II
    • Canon EOS 5D Mark IV
    • Canon EOS 80D
    • Canon EOS 1300D
    • Canon EOS Kiss X80
    • Canon EOS Rebel T6
    • Canon EOS M10
    • Canon PowerShot A720 IS (dng)
    • Canon PowerShot G7 X Mark II
    • Canon PowerShot G9 X
    • Canon PowerShot SD450 (dng)
    • Canon PowerShot SX130 IS (dng)
    • Canon PowerShot SX260 HS (dng)
    • Canon PowerShot SX510 HS (dng)
    • Fujifilm FinePix S100FS
    • Fujifilm X-Pro2
    • Fujifilm X-T2
    • Fujifilm X70
    • Fujifilm XQ2
    • GITUP GIT2 (chdk-a, chdk-b)
    • (most Nikon cameras listed here are just fixes; they were already supported)
    • Nikon 1 AW1 (12bit-compressed)
    • Nikon 1 J1 (12bit-compressed)
    • Nikon 1 J2 (12bit-compressed)
    • Nikon 1 J3 (12bit-compressed)
    • Nikon 1 J4 (12bit-compressed)
    • Nikon 1 J5 (12bit-compressed, 12bit-uncompressed)
    • Nikon 1 S1 (12bit-compressed)
    • Nikon 1 S2 (12bit-compressed)
    • Nikon 1 V1 (12bit-compressed)
    • Nikon 1 V2 (12bit-compressed)
    • Nikon 1 V3 (12bit-compressed, 12bit-uncompressed)
    • Nikon Coolpix A (14bit-compressed)
    • Nikon Coolpix P330 (12bit-compressed)
    • Nikon Coolpix P340 (12bit-compressed, 12bit-uncompressed)
    • Nikon Coolpix P6000 (12bit-uncompressed)
    • Nikon Coolpix P7000 (12bit-uncompressed)
    • Nikon Coolpix P7100 (12bit-uncompressed)
    • Nikon Coolpix P7700 (12bit-compressed)
    • Nikon Coolpix P7800 (12bit-compressed)
    • Nikon D1 (12bit-uncompressed)
    • Nikon D100 (12bit-compressed, 12bit-uncompressed)
    • Nikon D1H (12bit-compressed, 12bit-uncompressed)
    • Nikon D1X (12bit-compressed, 12bit-uncompressed)
    • Nikon D200 (12bit-compressed, 12bit-uncompressed)
    • Nikon D2H (12bit-compressed, 12bit-uncompressed)
    • Nikon D2Hs (12bit-compressed, 12bit-uncompressed)
    • Nikon D2X (12bit-compressed, 12bit-uncompressed)
    • Nikon D3 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D300 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D3000 (12bit-compressed)
    • Nikon D300S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D3100 (12bit-compressed)
    • Nikon D3200 (12bit-compressed)
    • Nikon D3300 (12bit-compressed, 12bit-uncompressed)
    • Nikon D3400 (12bit-compressed)
    • Nikon D3S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D3X (14bit-compressed, 14bit-uncompressed)
    • Nikon D4 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D40 (12bit-compressed, 12bit-uncompressed)
    • Nikon D40X (12bit-compressed, 12bit-uncompressed)
    • Nikon D4S (14bit-compressed)
    • Nikon D5 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D50 (12bit-compressed)
    • Nikon D500 (14bit-compressed, 12bit-compressed)
    • Nikon D5000 (12bit-compressed, 12bit-uncompressed)
    • Nikon D5100 (14bit-compressed, 14bit-uncompressed)
    • Nikon D5200 (14bit-compressed)
    • Nikon D5300 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
    • Nikon D5500 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
    • Nikon D60 (12bit-compressed, 12bit-uncompressed)
    • Nikon D600 (14bit-compressed, 12bit-compressed)
    • Nikon D610 (14bit-compressed, 12bit-compressed)
    • Nikon D70 (12bit-compressed)
    • Nikon D700 (12bit-compressed, 12bit-uncompressed, 14bit-compressed)
    • Nikon D7000 (14bit-compressed, 12bit-compressed)
    • Nikon D70s (12bit-compressed)
    • Nikon D7100 (14bit-compressed, 12bit-compressed)
    • Nikon D80 (12bit-compressed, 12bit-uncompressed)
    • Nikon D800 (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D800E (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D90 (12bit-compressed, 12bit-uncompressed)
    • Nikon Df (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon E5400 (12bit-uncompressed)
    • Nikon E5700 (12bit-uncompressed)
    • Olympus PEN-F
    • OnePlus One (dng)
    • Panasonic DMC-FZ150 (1:1, 16:9)
    • Panasonic DMC-FZ18 (16:9, 3:2)
    • Panasonic DMC-FZ300 (4:3)
    • Panasonic DMC-FZ50 (16:9, 3:2)
    • Panasonic DMC-G8 (4:3)
    • Panasonic DMC-G80 (4:3)
    • Panasonic DMC-GX80 (4:3)
    • Panasonic DMC-GX85 (4:3)
    • Panasonic DMC-LX3 (1:1)
    • Panasonic DMC-LX10 (3:2)
    • Panasonic DMC-LX15 (3:2)
    • Panasonic DMC-LX9 (3:2)
    • Pentax K-1
    • Pentax K-70
    • Samsung GX20 (dng)
    • Sony DSC-F828
    • Sony DSC-RX10M3
    • Sony DSLR-A380
    • Sony ILCA-68
    • Sony ILCE-6300

    We were unable to bring back these 3 cameras, because we have no samples.
    If anyone reading this owns such a camera, please do consider providing samples.

    • Nikon E8400
    • Nikon E8800
    • Nikon D3X (12-bit)

    White Balance Presets:

    • Canon EOS 1200D
    • Canon EOS Kiss X70
    • Canon EOS Rebel T5
    • Canon EOS 1300D
    • Canon EOS Kiss X80
    • Canon EOS Rebel T6
    • Canon EOS 5D Mark IV
    • Canon EOS 5DS
    • Canon EOS 5DS R
    • Canon EOS 750D
    • Canon EOS Kiss X8i
    • Canon EOS Rebel T6i
    • Canon EOS 760D
    • Canon EOS 8000D
    • Canon EOS Rebel T6s
    • Canon EOS 80D
    • Canon EOS M10
    • Canon EOS-1D X Mark II
    • Canon PowerShot G7 X Mark II
    • Fujifilm X-Pro2
    • Fujifilm X-T2
    • Fujifilm X-T10
    • Fujifilm X100T
    • Fujifilm X20
    • Fujifilm X70
    • Nikon 1 V3
    • Nikon D5500
    • Olympus PEN-F
    • Pentax K-1
    • Pentax K-70
    • Pentax K-S1
    • Pentax K-S2
    • Sony ILCA-68
    • Sony ILCE-6300

    Noise Profiles:

    • Canon EOS 5DS R
    • Canon EOS 80D
    • Canon PowerShot G15
    • Canon PowerShot S100
    • Canon PowerShot SX50 HS
    • Fujifilm X-T10
    • Fujifilm X-T2
    • Fujifilm X100T
    • Fujifilm X20
    • Fujifilm X70
    • Nikon 1 V3
    • Nikon D5500
    • Olympus E-PL6
    • Olympus PEN-F
    • Panasonic DMC-FZ1000
    • Panasonic DMC-GF7
    • Pentax K-S2
    • Ricoh GR
    • Sony DSLR-A900
    • Sony DSC-RX10
    • Sony SLT-A37

    New Translations:

    • Hebrew
    • Slovenian

    Updated Translations:

    • Catalan
    • Czech
    • Danish
    • Dutch
    • French
    • German
    • Hungarian
    • Russian
    • Slovak
    • Spanish
    • Swedish

    November 22, 2016

    Giving Thanks



    For an awesome community!

    Here in the U.S., we have a big holiday coming up this week: Thanksgiving. Serendipitously, this holiday also happens to fall when a few neat things are happening around the community, and what better time is there to recognize some folks and to give thanks of our own? No time like the present!

    A Special Thanks

    I feel a special “Thank You” should first go to a photographer and fantastic supporter of the community, Dimitrios Psychogios. Last year, for our trip to Libre Graphics Meeting in London, he stepped up with an awesome donation to help us bring some fun folks together.

    LGM2016 Dinner Fun folks together.
    Mairi, the darktable nerds, a RawTherapee nerd, and a PhotoFlow nerd.
    (and the nerd taking the photo, patdavid)

    This year he was incredibly kind by offering a donation to the community (completely unsolicited) that covers our hosting and infrastructure costs for an entire year! So on behalf of the community, Thank You for your support, Dimitrios!

    I’ll be creating a page soon that will list our supporters as a means of showing our gratitude. Speaking of supporters and a new page on the site…

    A Support Page

    Someone asked in a post about the possibility of donating to the community. We were talking about adding support in darktable for a MIDI controller deck, and the costs for some of the options weren’t too extravagant. This got us thinking that enough small donations could probably cover something like this pretty easily, and if it was community hardware we could make sure it got passed around to each of the projects interested in creating support for it.

    KORG NanoControl2 An example midi-controller that we might get support
    for in darktable and other projects.

    That conversation had me thinking about ways to allow folks to support the community. In particular, ways to make it easy to provide support on an on-going basis if possible (in addition to simple, single donations). There are goal-oriented options out there that folks are probably already familiar with (Kickstarter, Indiegogo and others) but the model for us is less goal-oriented and more about continuous support.

    Patreon was an option as well (and I already had a skeleton Patreon account set up), but the fees were just too much in the end. They wanted a flat 5% along with the regular PayPal fees. The general consensus among the staff was that we wanted to maximize the funds getting to the community.

    The best option in the end was to create a merchant account on PayPal and manually set up the various payment options. I’ve set them up similarly to how a service like Patreon might run, with four different recurring funding levels and an option for a single one-time payment of whatever a user would like. Recurring levels are nice because they make planning easier.

    We’re Not Asking

    Our requirements for the infrastructure of the site are modest and we haven’t actively pursued support or donations for the site before. That hasn’t changed.

    We’re not asking for support now. The best way that someone can help the community is by being an active part of it.

    Engaging others, sharing what you’ve done or learned, and helping other users out wherever you can. This is the best way to support the community.

    I purposely didn’t talk about funding before because I don’t want folks to have to worry or think about it. And before you ask: no, we are not and will not run any advertising on the site. I’d honestly rather just keep paying for things out of my pocket instead.

    We’re not asking for support, but we’ll accept it.

    With that being said, I understand that there’s still some folks that would like to contribute to the infrastructure or help us to get hardware to add support in projects and more. So if you do want to contribute, the page for doing so can be found here:

    https://pixls.us/support

    There are four recurring funding levels of $1, $3, $5, and $10 per month. There is also a one-time contribution option as well.

    We also have an Amazon Affiliate link option. If you’re not familiar with it, you simply click the link to go to Amazon.com. Then anything you buy for the next 24 hours will give us some small percentage of your purchase price. It doesn’t affect the price of what you’re buying at all. So if you were going to purchase something from Amazon anyway, and don’t mind - then by all means use our link first to help out!


    1000 Users

    This week we also finally hit 1,000 users registered on discuss! Which is just bananas to me. I am super thankful for each and every member of the community that has taken the time to participate, share, and generally make one of the better parts of my day catching up on what’s been going on. You all rock!

    While we’re talking about a number “1” with a bunch of zeros after it, we recently made some neat improvements to the forums…

    100 Megabytes

    We are a photography community, and it seemed stupid to have to restrict users from uploading full quality images or raw files. Previously it was a concern because the server the forums are hosted on has limited disk space (40GB). Luckily, Discourse has an option for storing all uploads to the forum in Amazon S3 buckets.

    I went ahead and created some S3 buckets so that any uploads to the forums will now be hosted on Amazon instead of taking up precious space on the server. The costs are quite reasonable (around $0.30/GB right now), and it also means that I’ve been able to bump the upload size to 100MB for forum posts! You can now just drag and drop full resolution raw files directly into the post editor to include the file!

    Drag and Drop files in discuss 70MB GIMP .xcf file? Just drag-and-drop to upload, no problem! :)

    Travis CI Automation

    On a slightly geekier note, did you know that the code for the entire website is available on Github? It’s also licensed liberally (CC-BY-SA), so there’s no reason not to come and fiddle with things with us! One of the features of using Github is integration with Travis CI (Continuous Integration).

    What this basically means is that every commit to the Github repo for the website gets picked up by Travis and built to test that everything is working ok. You can actually see the history of the website builds there.

    I’ve now got it set up so that when a build is successful on Travis, it will automatically publish the results to the main webserver and make it live. Our build system, Metalsmith, is a static site generator. This means that we build the entire website on our local computers when we make changes, and then publish all of those changes to the webserver. This change automates that process for us now by handling the building and publishing if everything is ok.

    In fact, if everything is working the way I think it should, this very blog post will be the first one published using the new automated system! Hooray!

    You can poke me or @paperdigits on discuss if you want more details or feel like playing with the website.

    Mica

    Speaking of @paperdigits, I want to close this blog post with a great big “Thank You!” to him as well. He’s the only other person insane enough to try to make sense of all the stuff I’ve done building the site so far, and he’s been extremely helpful hacking at the website code, writing articles, making good infrastructure suggestions, taking the initiative on things (t-shirts and github repos), and generally being awesome all around.

    November 21, 2016

    Last batch of ColorHugALS

    I’ve got 9 more ColorHugALS devices in stock, and once they are sold there will be no more for sale. With all the supplier costs recently going up, my “sell at cost price” has turned into “make a small loss on each one”, which isn’t sustainable. It’s all OpenHardware, both the hardware design and the firmware itself, so if someone wanted to start building them for sale they would be doing it with my blessing. Of course, I’m happy to continue supporting the existing sold devices into the distant future.

    colorhug-als1-large

    The original goal has in part been met: the kernel and userspace support for the new SensorHID protocol works great, and ambient light functionality works out of the box for more people on more hardware. I’m slightly disappointed more people didn’t get involved in making the ambient lighting algorithms smarter, but I guess it’s quite a niche area of development.

    Plus, in the Apple product development sense, killing off one device lets me start selling something else OpenHardware in the future. :)

    November 18, 2016

    Solar diagrams in FreeCAD

    New feature in FreeCAD: Arch Sites can now display a solar diagram. More info at http://forum.freecadweb.org/viewtopic.php?f=23&p=145036#p145036

    Miyazaki Tribute

    I am dono, a CG freelancer from Paris, France. I use Blender as my main tool for both personal and professional work.

    My workflow was a bit hectic during the creation of my short film tribute to Hayao Miyazaki. There are a ton of ways to produce such a film, and everyone has their own workflow, so the best I can do is simply share how I personally did it.

    I have always loved the work of Hayao Miyazaki. I already had a lot of references from Blu-rays, art books, mangas and such, so I didn’t spend a lot of time searching for references, but I can say that it’s quite an important task at the beginning of a project. Having good references can save a lot of time.

    I simply started the project as a modeling and texturing exercise, just to practice. After modeling the bathhouse of “Spirited Away”, I thought it could be cool to do something more elaborate.

    miazaki_tribute_01

    So I first did a layout with very low poly meshes to have a realtime preview of the camera movements. I also extracted frames from the movies, using Blu-ray footage, to make two different quality versions: one with low-res JPGs for realtime preview in the 3D viewport, and one with raw PNGs for the final renders.

    miazaki_tribute_02

    I used the realtime previews to edit it all together in Blender’s sequencer. I wanted to find a good tempo and feeling for the music, and with realtime playback in Blender’s viewport it was easy and smooth to build up. I edited directly from the 3D viewport, by linking the scene into the sequencer, so I didn’t need to render anything!

    miazaki_tribute_03

    Next, I did the rotoscoping in Blender frame by frame. Having used realtime previews for the editing, I already knew exactly how many frames I had to rotoscope. That way I didn’t waste any time rotoscoping unnecessary footage, which was crucial because rotoscoping is very, very time consuming. The very important thing when you do rotoscoping is to separate the parts. You do not want to have everything in one part. Having separate layers makes it more flexible and faster.

    miazaki_tribute_04

    Then, I modeled and unwrapped the assets in Blender, and textured them in Blender and Gimp. I used one blend file for each asset to limit file size, and used linking to bring everything together in one scene. I also created a blend file that contained a lot of materials (different kinds of metal, wood), so I could link them and reuse them at will. It was worth it, since a modular workflow really saves time.

    miazaki_tribute_05

    For the smoke, I used Blender's smoke simulation, rendered directly in OpenGL with Blender Internal. You can see and correct any mistakes very easily. I also did some dust and fog passes with it.

    miazaki_tribute_06

    The ocean was done using the Ocean modifier in Blender. I baked an image sequence to EXR, and used these images for the wave displacement and foam.

    miazaki_tribute_07

    For rendering I used Octane, since I wanted to try a new renderer for this project, but it could have been done with Cycles without any trouble. I rendered the layers separately: characters, sets, backgrounds and FX. Rendering things separately was very useful: rendering is faster, you can have bigger scenes with more polygons, and above all you can re-render one part if necessary (and it was very often necessary) without rendering the whole image all over again. Renders were saved as 16-bit PNG for the color layers, and 32-bit EXR for the Z pass. I also rendered some masks and ID masks. This allowed me to correct details very quickly during compositing without having to render the whole image again. The rendering time for one frame was between 4 and 15 minutes.

    miazaki_tribute_08

    I finished the compositing with Natron, adding glow, vignetting and motion blur. The Z pass was used to add some fog, and the ID masks to correct some objects’ colors. When you have a lot of layer passes from Blender, it is very easy to composite and tweak things very quickly. I remember when I used to do everything in one single pass: I did renders over and over to fix errors, and it was very time consuming. Sozap, a friend of mine and a very talented artist, taught me to use separate layers. It was a really great tip, and thanks to him I could work more efficiently.

    miazaki_tribute_09

    During the production, I showed works in progress to my friends, because they could provide a new and fresh look at my work. Sometimes it is hard to hear criticism, but it is important to listen, as critiques can help you improve your work a lot. Without them, my short most certainly wouldn’t look the way it does now. Thanks again to Blackschmoll, Boby, Christophe, Clouclou, Cremuss, David, Félicia, Frenchman, Sozap, Stéphane, Virgil! And thanks to Ton Roosendaal, the Blender community, and the developers of Blender, Gimp and Natron!

    miazaki_tribute_10

    Check out the making-of video!

    November 16, 2016

    Wed 2016/Nov/16

    • Debugging Rust code inside a C library

      An application that uses librsvg is in fact using a C library that has some Rust code inside it. We can debug C code with gdb as usual, but what about the Rust code?

      Fortunately, Rust generates object code! From the application's viewpoint, there is no difference between the C parts and the Rust parts: they are just part of the librsvg.so to which it linked, and that's it.

      Let's try this. I'll use the rsvg-view-3 program that ships inside librsvg — this is a very simple program that opens a window and displays an SVG image in it. If you build the rustification branch of librsvg (clone instructions at the bottom of that page), you can then run this in the toplevel directory of the librsvg source tree:

      tlacoyo:~/src/librsvg-latest (rustification)$ libtool --mode=execute gdb ./rsvg-view-3

      Since rsvg-view-3 is an executable built with libtool, we can't plainly run gdb on it. We need to invoke libtool with the incantation for "do your magic for shared library paths and run gdb on this binary".

      Gdb starts up, but no shared libraries are loaded yet. I like to set up a breakpoint in main() and run the program with its command-line arguments, so its shared libs will load, and then I can start setting breakpoints:

      (gdb) break main
      Breakpoint 1 at 0x40476c: file rsvg-view.c, line 583.
      
      (gdb) run tests/fixtures/reftests/bugs/340047.svg
      Starting program: /home/federico/src/librsvg-latest/.libs/rsvg-view-3 tests/fixtures/reftests/bugs/340047.svg
      
      ...
      
      Breakpoint 1, main (argc=2, argv=0x7fffffffdd48) at rsvg-view.c:583
      583         int retval = 1;
      (gdb)

      Okay! Now the rsvg-view-3 binary is fully loaded, with all its initial shared libraries. We can set breakpoints.

      But what does Rust call the functions we defined? The functions we exported to C code with the #[no_mangle] attribute of course get the name we expect, but what about internal, Rust-only functions? Let's ask gdb!

      Finding mangled names

      I have a length.rs file which defines an RsvgLength structure with a "parse" constructor: it takes a string which is a CSS length specifier, and returns an RsvgLength structure. I'd like to debug that RsvgLength::parse(), but what is it called in the object code?

      The gdb command to list all the functions it knows about is "info functions". You can pass a regexp to it to narrow down your search. I want a regexp that will match something-something-length-something-parse, so I'll use "ength.*parse". I skip the L in "Length" because I don't know how Rust mangles CamelCase struct names.

      (gdb) info functions ength.*parse
      All functions matching regular expression "ength.*parse":
      
      File src/length.rs:
      struct RsvgLength rsvg_internals::length::rsvg_length_parse(i8 *, enum class LengthDir);
      static struct RsvgLength rsvg_internals::length::{{impl}}::parse(struct &str, enum class LengthDir);

      All right! The first one, rsvg_length_parse(), is a function I exported from Rust so that C code can call it. The second one is the mangled name for the RsvgLength::parse() that I am looking for.

      Printing values

      Let's cut and paste the mangled name, set a breakpoint in it, and continue the execution:

      (gdb) break rsvg_internals::length::{{impl}}::parse
      Breakpoint 2 at 0x7ffff7ac6297: file src/length.rs, line 89.
      
      (gdb) cont
      Continuing.
      [New Thread 0x7fffe992c700 (LWP 26360)]
      [New Thread 0x7fffe912b700 (LWP 26361)]
      
      Thread 1 "rsvg-view-3" hit Breakpoint 2, rsvg_internals::length::{{impl}}::parse (string=..., dir=Both) at src/length.rs:89
      89              let (mut value, rest) = strtod (string);
      (gdb)

      Can we print values? Sure we can. I'm interested in the case where the incoming string argument contains "100%" — this will be parse()d into an RsvgLength value with length.length=1.0 and length.unit=Percent. Let's print the string argument:

      89              let (mut value, rest) = strtod (string);
      (gdb) print string
      $2 = {data_ptr = 0x8bd8e0 "12.0\377\177", length = 4}

      Rust strings are different from null-terminated C strings; they have a pointer to the char data, and a length value. Here, gdb is showing us a string that contains the four characters "12.0". I'll make this a conditional breakpoint so I can continue the execution until string comes in with a value of "100%", but I'll cheat: I'll use the C function strncmp() to test those four characters in string.data_ptr; I can't use strcmp() as the data_ptr is not null-terminated.

      (gdb) cond 2 strncmp (string.data_ptr, "100%", 4) == 0
      (gdb) cont
      Continuing.
      
      Thread 1 "rsvg-view-3" hit Breakpoint 2, rsvg_internals::length::{{impl}}::parse (string=..., dir=Vertical) at src/length.rs:89
      89              let (mut value, rest) = strtod (string);
      (gdb) p string
      $8 = {data_ptr = 0x8bd8e0 "100%", length = 4}

      All right! We got to the case we wanted. Let's execute the next line, the one with "let (mut value, rest) = strtod (string);" in it, and print out the results:

      (gdb) next
      91              match rest.as_ref () {
      (gdb) print value
      $9 = 100
      (gdb) print rest
      $10 = {data_ptr = 0x8bd8e3 "%", length = 1}

      What type did "value" get assigned?

      (gdb) ptype value
      type = f64 

      A floating point value, as expected.

      You can see that the value of rest indicates that it is a string with "%" in it. The rest of the parse() function will decide that in fact it is a CSS length specified as a percentage, and will translate our value of 100 into a normalized value of 1.0 and a length.unit of LengthUnit.Percent.
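      Conceptually, that percent branch boils down to something like this (a minimal sketch of the idea, not librsvg's actual code; the names are made up):

      fn normalize (value: f64, rest: &str) -> (f64, &'static str) {
          // "100" followed by a rest of "%" becomes (1.0, "Percent")
          if rest == "%" {
              (value / 100.0, "Percent")
          } else {
              (value, "Default")
          }
      }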

      Summary

      Rust generates object code with debugging information, which gets linked into your C code as usual. You can therefore use gdb on it.

      Rust creates mangled names for methods. Inside gdb, you can find the mangled names with "info functions"; pass it a regexp that is close enough to the method name you are looking for, unless you want tons of function names from the whole binary and all its libraries.

      You can print Rust values in gdb. Strings are special because they are not null-terminated C strings.

      You can set breakpoints, conditional breakpoints, and do pretty much all the gdb magic that you expect.

      I didn't have to do anything for gdb to work with Rust. The version that comes in openSUSE Tumbleweed works fine. Maybe it's because Rust generates standard object code with debugging information, which gdb readily accepts. In any case, it works out of the box and that's just as it should be.

    Responsive HTML with CSS and JavaScript

    In this article you can learn how to make a minimalist web page readable on different kinds of displays, from large desktop screens to handhelds. The ingredients are HTML, CSS and a little JavaScript. The goals for my home page are:

    • most of the layout resides in CSS in a stateless way
    • minimal JavaScript
    • on small displays – single column layout
    • on wide format displays – division of text in columns
    • count of columns adapts to browser window width or screen size
    • combine with markdown

    CSS:

    h1,h2,h3 {
      font-weight: bold;
      font-style: normal;
    }
    
    @media (min-width: 1000px) {
      .tiles {
        display: flex;
        justify-content: space-between;
        flex-wrap: wrap;
        align-items: flex-start;
        width: 100%;
      }
      .tile {
        flex: 0 1 49%;
      }
      .tile2 {
        flex: 1 280px
      }
      h1,h2,h3 {
        font-weight: normal;
      }
    }
    @media (min-width: 1200px) {
      @supports ( display: flex ) {
        .tile {
          flex: 0 1 24%;
        }
      }
    }

    The content in class=”tile” is shown as anywhere from one column up to 4 columns. tile2 has a fixed width and picks its column count by itself. All flex boxes behave like one normal column. With @media (min-width: 1000px) a bigger screen is assumed. Very likely there is an overlapping width range for bigger handhelds, tablets and smaller laptops. But the layout works reasonably, performs well when shrinking the web browser on a desktop or viewing fullscreen, and is well readable. Expressing all the tile stuff in flex: syntax helps keep compatibility with layout engines that do not support flex, like e.g. the one in dillo.

    For reading on high-DPI monitors and on small devices it is essential to set the font size properly. Update: Google and Mozilla recommend a meta “viewport” tag to signal to browsers that the page is prepared to handle scaling properly. No JavaScript is needed for that.

    <meta name="viewport" content="width=device-width, initial-scale=1.0">

    [Outdated: I found no way to do that in CSS so far. JavaScript:]

    function make_responsive () {
      if( typeof screen != "undefined" ) {
        var fontSize = "1rem";
        if( screen.width < 400 ) {
          fontSize = "2rem";
        }
        else if( screen.width < 720 ) {
          fontSize = "1.5rem";
        }
        else if( screen.width < 1320 ) {
          fontSize = "1rem";
        }
        if( typeof document.children === "object" ) {
          var obj = document.children[0]; // html node
          obj.style["font-size"] = fontSize;
        } else if( typeof document.body != "undefined" ) {
          document.body.style.fontSize = fontSize;
        }
      }
    }
    document.addEventListener( "DOMContentLoaded", make_responsive, false );
    window.addEventListener( "orientationchange", make_responsive, false );

    [The above JavaScript carefully checks for various browser attributes and scales the font size to compensate for small screens and make the text readable.]

    The above method works in all tested browsers (Firefox, Chrome, Konqueror, IE) besides dillo, and on all platforms (Linux/KDE, Android, WP8.1). The meta tag method also works better for printing.

    Below is some HTML, as it would be embedded in markdown, to illustrate the approach.

    HTML:

    <div class="tiles">
    <div class="tile"> My first text goes here. </div>
    <div class="tile"> Second text goes here. </div>
    <div class="tile"> Third text goes here. </div>
    <div class="tile"> Fourth text goes here. </div>
    </div>

    In my previous articles you can read about using CSS3 for Translation and Web Open Font Format (WOFF) for Web Documents.

    November 15, 2016

    Fedora Hubs and Meetbot: A Recursive Tale

    Fedora Hubs

    Hubs and Chat Integration Basics

    One of the planned features of Fedora Hubs that I am most excited about is chat integration with Fedora development chat rooms. As a mentor and onboarder of designers and other creatives into the Fedora project, I’ve witnessed IRC causing a lot of unnecessary pain and delay in the onboarding experience. The idea we have for Hubs is to integrate Fedora’s IRC channels into the Hubs web UI, requiring no IRC client installation and configuration on the part of users in order to be able to participate. The model is meant to be something like this:

    Diagram showing individual hubs mapping to individual IRC channels / privmsgs.

    By default, any given hub won’t have an IRC chat window. And whether or not a chat window appears on the hub is configurable by the hub admin (they can choose to not display the chat widget.) However, the hub admin may map their hub to a specific channel – whatever is appropriate for their team / project / self – and the chat widget on their hub will give visitors the possibility to interact with that team via chat, right in the web interface. Early mockups depict this feature looking something like this, for inclusion on a team or project hub (a PM window for user hubs):

    mockup showing an irc widget for #fedora-design on the design team hub

    Note this follows our general principle of enabling new contributors while not uprooting our existing ones. We followed this with HyperKitty – if you prefer to interact with mailing lists on the web, you can, but if you’ve got your own email-based workflow and client that you don’t want to change at all, HyperKitty doesn’t affect you. Same principle here: if you’ve got an IRC client you like, no change for you. This is just an additional interface by which new folks can interact with you in the same places you already are.

    Implementation is planned to be based on waartaa, for which the lead Hubs developer Sayan Chowdhury is also an upstream developer.

    Long-term, we (along with waartaa upstream) have been thinking about matrix as a better chat protocol that waartaa could support or be ported to in the future. (I personally have migrated from HexChat to Riot.im – popular matrix web + smartphone client – as my only client to connect to Freenode. The experiment has gone quite well. I access my usual freenode channels using Riot.im’s IRC bridges.) So when we think about implementing chat, we also keep in mind the protocol underneath may change at some point.

    That’s a high-level explanation of how we’re thinking about integrating chat into Hubs.

    Next Level: HALP!!1

    As of late, Aurélien Bompard has been investigating the “Help/Halp” feature of Fedora Hubs. (https://pagure.io/fedora-hubs/issue/98)

    The general idea is to have a widget that aggregates all help requests (created using the meetbot #help command while meeting minutes are being recorded) across all teams / meetings, so there is a single place to sort through them. Folks (particularly new contributors) looking for things they can help out with can refer to it as a nice, timely bucket of tasks that are needed, with clear suggestions for how to get started. (Timely, because new contributors want to help with tasks that are needed now, and not waste their time on requests that are stale and no longer needed or already fixed. On the other side, the widget helps bring some attention to the requests people in need of help are making, hopefully increasing the chances they’ll get the help they are looking for.)

    The mechanism for generating the list of help requests is to gather #help requests from meeting minutes and display them from most recent to least recent, so the chances you’ll find a task that is actually needed now are high. As requests age, they scroll further and further back into the backlog until they are no longer displayed (the idea being that if enough time has passed, the help is likely no longer needed or has already been provided). The contact point for would-be helpers is easy too: the person who ran the #help command in the meeting is listed as a contact for you to sync up with to get started. A rough sketch of that filtering and ordering follows.
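    Purely to illustrate that recency logic (a sketch only; Hubs itself is not written in Rust, and none of these names come from the actual code):

    struct HelpRequest {
        team: String,
        request: String,
        age_days: u32,
    }

    // Drop stale requests, then order the rest newest-first.
    fn visible_requests (mut requests: Vec<HelpRequest>, max_age_days: u32) -> Vec<HelpRequest> {
        requests.retain (|r| r.age_days <= max_age_days);
        requests.sort_by_key (|r| r.age_days);
        requests
    }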

    The mockups are available in the ticket, but are shown below as well for purposes of illustration:

    [Main help widget, showing active help requests across various Fedora teams]

    [Mockup showing UI panel where someone can volunteer to help someone with a request]

    An issue that came up has to do with the mapping we talked about earlier. Many Fedora team meetings occur in #fedora-meeting-*; e.g., #fedora-meeting, #fedora-meeting-1, etc. Occasionally, Fedora meetings occur in a team channel (e.g., #fedora-design) whose name may not match the team’s ‘namespace’ in other applications (e.g., our mailing list is design-team; our pagure.io repo is ‘/design’). Based on how Fedora teams use IRC and how meetbot works, we cannot rely on the channel name to get the correct namespace / hub name for a team making a request during a meeting using the meetbot #help command.

    Meetbot does also have a mechanism to set a topic for a meeting, and many teams use this to identify the team meeting – in fact, it’s required to start a meeting now – but depending on who is running the meeting, this freeform field can vary. (For instance – the design team has meetings marked fedora_design, fedora-design, designteam, design-team, design, etc. etc.) So the topic field in the fedmsg meetbot puts out may also not be reliable for pointing to a hub / team.

    One idea we talked about in our meeting a couple of weeks ago, as well as last week’s meeting, was having some kind of lookup table to map a team to all of its various namespaces in different applications. The problem is that meetbot issues the fedmsgs used to generate the halp widget’s list of requests as soon as the #help command is issued, so it is meetbot itself that would need to look up the mapping in order to have the correct team name in its fedmsg. We couldn’t write some kind of script to reconcile things after the meeting concluded. Meetbot itself needs to be changed for this to work – for the #help requests put out on fedmsg by meetbot to have the correct team names associated with them.

    Which Upstream is Less Decomposed?

    [Do you see dead upstreams? Zombie image]

    Zombie artwork credit: Zombies Silhouette by GDJ on OpenClipArt.

    We determined we needed to make a change to meetbot. meetbot is a plugin for an IRC bot called supybot. Fedora infrastructure doesn’t actually use supybot to run meetbot, though. (There haven’t been any commits to supybot for about 2 years.) Instead, we use a fork called limnoria that is Python 3-based and has various enhancements applied to it.

    How about meetbot? Well, meetbot hasn’t been touched by its upstream since 2009 (7 years ago). I believe Fedora carries some local patches to it. In talking with Kevin Fenzi, we discovered there is a newer fork of meetbot maintained by the upstream OpenStack team. Even that hadn’t seen activity in 3 years, according to Github.

    Aurélien contacted the upstream OpenStack folks and discovered that, pending a modification to implement file-based configs to enable deployment using tools like Ansible, they were looking to port their supybot plugins (including meetbot) to errbot and migrate to that. So we had a choice – we could implement what we needed on top of their newer meetbot as is and they would be willing to work with us, or we could join their team in migrating to errbot, participate in the meetbot porting process, and use errbot going forward. Errbot appears to have a very active upstream with many plugins available already.

    How Far Down the Spiral Do We Go?

    To unravel ourselves a bit from the spiral of recursion here… remember, we’re trying to implement a simple Help widget for Fedora Hubs. As we’ve discovered, the technologies we need to interact with to make that feature happen are a bit zombiefied. What to do?

    We agreed that the overall mission of Fedora Hubs as a project is to make collaboration in Fedora more efficient and easy for everyone. In this situation specifically, we decided that migrating to errbot and upgrading a ported meetbot to allow for mapping team namespaces to meeting minutes would be the right way to go. It’s definitely not the easy way, but we think it’s the right way.

    It’s our hope in general that as we work our way through implementing Hubs as a unified interface for collaboration in Fedora, we expose deficiencies present in the underlying apps and are able to identify and correct them as we go. This hopefully will result in a better experience for everyone using those apps, whether or not they are Hubs users.

    Want to Help?

    [We need your help!]

    Does this sound interesting? Want to help us make it happen? Here’s what you can do:

    • Come say hi on the hubs-devel mailing list, introduce yourself, read up on our past meeting minutes.
    • Join us during our weekly meetings on Tuesdays at 15:00 UTC in #fedora-hubs on irc.freenode.net.
    • Reach out to Aurélien and coordinate with him if you’d like to help with the meetbot porting effort to errbot. You may want to check out those codebases as well.
    • Reach out to Sayan if you’d like to help with the implementation of waartaa to provide IRC support in Fedora Hubs!
    • Hit me up if you’ve got ideas or would like to help out with any of the UX involved!

    Ideas, feedback, questions, etc. provided in a respectful manner are welcome in the comments.

    CSS3 for Translation

    Years ago I used a CMS to bring content to a web page. But with evolving CSS, markdown syntax and comfortable git hosting, publication of smaller sites can be handled without a CMS. My home page is translated, so I wanted to express page translations in a stateless language. The ingredients are simple. My requirements are:

    • stateless CSS, no javascript
    • integrable with markdown syntax (html tags are ok’ish)
    • default language shall remain visible, when no translation was found
    • hopefully searchable by robots (Those need to understand CSS.)

    CSS:

    /* hide translations initially */
    .hide {
      display: none
    }
    /* show a browser detected translation */
    :lang(de) { display: block; }
    li:lang(de) { display: list-item; }
    a:lang(de) { display: inline; }
    em:lang(de) { display: inline; }
    span:lang(de) { display: inline; }
    
    /* hide default language, if a translation was found */
    :lang(de) ~ [lang=en] {
     display: none;
    }

    The CSS uses the display property of the element that is matched by the :lang() selector. However the selectors for the different display: types are somewhat long – not as short as I would like.

    Markdown:

    <span lang="de" class="hide"> Hallo _Welt_. </span>
    <span lang="en"> Hello _World_. </span>

    Even so, the plain markdown text does not look as straightforward as before. But it is acceptable IMO.

    Hiding the default language uses the sibling combinator E ~ F and selects an element carrying the lang=”en” attribute. Matching elements are hidden (display: none;). Here that is the default language string “Hello _World_.” with the lang=”en” attribute. This approach works fine in Firefox(49), Chrome Browser(54), Konqueror(4.18 khtml&WebKit) and WP8.1 with Internet Explorer. Dillo(3.0.5) does not show the translation, only the English text, which is correct as a fallback for an engine without :lang() support.

    On my search I found approaches for content swapping with CSS: :lang()::before { content: xxx; }. But those were not very accessible. Comments and ideas welcome.

    Lyon GNOME Bug day #1

    Last Friday, both a GNOME bug day and a bank holiday, a few of us got together to squash some bugs, and discuss GNOME and GNOME technologies.

    Guillaume, a newcomer in our group, tested the captive portal support for NetworkManager and GNOME in Gentoo, and added instructions on how to enable it to their Wiki. He also tested a gateway-related configuration problem, the patch for which I merged after a code review. Near the end of the session, he also rebuilt WebKitGTK+ to test why Google Docs was not working for him anymore in Web. And nobody believed that he could build it that quickly. Looks like opinions based on past experiences are quite hard to change.

    Mathieu worked on removing jhbuild's .desktop file as nobody seems to use it, and it was creating the Sundry category for him, in gnome-shell. He also spent time looking into the tracker blocker that is Mozilla's Focus, based on disconnectme's block lists. It's not as effective as uBlock when it comes to blocking adverts, but the memory and performance improvements, and the slow churn rate, could make it a good default blocker to have in Web.

    Haïkel looked into using Emeus, potentially the new GTK+ 4.0 layout manager, to implement the series properties page for Videos.

    Finally, I added Bolso to jhbuild, and struggled to get gnome-online-accounts/gnome-keyring to behave correctly in my installation, as the application just did not want to log in properly to the service. I also discussed Fedora's privacy policy (inappropriate for Fedora Workstation, as it doesn't cover the services used in the default installation), a potential design for Flatpak support of joypads and removable devices in general, as well as the future design of the Network panel.

    November 14, 2016

    João Almeida's darktable Presets


    A gorgeous set of film emulation for darktable

    I realize that I’m a little late to this, but photographer João Almeida has created a wonderful set of film emulation presets for darktable that he uses in his own workflow for personal and commissioned work. Even more wonderful is that he has graciously released them for everyone to use.

    These film emulations started as a personal side project for João, and he adds a disclaimer to them that he did not optimize them all for each brand or model of his cameras. His end goal was for these to be as simple as possible by using a few darktable modules. He describes it best on his blog post about them:

    The end goal of these presets is to be as simple as possible by using few Darktable modules: it works solely by manipulating Lab Tone Curves for color manipulation, while black & white films rely heavily on Channel Mixer. Since what I was aiming for was the color profiles of each film, other traits related to processing, lenses and so on are unlikely to be implemented; this includes: grain, vignetting, light leaks, cross-processing, etc.

    Some before/after samples from his blog post:

    [João Portra 400 sample]
    [João Kodachrome 64 sample]
    [João Velvia 50 sample]

    You can read more on João’s website and you can see many more images on Flickr with the #t3mujinpack tag. The full list of film emulations included with his pack:

    • AGFA APX 25, 100
    • Fuji Astia 100F
    • Fuji Neopan 1600, Acros 100
    • Fuji Pro 160C, 400H, 800Z
    • Fuji Provia 100F, 400F, 400X
    • Fuji Sensia 100
    • Fuji Superia 100, 200, 400, 800, 1600, HG 1600
    • Fuji Velvia 50, 100
    • Ilford Delta 100, 400, 3200
    • Ilford FP4 125
    • Ilford HP5 Plus 400
    • Ilford XP2
    • Kodak Ektachrome 100 GX, VS
    • Kodak Ektar 100
    • Kodak Elite Chrome 400
    • Kodak Kodachrome 25, 64, 200
    • Kodak Portra 160 NC, VC
    • Kodak Portra 400 NC, UC, VC
    • Kodak Portra 800
    • Kodak T-Max 3200
    • Kodak Tri-X 400

    If you see João around the forums stop and say hi (and maybe a thank you). Even better, if you find these useful, consider buying him a beer (donation link is on his blog post)!

    Mon 2016/Nov/14

    • Exposing Rust objects to C code

      When librsvg parses an SVG file, it will encounter elements that generate path-like objects: lines, rectangles, polylines, circles, and actual path definitions. Internally, librsvg translates all of these into path definitions. For example, librsvg will read an element from the SVG that defines a rectangle like

      <rect x="20" y="30" width="40" height="50" style="..."></rect> 

      and translate it into a path definition with the following commands:

      move_to (20, 30)
      line_to (60, 30)
      line_to (60, 80)
      line_to (20, 80)
      line_to (20, 30)
      close_path ()

      But where do those commands live? How are they fed into Cairo to actually draw a rectangle?

      Get your Cairo right here

      One of librsvg's public API entry points is rsvg_handle_render_cairo():

      gboolean rsvg_handle_render_cairo (RsvgHandle * handle, cairo_t * cr);

      Your program creates an appropriate Cairo surface (a window, an off-screen image, a PDF surface, whatever), obtains a cairo_t drawing context for the surface, and passes the cairo_t to librsvg using that rsvg_handle_render_cairo() function. It means, "take this parsed SVG (the handle), and render it to this cairo_t drawing context".

      SVG files may look like an XML-ization of a tree of graphical objects: here is a group which contains a blue rectangle and a green circle, and here is a closed Bézier curve with a black outline and a red fill. However, SVG is more complicated than that; it allows you to define objects once and recall them later many times, it allows you to use CSS cascading rules for applying styles to objects ("all the objects in this group are green unless they define another color on their own"), to reference other SVG files, etc. The magic of librsvg is that it resolves all of that into drawing commands for Cairo.

      Feeding a path into Cairo

      This is easy enough: Cairo provides an API for its drawing context with functions like

      void cairo_move_to (cairo_t *cr, double x, double y);
      
      void cairo_line_to (cairo_t *cr, double x, double y);
      
      void cairo_close_path (cairo_t *cr);
      
      /* Other commands omitted */

      Librsvg doesn't feed paths to Cairo as soon as it parses them from the XML; that is deferred until rendering time. In the meantime, librsvg has to keep an intermediate representation of the path data.

      Librsvg uses an RsvgPathBuilder object to hold on to this path data for as long as needed. The API is simple enough:

      pub struct RsvgPathBuilder {
         ...
      }
      
      impl RsvgPathBuilder {
          pub fn new () -> RsvgPathBuilder { ... }
      
          pub fn move_to (&mut self, x: f64, y: f64) { ... }
      
          pub fn line_to (&mut self, x: f64, y: f64) { ... }
      
          pub fn curve_to (&mut self, x2: f64, y2: f64, x3: f64, y3: f64, x4: f64, y4: f64) { ... }
      
          pub fn close_path (&mut self) { ... }
      }

      This mimics the sub-API of cairo_t to build paths, except that instead of feeding them immediately into the Cairo drawing context, RsvgPathBuilder builds an array of path commands that it will later replay to a given cairo_t. Let's look at the methods of RsvgPathBuilder.

      "pub fn new () -> RsvgPathBuilder" - this doesn't take a self parameter; you could call it a static method in languages that support classes. It is just a constructor.

      "pub fn move_to (&mut self, x: f64, y: f64)" - This one is a normal method, as it takes a self parameter. It also takes (x, y) double-precision floating point values for the move_to command. Note the "&mut self": this means that you must pass a mutable reference to an RsvgPathBuilder, since the method will change the builder's contents by adding a move_to command. It is a method that changes the state of the object, so it must take a mutable object.

      The other methods for path commands are similar to move_to. None of them have return values; if they did, they would have a "-> ReturnType" after the argument list.
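      For example, the rectangle we translated into path commands at the beginning of this post could be built up from the Rust side like this (just an illustration of the API above):

      let mut builder = RsvgPathBuilder::new ();
      builder.move_to (20.0, 30.0);
      builder.line_to (60.0, 30.0);
      builder.line_to (60.0, 80.0);
      builder.line_to (20.0, 80.0);
      builder.line_to (20.0, 30.0);
      builder.close_path ();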

      But that RsvgPathBuilder is a Rust object! And it still needs to be called from the C code in librsvg that hasn't been ported over to Rust yet. How do we do that?

      Exporting an API from Rust to C

      C doesn't know about objects with methods, even though you can fake them pretty well with structs and pointers to functions. Rust doesn't try to export structs with methods in a fancy way; you have to do that by hand. This is no harder than writing a GObject implementation in C, fortunately.

      Let's look at the C header file for the RsvgPathBuilder object, which is entirely implemented in Rust. The C header file is rsvg-path-builder.h. Here is part of that file:

      typedef struct _RsvgPathBuilder RsvgPathBuilder;
      
      G_GNUC_INTERNAL
      void rsvg_path_builder_move_to (RsvgPathBuilder *builder,
                                      double x,
                                      double y);
      G_GNUC_INTERNAL
      void rsvg_path_builder_line_to (RsvgPathBuilder *builder,
                                      double x,
                                      double y);

      Nothing special here. RsvgPathBuilder is an opaque struct; we declare it like that just so we can take a pointer to it as in the rsvg_path_builder_move_to() and rsvg_path_builder_line_to() functions.

      How about the Rust side of things? This is where it gets more interesting. This is part of path-builder.rs:

      extern crate cairo;                                                         // 1
      
      pub struct RsvgPathBuilder {                                                // 2
          path_segments: Vec<cairo::PathSegment>,
      }
      
      impl RsvgPathBuilder {                                                      // 3
          pub fn move_to (&mut self, x: f64, y: f64) {                            // 4
              self.path_segments.push (cairo::PathSegment::MoveTo ((x, y)));      // 5
          }
      }
      
      #[no_mangle]                                                                    // 6
      pub extern fn rsvg_path_builder_move_to (raw_builder: *mut RsvgPathBuilder,     // 7
                                               x: f64,
                                               y: f64) {
          assert! (!raw_builder.is_null ());                                          // 8
      
          let builder: &mut RsvgPathBuilder = unsafe { &mut (*raw_builder) };         // 9
      
          builder.move_to (x, y);                                                     // 10
      }

      Let's look at the numbered lines:

      1. We use the cairo crate from the excellent gtk-rs, the Rust binding for GTK+ and Cairo.

      2. This is our Rust structure. Its fields are not important for this discussion; they are just what the struct uses to store Cairo path commands.

      3. Now we begin implementing methods for that structure. These are Rust-side methods, not visible from C. In 4 and 5 we see the implementation of ::move_to(); it just creates a new cairo::PathSegment and pushes it to the vector of segments.

      6. The "#[no_mangle]" line instructs the Rust compiler to put the following function name in the .a library just as it is, without any name mangling. The function name without name mangling looks just like rsvg_path_builder_move_to to the linker, as we expect. A name-mangled Rust function looks like _ZN14rsvg_internals12path_builder15RsvgPathBuilder8curve_to17h1b8f49042ff19daaE — you can explore these with "objdump -x rust/target/debug/librsvg_internals.a"

      7. "pub extern fn rsvg_path_builder_move_to (raw_builder: *mut RsvgPathBuilder". This is a public function with an exported symbol in the .a file, not an internal one, as it will be called from the C code. And the "raw_builder: *mut RsvgPathBuilder" is Rust-ese for "a pointer to an RsvgPathBuilder with mutable contents". If this were only an accessor function, we would use a "*const RsvgPathBuilder" argument type.

      8. "assert! (!raw_builder.is_null ());". You can read this as "g_assert (raw_builder != NULL);" if you come from GObject land.

      9. "let builder: &mut RsvgPathBuilder = unsafe { &mut (*raw_builder) }". This declares a builder variable, of type &mut RsvgPathBuilder, which is a reference to a mutable path builder. The variable gets intialized with the result of "&mut (*raw_builder)": first we de-reference the raw_builder pointer with the asterisk, and convert that to a mutable reference with the &mut. De-referencing pointers that come from who-knows-where is an unsafe operation in Rust, as the compiler cannot guarantee their validity, and so we must wrap that operation with an unsafe{} block. This is like telling the compiler, "I acknowledge that this is potentially unsafe". Already this is better than life in C, where *every* de-reference is potentially dangerous; in Rust, only those that "bring in" pointers from the outside are potentially dangerous.

      10. Now we have a Rust-side reference to an RsvgPathBuilder object, and we can call the builder.move_to() method as in regular Rust code.

      Those are methods. And the constructor/destructor?

      Excellent question! We defined an absolutely conventional method, but we haven't created a Rust object and sent it over to the C world yet. And we haven't taken a Rust object from the C world and destroyed it when we are done with it.

      Construction

      Here is the C prototype for the constructor, exactly as you would expect from a GObject library:

      G_GNUC_INTERNAL
      RsvgPathBuilder *rsvg_path_builder_new (void);

      And here is the corresponding implementation in Rust:

      #[no_mangle]
      pub unsafe extern fn rsvg_path_builder_new () -> *mut RsvgPathBuilder {    // 1
          let builder = RsvgPathBuilder::new ();                                 // 2
      
          let boxed_builder = Box::new (builder);                                // 3
      
          Box::into_raw (boxed_builder)                                          // 4
      }

      1. Again, this is a public function with an exported symbol. However, this whole function is marked as unsafe since it returns a pointer, a *mut RsvgPathBuilder. To Rust this declaration means, "this pointer will be out of your control", hence the unsafe. With that we acknowledge our responsibility in handling the memory to which the pointer refers.

      2. We instantiate an RsvgPathBuilder with normal Rust code...

      3. ... and ensure that that object is put in the heap by Boxing it. This is a common operation in garbage-collected languages. Boxing is Rust's primitive for putting data in the program's heap; it allows the object in question to outlive the scope where it got created, i.e. the duration of the rsvg_path_builder_new() function.

      4. Finally, we call Box::into_raw() to ask Rust to give us a pointer to the contents of the box, i.e. the actual RsvgPathBuilder struct that lives there. This statement doesn't end in a semicolon, so it is the return value for the function.

      You could read this as "builder = g_new (...); initialize (builder); return builder;". Allocate something in the heap and initialize it, and return a pointer to it. This is exactly what the Rust code is doing.

      Destruction

      This is the C prototype for the destructor. This is not a reference-counted GObject; it is just an internal thing in librsvg, which does not need reference counting.

      G_GNUC_INTERNAL
      void rsvg_path_builder_destroy (RsvgPathBuilder *builder);

      And this is the implementation in Rust:

      #[no_mangle]
      pub unsafe extern fn rsvg_path_builder_destroy (raw_builder: *mut RsvgPathBuilder) {    // 1
          assert! (!raw_builder.is_null ());                                                  // 2
      
          let _ = Box::from_raw (raw_builder);                                                // 3
      }

      1. Same as before; we declare the whole function as public, exported, and unsafe since it takes a pointer from who-knows-where.

      2. Same as in the implementation for move_to(), we assert that we got passed a non-null pointer.

      3. Let's take this bit by bit. "Box::from_raw (raw_builder)" is the counterpart to Box::into_raw() from above; it takes a pointer and wraps it with a Box, which Rust knows how to de-reference into the actual object it contains. "let _ =" creates a variable binding in the current scope (the function we are implementing). We don't care about the variable's name, so we use _ as a default name. The variable is now bound to the Box holding our RsvgPathBuilder. The function terminates, the _ binding goes away, and Rust frees the memory for the RsvgPathBuilder. You can read this idiom as "g_free (builder)".

      Recapitulating

      Make your object. Box it. Take a pointer to it with Box::into_raw(), and send it off into the wild west. Bring back a pointer to your object. Unbox it with Box::from_raw(). Let it go out of scope if you want the object to be freed. Acknowledge your responsibilities with unsafe and that's all!
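      In code form, that whole round trip condenses to this (just the snippets from above, glued together):

      let boxed_builder = Box::new (RsvgPathBuilder::new ());         // make your object and box it
      let raw: *mut RsvgPathBuilder = Box::into_raw (boxed_builder);  // pointer for the wild west
      /* ... C code calls rsvg_path_builder_move_to (raw, ...), etc. ... */
      let _ = unsafe { Box::from_raw (raw) };                         // unbox; freed when it goes out of scope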

      Making the functions visible to C

      The code we just saw lives in path-builder.rs. By convention, the place where one actually exports the visible API from a Rust library is a file called lib.rs, and here is part of that file's contents in librsvg:

      pub use path_builder::{
          rsvg_path_builder_new,
          rsvg_path_builder_destroy,
          rsvg_path_builder_move_to,
          rsvg_path_builder_line_to,
          rsvg_path_builder_curve_to,
          rsvg_path_builder_close_path,
          rsvg_path_builder_arc,
          rsvg_path_builder_add_to_cairo_context
      };
      
      mod path_builder; 

      The mod path_builder indicates that lib.rs will use the path_builder sub-module. The pub use block exports the functions listed in it to the outside world. They will be visible as symbols in the .a file.

      The Cargo.toml (akin to a toplevel Makefile.am) for my librsvg's little sub-library has this bit:

      [lib]
      name = "rsvg_internals"
      crate-type = ["staticlib"]

      This means that the sub-library will be called librsvg_internals.a, and it is a static library. I will link that into my master librsvg.so. If this were a stand-alone shared library entirely implemented in Rust, I would use the "cdylib" crate type instead.

      Linking into the main .so

      In librsvg/Makefile.am I have a very simplistic scheme for building the librsvg_internals.a library with Rust's tools, and linking the result into the main librsvg.so:

      RUST_LIB = rust/target/debug/librsvg_internals.a
      
      .PHONY: rust/target/debug/librsvg_internals.a
      rust/target/debug/librsvg_internals.a:
      	cd rust && \
      	cargo build --verbose
      
      librsvg_@RSVG_API_MAJOR_VERSION@_la_CPPFLAGS = ...
      
      librsvg_@RSVG_API_MAJOR_VERSION@_la_CFLAGS = ...
      
      librsvg_@RSVG_API_MAJOR_VERSION@_la_LDFLAGS = ...
      
      librsvg_@RSVG_API_MAJOR_VERSION@_la_LIBADD = \
      	$(LIBRSVG_LIBS) 	\
      	$(LIBM)			\
      	$(RUST_LIB)

      This uses a .PHONY target for librsvg_internals.a, so "cargo build" will always be called on it. Cargo already takes care of dependency tracking; there is no need for make/automake to do that.

      I put the filename of my library in a RUST_LIB variable, which I then reference from LIBADD. This gets librsvg_internals.a linked into the final librsvg.so.

      When you run "cargo build" just like that, it creates a debug build in a target/debug subdirectory. I haven't looked for a way to make it play together with Automake when one calls "cargo build --release": that one puts things in a different directory, called target/release. Rust's tooling is more integrated that way, while in the Autotools world I'm expected to pass any CFLAGS for compilation by hand, depending on whether I'm doing a debug build or a release build. Any ideas for how to do this cleanly are appreciated.

      I don't have any code in configure.ac to actually detect if Rust is present. I'm just assuming that it is for now; fixes are appreciated :)

      Using the Rust functions from C

      There is no difference from what we had before! This comes from rsvg-shapes.c:

      static RsvgPathBuilder *
      _rsvg_node_poly_create_builder (const char *value,
                                      gboolean close_path)
      {
          RsvgPathBuilder *builder;
      
          ...
      
          builder = rsvg_path_builder_new ();
      
          rsvg_path_builder_move_to (builder, pointlist[0], pointlist[1]);
      
          ...
      
          return builder;
      }

      Note that we are calling rsvg_path_builder_new() and rsvg_path_builder_move_to(), and returning a pointer to an RsvgPathBuilder structure as usual. However, all of those are implemented in the Rust code. The C code has no idea!

      This is the magic of Rust: it allows you to move your C code bit by bit into a safe language. You don't have to do a whole rewrite in a single step. I don't know any other languages that let you do that.

    November 06, 2016

    darktable 2.2.0rc0 released

    we’re proud to announce the first release candidate for the upcoming 2.2 series of darktable, 2.2.0rc0!

    the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.0rc0.

    as always, please don’t use the autogenerated tarball provided by github, but only our tar.xz. the checksum is:

    a084ef367b1a1b189ad11a6300f7e0cadb36354d11bf0368de7048c6a0732229 darktable-2.2.0~rc0.tar.xz
    

    and the changelog as compared to 2.0.0 can be found below.

    • Well over 2 thousand commits since 2.0.0

    The Big Ones:

    Quite Interesting Changes:

    • Split the database into a library containing images and a general one with styles, presets and tags. That allows having access to those when for example running with a :memory: library
    • Support running on platforms other than x86 (64bit little-endian, currently ARM64 only) (https://www.darktable.org/2016/04/running-on-non-x86-platforms/)
    • darktable is now happy to use smaller stack sizes. That should allow using musl libc
    • Allow darktable-cli to work on directories
    • Allow to import/export tags from Lightroom keyword files
    • Allow using modifier keys to modify the step for sliders and curves. Defaults: Ctrl - x0.1; Shift - x10
    • Allow using the [keyboard] cursor keys to interact with sliders, comboboxes and curves; modifiers apply too
    • Support presets in “more modules” so you can quickly switch between your favorite sets of modules shown in the GUI
    • Add range operator and date compare to the collection module
    • Support the Exif date and time when importing photos from camera
    • Rudimentary CYGM and RGBE color filter array support
    • The preview pipe now runs the demosaic module too; its input is no longer pre-demosaiced, but just downscaled without demosaicing
    • Nicer web gallery exporter – now touch friendly!
    • OpenCL implementation of VNG/VNG4 demosaicing methods
    • OpenCL implementation of Markesteijn demosaicing method for X-Trans sensors
    • Filter out some useless EXIF tags when exporting; helps keep EXIF size under ~64Kb
    • OpenCL: properly discard CPU-based OpenCL devices. Fixes crashes on startup with some partially-working OpenCL implementations like pocl.
    • darktable-cli: do not even try to open display, we don’t need it.
    • Hotpixels module: make it actually work for X-Trans

    Some More Changes, Probably Not Complete:

    • Drop darktable-viewer tool in favor of slideshow view
    • Remove gnome keyring password backend, use libsecret instead
    • When using libsecret to store passwords then put them into the correct collection
    • Hint via window manager when import/export is done
    • Quick tagging searches anywhere, not just at the start of tags
    • The sidecar Xmp schema for history entries is now more consistent and less error prone
    • Rawspeed: fixes for building with libjpeg (as opposed to libjpeg-turbo)
    • Give the choice of equidistant and proportional feathering when using elliptical masks
    • Add geolocation to watermark variables
    • Fix some crashes with missing configured ICC profiles
    • Support greyscale color profiles
    • OSX: add trash support (thanks to Michael Kefeder for initial patch)
    • Attach Xmp data to EXR files
    • Several fixes for HighDPI displays
    • Use Pango for text layout, thus supporting RTL languages
    • Many bugs got fixed and some memory leaks plugged
    • The usermanual was updated to reflect the changes in the 2.2 series

    Changed Dependencies:

    • CMake 3.0 is now required.
    • In order to compile darktable you now need at least gcc-4.7+/clang-3.3+, but better use gcc-5.0+
    • Drop support for OS X 10.6
    • Bump required libexiv2 version up to 0.24
    • Bump GTK+ requirement to gtk-3.14. (because even Debian/stable has it)
    • Bump GLib requirement to glib-2.40.
    • Port to OpenJPEG2
    • SDL is no longer needed.

    A special note to all the darktable Fedora users: Fedora-provided darktable packages are intentionally built with Lua disabled. Thus, Lua scripting will not work. This breaks e.g. darktable-gimp integration. Please bug Fedora. In the meantime you could fix that by self-compiling darktable (pass -DDONT_USE_INTERNAL_LUA=OFF to cmake in order to enable use of the bundled Lua 5.2.4).

    Base Support

    • Canon EOS-1D X Mark II
    • Canon EOS 5D Mark IV
    • Canon EOS 80D
    • Canon EOS 1300D
    • Canon EOS Kiss X80
    • Canon EOS Rebel T6
    • Canon EOS M10
    • Canon PowerShot A720 IS (dng)
    • Canon PowerShot G7 X Mark II
    • Canon PowerShot G9 X
    • Canon PowerShot SD450 (dng)
    • Canon PowerShot SX130 IS (dng)
    • Canon PowerShot SX260 HS (dng)
    • Canon PowerShot SX510 HS (dng)
    • Fujifilm FinePix S100FS
    • Fujifilm X-Pro2
    • Fujifilm X-T2
    • Fujifilm X70
    • Fujifilm XQ2
    • GITUP GIT2 (chdk-a, chdk-b)
    • (most nikon cameras here are just fixes, and they were supported before already)
    • Nikon 1 AW1 (12bit-compressed)
    • Nikon 1 J1 (12bit-compressed)
    • Nikon 1 J2 (12bit-compressed)
    • Nikon 1 J3 (12bit-compressed)
    • Nikon 1 J4 (12bit-compressed)
    • Nikon 1 J5 (12bit-compressed, 12bit-uncompressed)
    • Nikon 1 S1 (12bit-compressed)
    • Nikon 1 S2 (12bit-compressed)
    • Nikon 1 V1 (12bit-compressed)
    • Nikon 1 V2 (12bit-compressed)
    • Nikon 1 V3 (12bit-compressed, 12bit-uncompressed)
    • Nikon Coolpix A (14bit-compressed)
    • Nikon Coolpix P330 (12bit-compressed)
    • Nikon Coolpix P340 (12bit-compressed, 12bit-uncompressed)
    • Nikon Coolpix P6000 (12bit-uncompressed)
    • Nikon Coolpix P7000 (12bit-uncompressed)
    • Nikon Coolpix P7100 (12bit-uncompressed)
    • Nikon Coolpix P7700 (12bit-compressed)
    • Nikon Coolpix P7800 (12bit-compressed)
    • Nikon D1 (12bit-uncompressed)
    • Nikon D100 (12bit-compressed, 12bit-uncompressed)
    • Nikon D1H (12bit-compressed, 12bit-uncompressed)
    • Nikon D1X (12bit-compressed, 12bit-uncompressed)
    • Nikon D200 (12bit-compressed, 12bit-uncompressed)
    • Nikon D2H (12bit-compressed, 12bit-uncompressed)
    • Nikon D2Hs (12bit-compressed, 12bit-uncompressed)
    • Nikon D2X (12bit-compressed, 12bit-uncompressed)
    • Nikon D3 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D300 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D3000 (12bit-compressed)
    • Nikon D300S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D3100 (12bit-compressed)
    • Nikon D3200 (12bit-compressed)
    • Nikon D3300 (12bit-compressed, 12bit-uncompressed)
    • Nikon D3400 (12bit-compressed)
    • Nikon D3S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D3X (14bit-compressed, 14bit-uncompressed)
    • Nikon D4 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D40 (12bit-compressed, 12bit-uncompressed)
    • Nikon D40X (12bit-compressed, 12bit-uncompressed)
    • Nikon D4S (14bit-compressed)
    • Nikon D5 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D50 (12bit-compressed)
    • Nikon D500 (14bit-compressed, 12bit-compressed)
    • Nikon D5000 (12bit-compressed, 12bit-uncompressed)
    • Nikon D5100 (14bit-compressed, 14bit-uncompressed)
    • Nikon D5200 (14bit-compressed)
    • Nikon D5300 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
    • Nikon D5500 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
    • Nikon D60 (12bit-compressed, 12bit-uncompressed)
    • Nikon D600 (14bit-compressed, 12bit-compressed)
    • Nikon D610 (14bit-compressed, 12bit-compressed)
    • Nikon D70 (12bit-compressed)
    • Nikon D700 (12bit-compressed, 12bit-uncompressed, 14bit-compressed)
    • Nikon D7000 (14bit-compressed, 12bit-compressed)
    • Nikon D70s (12bit-compressed)
    • Nikon D7100 (14bit-compressed, 12bit-compressed)
    • Nikon D80 (12bit-compressed, 12bit-uncompressed)
    • Nikon D800 (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D800E (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon D90 (12bit-compressed, 12bit-uncompressed)
    • Nikon Df (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
    • Nikon E5400 (12bit-uncompressed)
    • Nikon E5700 (12bit-uncompressed)
    • Olympus PEN-F
    • OnePlus One (dng)
    • Panasonic DMC-FZ150 (1:1, 16:9)
    • Panasonic DMC-FZ18 (16:9, 3:2)
    • Panasonic DMC-FZ300 (4:3)
    • Panasonic DMC-FZ50 (16:9, 3:2)
    • Panasonic DMC-G8 (4:3)
    • Panasonic DMC-G80 (4:3)
    • Panasonic DMC-GX80 (4:3)
    • Panasonic DMC-GX85 (4:3)
    • Panasonic DMC-LX3 (1:1)
    • Pentax K-1
    • Pentax K-70
    • Samsung GX20 (dng)
    • Sony DSC-F828
    • Sony DSC-RX10M3
    • Sony DSLR-A380
    • Sony ILCA-68
    • Sony ILCE-6300

    White Balance Presets

    • Canon EOS 1200D
    • Canon EOS Kiss X70
    • Canon EOS Rebel T5
    • Canon EOS 1300D
    • Canon EOS Kiss X80
    • Canon EOS Rebel T6
    • Canon EOS 5D Mark IV
    • Canon EOS 5DS
    • Canon EOS 5DS R
    • Canon EOS 750D
    • Canon EOS Kiss X8i
    • Canon EOS Rebel T6i
    • Canon EOS 760D
    • Canon EOS 8000D
    • Canon EOS Rebel T6s
    • Canon EOS 80D
    • Canon EOS M10
    • Canon EOS-1D X Mark II
    • Canon PowerShot G7 X Mark II
    • Fujifilm X-Pro2
    • Fujifilm X-T10
    • Fujifilm X100T
    • Fujifilm X20
    • Fujifilm X70
    • Nikon 1 V3
    • Nikon D5500
    • Olympus PEN-F
    • Pentax K-70
    • Pentax K-S1
    • Pentax K-S2
    • Sony ILCA-68
    • Sony ILCE-6300

    Noise Profiles

    • Canon EOS 5DS R
    • Canon EOS 80D
    • Canon PowerShot G15
    • Canon PowerShot S100
    • Canon PowerShot SX50 HS
    • Fujifilm X-T10
    • Fujifilm X-T2
    • Fujifilm X100T
    • Fujifilm X20
    • Fujifilm X70
    • Nikon 1 V3
    • Nikon D5500
    • Olympus E-PL6
    • Olympus PEN-F
    • Panasonic DMC-FZ1000
    • Panasonic DMC-GF7
    • Pentax K-S2
    • Ricoh GR
    • Sony DSC-RX10
    • Sony SLT-A37

    New Translations

    • Hebrew
    • Slovenian

    Updated Translations

    • Catalan
    • Czech
    • Danish
    • Dutch
    • French
    • German
    • Hungarian
    • Russian
    • Slovak
    • Spanish
    • Swedish

    Stellarium 0.12.7 discussion

    Thank you Alexander! This will keep a few old computers happy...

    G.

    Stellarium 0.12.7

    Stellarium 0.12.7 has been released today!

    Yes, the 0.12 series is an LTS branch for owners of old computers (ones with weak graphics cards). This release backports some features from the 1.x/0.15 series:
    - textures for deep-sky objects
    - star catalogues
    - fixes for MPC search tool in Solar System Editor plugin

    November 04, 2016

    Aligning Images with Hugin

    Easily process your bracketed exposures

    Hugin is an excellent tool for aligning and stitching images. In this article, we’ll focus on aligning a stack of images. Aligning a stack of images can be useful for achieving several results, such as:

    • bracketed exposures to make an HDR or fused exposure (using enfuse/enblend), or manually blending the images together in an image editor
    • photographs taken at different focal distances to extend the depth of field, which can be very useful when taking macros
    • photographs taken over a period of time to make a time-lapse movie

    For the example images included with this tutorial, the focal length is 12mm and the focal length multiplier is 1. A big thank you to @isaac for providing these images.

    You can download a zip file of all of the sample Beach Umbrellas images here:

    Download Outdoor_Beach_Umbrella.zip (62MB)

    Other sample images to try with this tutorial can be found at the end of the post.

    These instructions were adapted from the original forum post by @Carmelo_DrRaw; many thanks to him as well.

    We’re going to align these bracketed exposures so we can blend them:

    Blend Examples
    1. Select Interface > Expert to set the interface to Expert mode. This will expose all of the options offered by Hugin.

    2. Select the Add images… button to load your bracketed images. Select your images from the file chooser dialog and click Open.

    3. Set the optimal setting for aligning images:

      • Feature Matching Settings: Align image stack
      • Optimize Geometric: Custom parameters
      • Optimize Photometric: Low dynamic range
    4. Select the Optimizer tab.

    5. In the Image Orientation section, select the following variables for each image:

      • Roll
      • X (TrX) [horizontal translation]
      • Y (TrY) [vertical translation]

      You can Ctrl + left mouse click to enable or disable the variables.

      roll x y Hugin

      Note that you do not need to select the parameters for the anchor image:

      Hugin anchor image
    6. Select Optimize now! and wait for the software to finish the calculations. Select Yes to apply the changes.

    7. Select the Stitcher tab.

    8. Select the Calculate Field of View button.

    9. Select the Calculate Optimal Size button.

    10. Select the Fit Crop to Images button.

    11. To have the maximum number of post-processing options, select the following image outputs:

      • Panorama Outputs: Exposure fused from any arrangement
        • Format: TIFF
        • Compression: LZW
      • Panorama Outputs: High dynamic range
        • Format: EXR
      • Remapped Images: No exposure correction, low dynamic range

        Hugin Image Export
    12. Select the Stitch! button and choose a place to save the files. Since Hugin generates quite a few temporary images, save the PTO file in its own folder.

    Hugin will output the following images:

    • a tif file blended by enfuse/enblend
    • an HDR image in the EXR format
    • the individual images after remapping and without any exposure correction, which you can import into the GIMP as layers and blend manually

    You can see the result of the image blended with enblend/enfuse:

    Beach Umbrella Fused

    With the output images, you can:

    • edit the enfuse/enblend tif file further in the GIMP or RawTherapee
    • tone map the EXR file in LuminanceHDR
    • manually blend the remapped tif files in the GIMP or PhotoFlow

    Image files

    • Camera: Olympus E-M10 Mark II
    • Lens: Samyang 12mm F2.0

    Indoor_Guitars

    Download Indoor_Guitars.zip (75MB)

    • 5 brackets
    • ±0.3 EV increments
    • f5.6
    • focus at about 1m
    • center priority metering
    • exposed for guitars, bracketed for the sky, outdoor area, and indoor area
    • manual mode (shutter speed recorded in EXIF)
    • shot in burst mode, handheld

    Outdoor_Beach_Umbrella

    Download Outdoor_Beach_Umbrella.zip (62MB)

    • 3 brackets
    • ±1 EV increments
    • f11
    • focus at infinity
    • center priority metering
    • exposed for the water, bracketed for umbrella and sky
    • manual mode (shutter speed recorded in EXIF)
    • shot in burst mode, handheld

    Outdoor_Sunset_Over_Ocean

    Download Outdoor_Sunset_Over_Ocean.zip (60MB)

    • 3 brackets
    • ±1 EV increments
    • f11
    • focus at infinity
    • center priority metering
    • exposed for the darker clouds, bracketed for darker water and lighter sky areas and sun
    • manual mode (shutter speed recorded in EXIF)
    • shot in burst mode, handheld

    Licensing Information

    November 03, 2016

    Thu 2016/Nov/03

    • Refactoring C to make Rustification easier

      In SVG, the sizes and positions of objects are not just numeric values or pixel coordinates. You can actually specify physical units ("this rectangle is 5 cm wide"), or units relative to the page ("this circle's X position is at 50% of the page's width, i.e. centered"). Librsvg's machinery for dealing with this is in two parts: parsing a length string from an SVG file into an RsvgLength structure, and normalizing those lengths to final units for rendering.

      How RsvgLength is represented

      The RsvgLength structure used to look like this:

      typedef struct {
          double length;
          char factor;
      } RsvgLength;

      The parsing code would then do things like

      RsvgLength
      _rsvg_css_parse_length (const char *str)
      {
          RsvgLength out;

          out.length = ...; /* parse a number with strtod() and friends */

          if (next_token_is ("pt")) { /* points */
              out.length /= 72;
              out.factor = 'i';
          } else if (next_token_is ("in")) { /* inches */
              out.factor = 'i';
          } else if (next_token_is ("em")) { /* current font's Em size */
              out.factor = 'm';
          } else if (next_token_is ("%")) { /* percent */
              out.factor = 'p';
          } else {
              out.factor = '\0';
          }

          return out;
      }

      That is, it uses a char for the length.factor field, and then uses actual characters to indicate each different type. This is pretty horrible, so I changed it to use an enum:

      typedef enum {
          LENGTH_UNIT_DEFAULT,
          LENGTH_UNIT_PERCENT,
          LENGTH_UNIT_FONT_EM,
          LENGTH_UNIT_FONT_EX,
          LENGTH_UNIT_INCH,
          LENGTH_UNIT_RELATIVE_LARGER,
          LENGTH_UNIT_RELATIVE_SMALLER
      } LengthUnit;
      
      typedef struct {
          double length;
          LengthUnit unit;
      } RsvgLength;

      We now have a nice enum instead of chars, and the factor field has been renamed to unit. This ensures that code like

      if (length.factor == 'p')
          ...

      will no longer compile, and I can catch all the uses of "factor" easily. I replace them with unit as appropriate, and check that simply swapping each char for the corresponding enum value is in fact the right thing to do.

      When would it not be the right thing? I'm just replacing 'p' for LENGTH_UNIT_PERCENT, right? Well, it turns out that in a couple of hacky places in the rsvg-filters code, that code put an 'n' by hand in foo.factor to really mean, "this foo length value was not specified in the SVG data".

      That pattern seemed highly specific to the filters code, so instead of adding an extra LENGTH_UNIT_UNSPECIFIED, I added an extra field to the FilterPrimitive structures: when they used 'n' for primitive.foo.factor, instead they now have a primitive.foo_specified boolean flag, and the code checks for that instead of essentially monkey-patching the RsvgLength structure.
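
      In Rust, that "maybe not specified" state would not need a separate boolean at all; it maps naturally onto an Option. Here is a minimal sketch of the idea, with all names hypothetical (this is not librsvg's actual code):

      #[allow(dead_code)]
      #[derive(Clone, Copy, Debug)]
      enum LengthUnit {
          Default,
          Percent,
          FontEm,
          Inch,
      }

      #[derive(Clone, Copy, Debug)]
      struct RsvgLength {
          length: f64,
          unit: LengthUnit,
      }

      struct FilterPrimitive {
          /* None plays the role of the old 'n' hack: the SVG data
           * simply did not specify this length. */
          x: Option<RsvgLength>,
      }

      fn main () {
          let prim = FilterPrimitive { x: None };

          match prim.x {
              Some (l) => println! ("x = {} ({:?})", l.length, l.unit),
              None => println! ("x was not specified"),
          }
      }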

      Normalizing lengths for rendering

      At rendering time, these RsvgLength with their SVG-specific units need to be normalized to units that are relative to the current transformation matrix. There is a function used all over the code, called _rsvg_css_normalize_length(). This function gets called in an interesting way: one has to specify whether the length in question refers to a horizontal measure, or vertical, or both. For example, an RsvgNodeRect represents a rectangle shape, and it has x/y/w/h fields that are of type RsvgLength. When librsvg is rendering such an RsvgNodeRect, it does this:

      static void
      _rsvg_node_rect_draw (RsvgNodeRect *rect, RsvgDrawingCtx *ctx)
      {
          double x, y, w, h;

          x = _rsvg_css_normalize_length (&rect->x, ctx, 'h');
          y = _rsvg_css_normalize_length (&rect->y, ctx, 'v');

          w = fabs (_rsvg_css_normalize_length (&rect->w, ctx, 'h'));
          h = fabs (_rsvg_css_normalize_length (&rect->h, ctx, 'v'));

          ...
      }

      Again with the fucking chars. Those 'h' and 'v' parameters are because lengths in SVG need to be resolved relative to the width or the height (or both) of something. Sometimes that "something" is the size of the current object's parent group; sometimes it is the size of the whole page; sometimes it is the current font size. The _rsvg_css_normalize_length() function sees if it is dealing with a LENGTH_UNIT_PERCENT, for example, and will pick up page_size->width if the requested value is 'h'orizontal, or page_size->height if it is 'v'ertical. Of course I replaced all of those with an enum.

      This time I didn't find hacky code like the one that would stick an 'n' in the length.factor field. Instead, I found an actual bug: a horizontal unit was using 'w' for "width", instead of 'h' for "horizontal". If these had been enums since the beginning, this bug would probably not be there.
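
      As an aside, here is a sketch of what such a direction enum could look like on the Rust side (hypothetical names, with the normalization reduced to the percent case; "both" uses the diagonal-over-√2 rule from the SVG spec):

      #[derive(Clone, Copy)]
      enum LengthDir {
          Horizontal,
          Vertical,
          Both,
      }

      /* Stand-in for the percent case of _rsvg_css_normalize_length ().
       * Passing a bogus direction like 'w' is now a compile-time error. */
      fn normalize_percent (fraction: f64, dir: LengthDir, page_w: f64, page_h: f64) -> f64 {
          match dir {
              LengthDir::Horizontal => fraction * page_w,
              LengthDir::Vertical => fraction * page_h,
              LengthDir::Both => fraction * page_w.hypot (page_h) / std::f64::consts::SQRT_2,
          }
      }

      fn main () {
          println! ("{}", normalize_percent (0.5, LengthDir::Horizontal, 640.0, 480.0)); /* 320 */
      }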

      While I appreciate the terseness of 'h' instead of LENGTH_DIR_HORIZONTAL, maybe we can later refactor groups of coordinates into commonly-used patterns. For example, instead of

      patternx = _rsvg_css_normalize_length (&rsvg_pattern->x, ctx, LENGTH_DIR_HORIZONTAL);
      patterny = _rsvg_css_normalize_length (&rsvg_pattern->y, ctx, LENGTH_DIR_VERTICAL);
      patternw = _rsvg_css_normalize_length (&rsvg_pattern->width, ctx, LENGTH_DIR_HORIZONTAL);
      patternh = _rsvg_css_normalize_length (&rsvg_pattern->height, ctx, LENGTH_DIR_VERTICAL);

      perhaps we can have

      normalize_lengths_for_x_y_w_h (ctx,
                                     &rsvg_pattern->x,
                                     &rsvg_pattern->y,
                                     &rsvg_pattern->width,
                                     &rsvg_pattern->height);

      since those x/y/width/height groups get used all over the place.

      And in Rust?

      This is all so that when that code gets ported to Rust, it will be easier. Librsvg is old code, and it has a bunch of C-isms that either don't translate well to Rust, or are kind of horrible by themselves; turning them into more robust C first makes the corresponding Rustification obvious.

    Searching in GNOME Software

    I’ve spent a few days profiling GNOME Software on ARM, mostly out of curiosity but also to help our friends at Endless. I’ve merged a few patches that make the existing --profile code more useful for profiling startup speed. Already there have been some big gains, over 200ms of startup time and 12MB of RSS, but there’s plenty more that we want to fix to make GNOME Software run really nicely on resource constrained devices.

    One of the biggest delays is constructing the search token cache at startup. This is where we look at all the fields of the .desktop files, the AppData files and the AppStream files and split them in a UTF8-sane way into search tokens, adding them into a big hash table after stemming them. We do it with 4 threads by default as it’s trivially parallelizable. With the search cache, when we search we just ask all the applications in the store “do you have this search term” and if so it gets added to the search results and ordered according to how good the match is. This takes 225ms on my super-fast Intel laptop (and much longer on ARM), and this happens automatically the very first time you search for anything in GNOME Software.

    At the moment we add (for each locale, including fallbacks) the package name, the app ID, the app name, the app’s single-line description, the app keywords and the application long description. The latter is the multi-paragraph long description that’s typically prose. We spend 90% of the time loading the token cache just splitting and adding the words in the description. As the description is prose, we have to ignore quite a few words, e.g. "and", "the", "is" and "can" are some of the most frequent, useless words. Given the nature of the text itself (long non-technical prose), it doesn’t actually add many useful keywords to the search cache, and the ones it does add are treated with such low priority that other, more important matches are ordered before them.
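
    To make that concrete, here is a toy sketch of the token-cache idea in Rust (gnome-software itself is C; stemming and stop-word removal are reduced to a crude length check here):

    use std::collections::{HashMap, HashSet};

    /* Split a searchable field into lowercase tokens. */
    fn tokenize(text: &str) -> HashSet<String> {
        text.split(|c: char| !c.is_alphanumeric())
            .filter(|w| w.len() > 2) /* crude stand-in for dropping "and", "the", ... */
            .map(|w| w.to_lowercase())
            .collect()
    }

    fn main() {
        /* Build the token cache once, up front. */
        let mut cache: HashMap<&str, HashSet<String>> = HashMap::new();
        cache.insert("gimp.desktop", tokenize("GIMP Image Editor photo retouching"));
        cache.insert("gedit.desktop", tokenize("gedit Text Editor plain text"));

        /* A search is then just a set lookup per application. */
        let term = "photo";
        for (app, tokens) in &cache {
            if tokens.contains(term) {
                println!("match: {}", app);
            }
        }
    }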

    My proposal: continue to consume everything else for the search cache, but drop the description. This means we start way quicker and use less memory, but it does require that upstreams actually add some [localized] Keywords=foo;bar;baz in either the desktop file or <keywords> in the AppData file. At the moment most do, especially after I sent ~160 emails to the maintainers that didn’t have any keywords defined in the Fedora 25 Alpha, so I think it’s fairly safe at this point. Comments?

    November 02, 2016

    Casa Natureza

    USA, 2016. In design. A modernist house in the great Brazilian tradition of the modernist house ideal, which...

    The Royal Photographic Society Journal

    Who let us in here?

    The Journal of the Photographic Society is the journal of one of the oldest photographic societies in the world: the Royal Photographic Society. First published in 1853, the RPS Journal is the oldest photographic periodical in the world (just edging out the British Journal of Photography by about a year).

    So you can imagine my doubt when confronted with an email about using some material from pixls.us for their latest issue…


    If the name sounds familiar, it may be from a recent post by Joe McNally, who is featured prominently in the September 2016 issue. He was also just inducted as a fellow of the society!

    RPS Journal 2016-09 Cover

    It turns out my initial doubts were completely unfounded, and they really wanted to run a page based on one of our tutorials. The editors liked the Open Source Portrait tutorial, in particular the section on using Wavelet Decompose to touch up the skin tones:

    RPS Journal 2016-11 PD Yay Mairi!

    How cool is that? I actually searched the archive and the only other mention I can find of GIMP (or any other F/OSS) is from a “Step By Step” article written by Peter Gawthrop (Vol. 149, February 2009). I think it’s pretty awesome that we can get a little more exposure for Free Software alternatives, especially in more mainstream publications and to a broader audience!

    November 01, 2016

    Tue 2016/Nov/01

    • Bézier curves, markers, and SVG's concept of directionality

      SVG reference image        with markers

      In the first post in this series I introduced SVG markers, which let you put symbols along the nodes of a path. You can use them to draw arrows (arrowhead as an end marker on a line), points in a chart, and other visual effects.

      In that post and in the second one, I started porting some of the code in librsvg that renders SVG markers from C to Rust. So far I've focused on the code and how it looks in Rust vs. C, and on some initial refactorings to make it feel more Rusty. I have casually mentioned Bézier segments and their tangents, and you may have an idea that SVG paths are composed of Bézier curves and straight lines, but I haven't explained what this code is really about. Why not simply walk over all the nodes in the path, and slap a marker at each one?

      Aragorn        does not simply walk a degenerate path

      (Sorry. Couldn't resist.)

      SVG paths

      If you open an illustration program like Inkscape, you can draw paths based on Bézier curves.

      Path of Bézier        segments, nodes, and control points

      Each segment is a cubic Bézier curve and can be considered independently. Let's focus on the middle segment there.

      Single Bézier        segment with control points

      At each endpoint, the tangent direction of the curve is determined by the corresponding control point. For example, at endpoint 1 the curve goes out in the direction of control point 2, and at endpoint 4 the curve comes in from the direction of control point 3. The further away the control points are from the endpoints, the larger "pull" they will have on the curve.

      Tangents at the endpoints

      Let's consider the tangent direction of the curve at the endpoints. What cases do we have, especially when some of the control points are in the same place as the endpoints?

      Directions at the endpoints of Bézier segments

      When the endpoints and the control points are all in different places (upper-left case), the tangents are easy to compute. We just subtract the vectors P2-P1 and P4-P3, respectively.

      When just one of the control points coincides with one of the endpoints (second and third cases, upper row), the "missing" tangent just goes to the other control point.

      In the middle row, we have the cases where both endpoints are coincident. If the control points are both in different places, we just have a curve that loops back. If just one of the control points coincides with the endpoints, the "curve" turns into a line that loops back, and its direction is towards the stray control point.

      Finally, if both endpoints and both control points are in the same place, the curve is just a degenerate point, and it has no tangent directions.
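
      Those rules are simple enough to sketch in code. Here is a hypothetical Rust version for the outgoing tangent at the first endpoint (the incoming tangent at the last endpoint is symmetric; this is not librsvg's actual code):

      /* Direction in which a cubic Bézier segment leaves its first
       * endpoint p1, using the fallback rules described above: try
       * the near control point, then the far one, then the other
       * endpoint. Returns None for a fully degenerate segment. */
      fn outgoing_tangent (p1: (f64, f64), p2: (f64, f64),
                           p3: (f64, f64), p4: (f64, f64)) -> Option<(f64, f64)> {
          for &(x, y) in [p2, p3, p4].iter () {
              let (vx, vy) = (x - p1.0, y - p1.1);
              if vx != 0.0 || vy != 0.0 {
                  return Some ((vx, vy));
              }
          }
          None /* all four points coincide: a lone point, no tangent */
      }

      fn main () {
          /* Control point 2 coincides with endpoint 1, so the tangent
           * falls back to control point 3. */
          println! ("{:?}", outgoing_tangent ((0.0, 0.0), (0.0, 0.0), (1.0, 2.0), (3.0, 3.0)));
      }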

      Here we only care about the direction of the curve at the endpoints; we don't care about the magnitude of the tangent vectors. As a side note, Bézier curves have the nice property that they fit completely inside the convex hull of their control points: if you draw a non-crossing quadrilateral using the control points, then the curve fits completely inside that quadrilateral.

      Convex hulls of Bézier segments

      How SVG represents paths

      SVG uses a representation for paths that is similar to that of PDF and its precursor, the PostScript language for printers. There is a pen with a current point. The pen can move in a line or in a curve to another point while drawing, or it can lift up and move to another point without drawing.

      To create a path, you specify commands. These are the four basic commands:

      • move_to (x, y) - Change the pen's current point without drawing, and begin a new subpath.
      • line_to (x, y) - Draw a straight line from the current point to another point.
      • curve_to (x2, y2, x3, y3, x4, y4) - Draw a Bézier curve from the current point to (x4, y4), with the control points (x2, y2) and (x3, y3).
      • close_path - Draw a line from the current point back to the beginning of the current subpath (i.e. the position of the last move_to command).

      For example, this sequence of commands draws a closed square path:

      move_to (0, 0)
      line_to (10, 0)
      line_to (10, 10)
      line_to (0, 10)
      close_path

      If we had omitted the close_path, we would have an open C shape.
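
      As an aside, those four commands map nicely onto an enum; a hypothetical Rust sketch (not librsvg's actual representation):

      #[allow(dead_code)]
      enum PathCommand {
          MoveTo (f64, f64),
          LineTo (f64, f64),
          CurveTo (f64, f64, f64, f64, f64, f64),
          ClosePath,
      }

      fn main () {
          /* The closed square path from above. */
          let square = [
              PathCommand::MoveTo (0.0, 0.0),
              PathCommand::LineTo (10.0, 0.0),
              PathCommand::LineTo (10.0, 10.0),
              PathCommand::LineTo (0.0, 10.0),
              PathCommand::ClosePath,
          ];
          println! ("{} commands", square.len ());
      }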

      SVG paths provide secondary commands that are built upon those basic ones: commands to draw horizontal or vertical lines without specifying both coordinates, commands to draw quadratic curves instead of cubic ones, and commands to draw elliptical or circular arcs. All of these can be built from, or approximated from, straight lines or cubic Bézier curves.

      Let's say you have a path with two disconnected sections: move_to (0, 0), line_to (10, 0), line_to (10, 10), move_to (20, 20), line_to (30, 20).

      Bézier path        with two open subpaths

      These two sections are called subpaths. A subpath begins with a move_to command. If there were a close_path command somewhere, it would draw a line from the current point back to where the current subpath started, i.e. to the location of the last move_to command.

      Markers at nodes

      Repeating ourselves a bit: for each path, SVG lets you define markers. A marker is a symbol that can be automatically placed at each node along a path. For example, here is a path composed of line_to segments, and which has an arrow-shaped marker at each node:

      Bézier path with        markers

      Here, the arrow-shaped marker is defined to be orientable. Its anchor point is at the V shaped concavity of the arrow. SVG specifies the angle at which orientable markers should be placed: given a node, the angle of its marker is the average of the incoming and outgoing angles of the path segments that meet at that node. For example, at node 5 above, the incoming line comes in at 0° (Eastwards) and the outgoing line goes out at 90° (Southwards) — so the arrow marker at 5 is rotated so it points at 45° (South-East).

      In the following picture we see the angle of each marker as the bisection of the incoming and outgoing angles of the respective nodes:

      Bézier path with        markers and directions

      The nodes at the beginning and end of subpaths only have one segment that meets that node. So, the marker uses that segment's angle. For example, at node 6 the only incoming segment goes Southward, so the marker points South.
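
      In code, a robust way to bisect the two directions is to add the unit vectors and take the angle of the sum; this avoids the wrap-around problems of naively averaging raw angles. A hedged sketch (hypothetical names, not librsvg's code):

      /* Marker angle at an interior node, given the incoming and
       * outgoing direction vectors of the segments that meet there. */
      fn marker_angle (incoming: (f64, f64), outgoing: (f64, f64)) -> f64 {
          let ilen = incoming.0.hypot (incoming.1);
          let olen = outgoing.0.hypot (outgoing.1);
          let (ix, iy) = (incoming.0 / ilen, incoming.1 / ilen);
          let (ox, oy) = (outgoing.0 / olen, outgoing.1 / olen);
          (iy + oy).atan2 (ix + ox)
      }

      fn main () {
          /* Node 5 above: incoming East (1, 0), outgoing South (0, 1)
           * in SVG's y-grows-down coordinates; the marker points at 45°. */
          let angle = marker_angle ((1.0, 0.0), (0.0, 1.0));
          println! ("{} degrees", angle.to_degrees ());
      }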

      Converting paths into Segments

      The path above is simple to define. The path definition is

      move_to (1)
      line_to (2)
      line_to (3)
      line_to (4)
      line_to (5)
      line_to (6)

      (Imagine that instead of those numbers, which are just for illustration purposes, we include actual x/y coordinates.)

      When librsvg turns that path into Segments, they more or less look like

      line from 1, outgoing angle East,       to 2, incoming angle East
      line from 2, outgoing angle South-East, to 3, incoming angle South-East
      line from 3, outgoing angle North-East, to 4, incoming angle North-East
      line from 4, outgoing angle East,       to 5, incoming angle East
      line from 5, outgoing angle South,      to 6, incoming angle South

      Obviously, straight line segments (i.e. from a line_to) have the same angles at the start and the end of each segment. In contrast, curve_to segments can have different tangent angles at each end. For example, if we had a single curved segment like this:

      move_to (1)
      curve_to (2, 3, 4)

      Bézier curve with directions

      Then the corresponding single Segment would look like this:

      curve from 1, outgoing angle North, to 4, incoming angle South-East

      Now you know what librsvg's function path_to_segments() does! It turns a sequence of move_to / line_to / curve_to commands into a sequence of segments, each one with angles at the start/end nodes of the segment.

      Paths with zero-length segments

      Let's go back to our path made up of line segments, the one that looks like this:

      Bézier path with        markers

      However, imagine that for some reason the path contains duplicated, contiguous nodes. If we specified the path as

      move_to (1)
      line_to (2)
      line_to (3)
      line_to (3)
      line_to (3)
      line_to (3)
      line_to (4)
      line_to (5)
      line_to (6)

      Then our rendered path would look the same, with duplicated nodes at 3:

      Bézier path with        duplicated nodes

      But now when librsvg turns that into Segments, they would look like

        line from 1, outgoing angle East,       to 2, incoming angle East
        line from 2, outgoing angle South-East, to 3, incoming angle South-East
        line from 3, to 3, no angles since this is a zero-length segment
      * line from 3, to 3, no angles since this is a zero-length segment
        line from 3, outgoing angle North-East, to 4, incoming angle North-East
        line from 4, outgoing angle East,       to 5, incoming angle East
        line from 5, outgoing angle South,      to 6, incoming angle South

      When librsvg has to draw the markers for this path, it has to compute the marker's angle at each node. However, in the starting node for the segment marked with a (*) above, there is no angle! In this case, the SVG spec says that you have to walk the path backwards until you find a segment which has an angle, and then forwards until you find another segment with an angle, and then take their average angles and use them for the (*) node. Visually this makes sense: you don't see where there are contiguous duplicated nodes, but you certainly see lines coming out of that vertex. The algorithm finds those lines and takes their average angles for the marker.

      Now you know where our exotic names find_incoming_directionality_backwards() and find_outgoing_directionality_forwards() come from!

      Next up: refactoring C to make Rustification easier.

    October 31, 2016

    Flatpak cross-compilation support: Epilogue

    You might remember my attempts at getting an easy-to-use cross-compilation setup for ARM applications on my x86-64 desktop machine.

    With Fedora 25 approaching, I'm happy to say that the necessary changes to integrate the feature have now rolled into Fedora 25.

    For example, to compile the GNU Hello Flatpak for ARM, you would run:

    $ flatpak install gnome org.freedesktop.Platform/arm org.freedesktop.Sdk/arm
    Installing: org.freedesktop.Platform/arm/1.4 from gnome
    [...]
    $ sudo dnf install -y qemu-user-static
    [...]
    $ TARGET=arm ./build.sh

    For other applications, add the --arch=arm argument to the flatpak-builder command-line.

    This example also works for 64-bit ARM with the architecture name aarch64.

    October 28, 2016

    Fri 2016/Oct/28

    • Porting a few C functions to Rust

      Last time I showed you my beginnings of porting parts of Librsvg to Rust. In this post I'll do an annotated porting of a few functions.

      Disclaimers: I'm learning Rust as I go. I don't know all the borrowing/lending rules; "Rust means never having to close a socket" is a very enlightening article, although it doesn't tell the whole story. I don't know the Rust idioms that would make my code prettier. I am trying to refactor things to be prettier after the initial pass of C-to-Rust. If you know an idiom that would be useful, please mail me!

      So, let's continue with the code to render SVG markers, as before. I'll start with this function:

      /* In C */
      
      static gboolean
      points_equal (double x1, double y1, double x2, double y2)
      {
          return DOUBLE_EQUALS (x1, x2) && DOUBLE_EQUALS (y1, y2);
      }

      I know that Rust supports tuples, and pretty structs, and everything. But so far, the refactoring I've done hasn't led me to really want to use them for this particular part of the library. Maybe later! Anyway, this translates easily to Rust; I already had a function called double_equals() from the last time. The result is as follows:

      /* In Rust */
      
      fn points_equal (x1: f64, y1: f64, x2: f64, y2: f64) -> bool {
          double_equals (x1, x2) && double_equals (y1, y2)
      }

      Pro-tip: text editor macros work very well for shuffling around the "double x1" into "x1: f64" :)

      Remove the return and the semicolon at the end of the line so that the function returns the value of the && expression. I could leave the return in there, but not having it is more Rusty, perhaps. (Rust also has a return keyword, which I think they keep around to allow early exits from functions.)

      This function doesn't get used yet, so the existing tests don't catch it. The first time I ran the Rust compiler on it, it complained of a type mismatch: I had put f64 instead of bool for the return type, which is of course wrong. Oops. Fix it, test again that it builds, done.

      Okay, next!

      But first, a note about how the original Segment struct in C evolved after refactoring in Rust.

      Original in C:

      typedef struct {
          /* If is_degenerate is true,
           * only (p1x, p1y) are valid.
           * If false, all are valid.
           */
          gboolean is_degenerate;
          double p1x, p1y;
          double p2x, p2y;
          double p3x, p3y;
          double p4x, p4y;
      } Segment;

      Straight port to Rust:

      struct Segment {
          /* If is_degenerate is true,
           * only (p1x, p1y) are valid.
           * If false, all are valid.
           */
          is_degenerate: bool,
          p1x: f64, p1y: f64,
          p2x: f64, p2y: f64,
          p3x: f64, p3y: f64,
          p4x: f64, p4y: f64
      }

      After refactoring:

      pub enum Segment {
          Degenerate { // A single lone point
              x: f64,
              y: f64
          },
      
          LineOrCurve {
              x1: f64, y1: f64,
              x2: f64, y2: f64,
              x3: f64, y3: f64,
              x4: f64, y4: f64
          },
      }

      In the C version, and in the original Rust version, I had to be careful to only access the x1/y1 fields if is_degenerate==true. Rust has a very convenient "enum" type, which can work pretty much as a normal C enum, or as a tagged union, as shown here. Rust will not let you access fields that don't correspond to the current tag value of the enum. (I'm not sure if "tag value" is the right way to call it — in any case, if a segment is Segment::Degenerate, the compiler only lets you access the x/y fields; if it is Segment::LineOrCurve, it only lets you access x1/y1/x2/y2/etc.) We'll see the match statement below, which is how enum access is done.

      Next!

      Original in C:

      /* A segment is zero length if it is degenerate, or if all four control points
       * coincide (the first and last control points may coincide, but the others may
       * define a loop - thus nonzero length)
       */
      static gboolean
      is_zero_length_segment (Segment *segment)
      {
          double p1x, p1y;
          double p2x, p2y;
          double p3x, p3y;
          double p4x, p4y;
      
          if (segment->is_degenerate)
              return TRUE;
      
          p1x = segment->p1x;
          p1y = segment->p1y;
      
          p2x = segment->p2x;
          p2y = segment->p2y;
      
          p3x = segment->p3x;
          p3y = segment->p3y;
      
          p4x = segment->p4x;
          p4y = segment->p4y;
      
          return (points_equal (p1x, p1y, p2x, p2y)
                  && points_equal (p1x, p1y, p3x, p3y)
                  && points_equal (p1x, p1y, p4x, p4y));
      }

      Straight port to Rust:

      /* A segment is zero length if it is degenerate, or if all four control points
       * coincide (the first and last control points may coincide, but the others may
       * define a loop - thus nonzero length)
       */
      fn is_zero_length_segment (segment: Segment) -> bool {
          match segment {
              Segment::Degenerate { .. } => { true },
      
              Segment::LineOrCurve { x1, y1, x2, y2, x3, y3, x4, y4 } => {
                  (points_equal (x1, y1, x2, y2)
                   && points_equal (x1, y1, x3, y3)
                   && points_equal (x1, y1, x4, y4))
              }
          }
      }

      To avoid a lot of "segment->this, segment->that, segment->somethingelse", the C version copies the fields from the struct into temporary variables and calls points_equal() with them. The Rust version doesn't need to do this, since we have a very convenient match statement.

      Rust really wants you to handle all the cases that your enum may be in. You cannot do something like "if segment == Segment::Degenerate", because you may forget an "else if" for some case. Instead, the match statement is much more powerful. It is really a pattern-matching engine, and for enums it lets you consider each case separately. The fields inside each case get unpacked like in "Segment::LineOrCurve { x1, y1, ... }" so you can use them easily, and only within that case. In the Degenerate case, I don't use the x/y fields, so I write "Segment::Degenerate { .. }" to avoid having unused variables.

      I'm sure I'll need to change something in the prototype of this function. The plain "segment: Segment" argument in Rust means that the is_zero_length_segment() function will take ownership of the segment. I'll be passing it from an array, but I don't know what shape that code will take yet, so I'll leave it like this for now and change it later.

      This function could use a little test, couldn't it? Just to guard from messing up the coordinate names later if I decide to refactor it with tuples for points, or something. Fortunately, the tests are really easy to set up in Rust:

          #[test]
          fn degenerate_segment_is_zero_length () {
              assert! (super::is_zero_length_segment (degenerate (1.0, 2.0)));
          }
      
          #[test]
          fn line_segment_is_nonzero_length () {
              assert! (!super::is_zero_length_segment (line (1.0, 2.0, 3.0, 4.0)));
          }
      
          #[test]
          fn line_segment_with_coincident_ends_is_zero_length () {
              assert! (super::is_zero_length_segment (line (1.0, 2.0, 1.0, 2.0)));
          }
      
          #[test]
          fn curves_with_loops_and_coincident_ends_are_nonzero_length () {
              assert! (!super::is_zero_length_segment (curve (1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 1.0, 2.0)));
              assert! (!super::is_zero_length_segment (curve (1.0, 2.0, 1.0, 2.0, 3.0, 4.0, 1.0, 2.0)));
              assert! (!super::is_zero_length_segment (curve (1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 1.0, 2.0)));
          }
      
          #[test]
          fn curve_with_coincident_control_points_is_zero_length () {
              assert! (super::is_zero_length_segment (curve (1.0, 2.0, 1.0, 2.0, 1.0, 2.0, 1.0, 2.0)));
          }

      The degenerate(), line(), and curve() utility functions are just to create the appropriate Segment::Degenerate { x, y } without so much typing, and to make the tests more legible.

      After running cargo test, all the tests pass. Yay! And we didn't have to fuck around with relinking a version specifically for testing, or messing with making static functions available to tests, like we would have had to do in C. Double yay!

      Next!

      Original in C:

      static gboolean
      find_incoming_directionality_backwards (Segment *segments, int num_segments, int start_index, double *vx, double *vy)
      {
          int j;
          gboolean found;
      
          /* "go backwards ... within the current subpath until ... segment which has directionality at its end point" */
      
          found = FALSE;
      
          for (j = start_index; j >= 0; j--) {                                                                 /* 1 */
              if (segments[j].is_degenerate)
                  break; /* reached the beginning of the subpath as we ran into a standalone point */
              else {                                                                                           /* 2 */
                  if (is_zero_length_segment (&segments[j]))                                                   /* 3 */
                      continue;
                  else {
                      found = TRUE;
                      break;
                  }
              }
          }
      
          if (found) {                                                                                         /* 4 */
              g_assert (j >= 0);
              *vx = segments[j].p4x - segments[j].p3x;
              *vy = segments[j].p4y - segments[j].p3y;
              return TRUE;
          } else {
              *vx = 0.0;
              *vy = 0.0;
              return FALSE;
          }
      }

      Straight port to Rust:

      fn find_incoming_directionality_backwards (segments: Vec<Segment>, start_index: usize) -> (bool, f64, f64)
      {
          let mut found: bool;
          let mut vx: f64;
          let mut vy: f64;
      
          /* "go backwards ... within the current subpath until ... segment which has directionality at its end point" */
      
          found = false;
          vx = 0.0;
          vy = 0.0;
      
          for j in (0 .. start_index + 1).rev () {                                                            /* 1 */
              match segments[j] {
                  Segment::Degenerate { .. } => {
                      break; /* reached the beginning of the subpath as we ran into a standalone point */
                  },
      
                  Segment::LineOrCurve { x3, y3, x4, y4, .. } => {                                            /* 2 */
                      if is_zero_length_segment (&segments[j]) {                                              /* 3 */
                          continue;
                      } else {
                          vx = x4 - x3;
                          vy = y4 - y3;
                          found = true;
                          break;
                      }
                  }
              }
          }
      
          if found {                                                                                           /* 4 */
              (true, vx, vy)
          } else {
              (false, 0.0, 0.0)
          }
      }

      In reality this function returns three values: whether a directionality was found, and if so, the vx/vy components of the direction vector. In C the prototype is like "bool myfunc (..., out vx, out vy)": return the boolean conventionally, and get a reference to the place where we should store the other return values. In Rust, it is simple to just return a 3-tuple.

      (Keen-eyed rustaceans will detect a code smell in the bool-plus-extra-crap return value, and tell me that I could use an Option instead. We'll see what the code wants to look like during the final refactoring!)

      With this code, I need temporary variables vx/vy to store the result. I'll refactor it to return immediately without needing temporaries or a found variable.

      1. We are looking backwards in the array of segments, starting at a specific element, until we find one that satisfies a certain condition. Looping backwards in C in the way done here has the peril that your loop variable needs to be signed, even though array indexes are unsigned: j will go from start_index down to -1, but the loop only runs while j >= 0.

      Rust provides a somewhat strange idiom for backwards numeric ranges. A normal range looks like "0 .. n" and that means the half-open range [0, n). So if we want to count from start_index down to 0, inclusive, we need to rev()erse the half-open range [0, start_index + 1), and that whole thing is "(0 .. start_index + 1).rev ()".
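
      For example, to count from 3 down to 0 inclusive:

      fn main () {
          for j in (0 .. 4).rev () {
              println! ("{}", j); /* prints 3, 2, 1, 0 */
          }
      }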

      2. Handling the degenerate case is trivial. Handling the other case is a bit more involved in Rust. We compute the vx/vy values here, instead of after the loop has exited, as at that time the j loop counter will be out of scope. This ugliness will go away during refactoring.

      However, note the match pattern "Segment::LineOrCurve { x3, y3, x4, y4, .. }". This means, "I am only interested in the x3/y3/x4/y4 fields of the enum"; the .. indicates to ignore the others.

      3. Note the ampersand in "is_zero_length_segment (&segments[j])". When I first wrote this, I didn't include the & sign, and Rust complained that it couldn't pass segments[j] to the function because the function would take ownership of that value, while in fact the value is owned by the array. I need to declare the function as taking a reference to a segment ("a pointer"), and I need to call the function by actually passing a reference to the segment, with & to take the "address" of the segment like in C. And if you look at the C version, it also says "&segments[j]"! So, the function now looks like this:

      fn is_zero_length_segment (segment: &Segment) -> bool {
          match *segment {
              ...

      Which means, the function takes a reference to a Segment, and when we want to use it, we de-reference it as *segment.

      While my C-oriented brain interprets this as references and dereferencing pointers, Rust wants me to think in the higher-level terms. A function will take ownership of an argument if it is declared like fn foo(x: Bar), and the caller will lose ownership of what it passed in x. If I want the caller to keep owning the value, I can "lend" it to the function by passing a reference to it, not the actual value. And I can make the function "borrow" the value without taking ownership, because references are not owned; they are just pointers to values.

      It turns out that the three chapters of the Rust book that deal with this are very clear and understandable, and I was irrationally scared of reading them. Go through them in order: Ownership, References and borrowing, Lifetimes. I haven't used the lifetime syntax yet, but it lets you solve the problem of dangling pointers inside live structs.

      4. At the end of the function, we build our 3-tuple result and return it.

      And what if we remove the ugliness from a straight C-to-Rust port? It starts looking like this:

      fn find_incoming_directionality_backwards (segments: Vec<Segment>, start_index: usize) -> (bool, f64, f64)
      {
          /* "go backwards ... within the current subpath until ... segment which has directionality at its end point" */
      
          for j in (0 .. start_index + 1).rev () {
              match segments[j] {
                  Segment::Degenerate { .. } => {
                      return (false, 0.0, 0.0); /* reached the beginning of the subpath as we ran into a standalone point */
                  },
      
                  Segment::LineOrCurve { x3, y3, x4, y4, .. } => {
                      if is_zero_length_segment (&segments[j]) {
                          continue;
                      } else {
                          return (true, x4 - x3, y4 - y3);
                      }
                  }
              }
          }
      
          (false, 0.0, 0.0)
      }

      We removed the auxiliary variables by returning early from within the loop. I could remove the continue by negating the result of is_zero_length_segment() and returning the sought value in that case, but in my brain it is easier to read, "is this a zero length segment, i.e. that segment has no directionality? If yes, continue to the previous one, otherwise return the segment's outgoing tangent vector".

      But what if is_zero_length_segment() is the wrong concept? My calling function is called find_incoming_directionality_backwards(): it looks for segments in the array until it finds one with directionality. It happens to know that a zero-length segment has no directionality, but it doesn't really care about the length of segments. What if we called the helper function get_segment_directionality() and it returned false when the segment has none, and a vector otherwise?

      Rust provides the Option pattern just for this. And I'm itching to show you some diagrams of Bézier segments, their convex hulls, and what the goddamn tangents and directionalities actually mean graphically.
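
      A sketch of what that could look like, reusing the Segment enum and points_equal () from above (hedged: the final refactored code may well differ):

      fn get_segment_directionality (segment: &Segment) -> Option<(f64, f64)> {
          match *segment {
              Segment::Degenerate { .. } => None,

              Segment::LineOrCurve { x1, y1, x2, y2, x3, y3, x4, y4 } => {
                  if points_equal (x1, y1, x2, y2)
                      && points_equal (x1, y1, x3, y3)
                      && points_equal (x1, y1, x4, y4) {
                      None /* zero length, hence no directionality */
                  } else {
                      Some ((x4 - x3, y4 - y3)) /* tangent at the end point */
                  }
              }
          }
      }

      The caller's loop body then matches on Some/None instead of juggling a bool plus out-parameters.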

      But I have to evaluate Outreachy proposals, and if I keep reading Rust docs and refactoring merrily, I'll never get that done.

      Sorry to leave you in a cliffhanger! More to come soon!

    Arnold Newman Portraits

    The beginnings of "Environmental Portraits"

    Anyone who has spent any time around me knows that I’m particularly fond of portraits. From the wonderful works of Martin Schoeller to the sublime Dan Winters, I am simply fascinated by a well-executed portrait. So I thought it would be fun to take a look at some selections from the “father” of environmental portraits - Arnold Newman.

    Arnold Newman, Self Portrait, Baltimore MD, 1939 Arnold Newman, Self Portrait, Baltimore MD, 1939

    Newman wanted to become a painter, but had to drop out of college after only two years to take a job shooting portraits in a photo studio in Philadelphia. This experience apparently taught him what he did not want to do with photography…

    Luckily, it may also have started defining what he did want to do with his photography: namely, capturing his subjects alongside (or within) the context of the things that made them notable in some way. This would become known as “Environmental Portraiture”. He described it best in an interview for American Photo in 2000:

    I didn’t just want to make a photograph with some things in the background. The surroundings had to add to the composition and the understanding of the person. No matter who the subject was, it had to be an interesting photograph. Just to simply do a portrait of a famous person doesn’t mean a thing. 1

    Though he felt that the term might be unnecessarily restrictive (and possibly overshadow his other pursuits, including abstractions and photojournalism), there’s no denying the impact of the results. Possibly his most famous portrait, of composer Igor Stravinsky, illustrates this wonderfully. The overall tones are almost monotone (flat - pun intended, and likely intentional on Newman’s part) and are dominated by the stark duality of the white wall and the black piano.

    Igor Stravinsky by Arnold Newman Igor Stravinsky, New York, NY, 1946 by Arnold Newman

    Newman realized that the open lid of the piano “…is like the shape of a musical flat symbol—strong, linear, and beautiful, just like Stravinsky’s work.” 1 The geometric construction of the image instantly captures the eye and the aggressive crop makes the final composition even more interesting. In this case the crop was a fundamental part of the original composition as shot, but it was not uncommon for him to find new life in images with different crops.

    In a similar vein, his portraits of both Salvador Dalí and John F. Kennedy show a willingness to let the crop bring in different defining characteristics of his subjects. In the case of Dalí it allows an abstraction to hang there, mimicking the pose of the artist himself. Kennedy is almost the only organic form, striking a relaxed pose while dwarfed by the imposing architecture and hard lines surrounding him.

    Salvador Dali, New York, NY, 1951 Salvador Dali, New York, NY, 1951 by Arnold Newman
    John F. Kennedy, Washington D.C., 1953 John F. Kennedy, Washington D.C., 1953 by Arnold Newman

    He brings the same deft handling of context to his portraits of other photographers as well. His portrait of Ansel Adams shows the photographer just outside his studio, with the surrounding wilderness not only visible around the frame but reflected in the glass of the doors behind him (and in the photographer’s glasses). Perhaps an indication that the nature of Adams’ work was to capture natural scenes through glass?

    Ansel Adams, 1975 by Arnold Newman Ansel Adams, 1975 by Arnold Newman

    For anyone familiar with the pioneer of another form of photography, Newman’s portrait of (the usually camera-shy) Henri Cartier-Bresson will instantly evoke a sense of the artist’s candid street images. In it, Bresson appears to take the place of one of his own subjects, caught briefly on the street in a fleeting moment. The portrait has an almost spontaneous feeling to it, (again) mirroring the style of the work of its subject.

    Henri Cartier-Bresson, New York, NY, 1947 Henri Cartier-Bresson, New York, NY, 1947 by Arnold Newman

    Eight years after his portrait of the surrealist painter Dalí, Newman shot another famous (abstraction) artist, Pablo Picasso. This particular portrait is much more intimate and more classically composed, framing the subject as a headshot with little of the surrounding environment as before. I can’t help but think that the similar placement of the hand in both images is intentional; a nod to the unconventional views both artists brought to the world.

    Pablo Picasso, Vallauris, France, 1954 Pablo Picasso, Vallauris, France, 1954 by Arnold Newman

    The eloquent Gregory Heisler had a wonderful discussion about Newman for Atlanta Celebrates Photography at the High Museum in 2008.

    Arnold Newman produced an amazing body of work that warrants some time and consideration from anyone interested in portraiture. These few examples simply do not do his collection of portraits justice. If you have a few moments to peruse some amazing images, head over to his website and have a look (I’m particularly fond of his extremely design-oriented portrait of Chinese-American architect I.M. Pei):

    I.M. Pei, New York, NY, 1967 I.M. Pei, New York, NY, 1967 by Arnold Newman

    Of historical interest is a look at Newman’s contact sheet for the Stravinsky image showing various compositions and approaches to his subject with the piano. (I would have easily chosen the last image in the first row as my pick.) I have seen the second image in the second row cropped as indicated, which was also a very strong choice. I adore being able to investigate contact sheets from shoots like this - it helps me to humanize these amazing photographers while simultaneously allowing me an opportunity to learn a little about their thought process and how I might incorporate it into my own photography.

    Igor Stravinsky contact sheet

    To close, a quote from his interview with American Photo magazine back in 2000 that will likely remain relevant to photographers for a long time:

    But a lot of photographers think that if they buy a better camera they’ll be able to take better photographs. A better camera won’t do a thing for you if you don’t have anything in your head or in your heart. 1

    1 Harris, Mark. “Arnold Newman: The Stories Behind Some of the Most Famous Portraits of the 20th Century.” American Photo, March/April 2000, pp. 36-38

    October 26, 2016

    Introducing Hundred Dollar Drawings!

    Notice: This has been popular! New orders are temporarily suspended while I work on the backlog. I’ll offer Hundred Dollar Drawings again soon.

    example of $100 drawing

    Example of a $100 drawing

    $100: Tell Nina what to draw* and she’ll draw it. It could be as vague as a word (“quadruped,” “equinox”) or more specific (“a cat driving a car,” “a sun and moon shaking hands”) or even more specific (“a tabby cat driving a convertible sportscar over a cardboard box,” “a sun and moon shaking hands over planet Earth, sky behind them half night and half day”). Nina will email you a photo of the finished drawing, and post it on her blog and social media.

    *Specify drawing in Paypal checkout


    + $25: We’ll ship you the original art. Sizes will vary but it will be on 8.5 x 11″ or smaller paper.


    + $100: I will also make a “Making-of” video of the drawing, such as the one above.


    Example of cleaned-up, reproduction-ready PNG file

    Example of cleaned-up, reproduction-ready PNG file

    + $100: Drawing cleaned-up and reproduction-ready for ANY USE YOU WANT!


    FAQ

    Q: What if I don’t like my drawing?
    A: Too bad, sorry.

    Q: Can you submit a sketch and let me comment for revisions?
    A: No. If you want revisions, commission another $100 drawing, and a third, fourth, etc. You can get 10 $100 drawings for less than my usual professional rate.

    Q: Can I use the drawing as a commercial logo for my business?
    A: Yes.

    Q: Can I use the drawing for advertising or other commercial purposes?
    A: Yes, anything you want.

    Q: Isn’t that crazy cheap for commercial art?
    A: Yes. But some of these drawings are also non-commercial. It’s all less stress for me, and I don’t care what happens to the image after I draw it. (Actually I do care – the more it’s used, the better.)

    Q: What about copyright?
    A: Like most of my work this is Free Culture. There’s effectively no copyright to license or buy. You can do whatever you want with the art you commission, but it’s non-exclusive. I will be posting it on my blog and social media.

    Q: What if I want exclusive rights?
    A: Then you’ll have to pay more than $100 – same as most professional commercial art of this caliber. Shoot me an email to discuss.

    Q: What if Nina finds my drawing instructions abhorrent?
    A: I will refund your money and not do the drawing. Or I’ll keep the money and willfully misinterpret your request. That might be more interesting.

    Q: Can you do a caricature if I send you a photo?
    A: Not very well, but I’ll try. I am not a caricaturist, so likenesses are not guaranteed to be recognizable or remotely able to fulfill hopes and dreams.


    Dual-GPU integration in GNOME

    Thanks to the work of Hans de Goede and many others, dual-GPU (aka NVidia Optimus or AMD Hybrid Graphics) support works better than ever in Fedora 25.

    On my side, I picked up some work I originally did for Fedora 24 that ended up being blocked on hardware support. This brings better integration into GNOME.

    The Details panel in Settings now shows which video cards you have in your (most likely) laptop.

    dual-GPU Graphics

    The second feature is what Blender users and 3D gamers have been waiting for: a contextual menu item to launch the application on the more powerful GPU in your machine.

    Mooo Powaa!

    This demonstration uses a slightly modified GtkGLArea example, which shows which of the GPUs is used to render the application in the title bar.

    on the integrated GPU

    on the discrete GPU

    Behind the curtain

    Behind those two features, we have a simple D-Bus service, which runs automatically on boot, and stays running to offer a single property (HasDualGpu) that system components can use to decide what UI to present. This requires the "switcheroo" driver to work on the machine in question.
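    For the curious, here's a minimal sketch of how a system component could read that property. This is not the actual GNOME code; the bus name, object path, and interface below are assumptions modelled on the switcheroo-control service, and it uses the zbus Rust crate:

    // A minimal sketch, not GNOME code: query the HasDualGpu property.
    // The bus name, object path and interface are assumptions modelled
    // on the switcheroo-control service.
    use zbus::blocking::{Connection, Proxy};

    fn has_dual_gpu() -> zbus::Result<bool> {
        let conn = Connection::system()?;
        let proxy = Proxy::new(
            &conn,
            "net.hadess.SwitcherooControl",  // assumed bus name
            "/net/hadess/SwitcherooControl", // assumed object path
            "net.hadess.SwitcherooControl",  // assumed interface
        )?;
        proxy.get_property::<bool>("HasDualGpu")
    }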

    Because of the way applications are launched on the discrete GPU, we cannot currently support D-Bus activated applications, but GPU-heavy D-Bus-integrated applications are few and far between right now.

    Future plans

    There's plenty more to do in this area, to polish the integration. We might want applications to tell us whether they'd prefer being run on the integrated or discrete GPU, as live switching between renderers is still something that's out of the question on Linux.

    Wayland dual-GPU support, as well as support for the proprietary NVidia drivers, will also be worked on, probably by my colleagues though, as the graphics stack really isn't my field.

    And if the hardware becomes more widely available, we'll most certainly want to support hardware with hotpluggable graphics support (whether gaming laptop "power-ups" or workstation docks).

    Availability

    All the patches necessary to make this work are now available in GNOME git (targeted at GNOME 3.24), and backports are integrated in Fedora 25, due to be released shortly.

    October 25, 2016

    Tue 2016/Oct/25

    • Librsvg gets Rusty

      I've been wanting to learn Rust for some time. It has frustrated me for a number of years that it is quite possible to write GNOME applications in high-level languages, but for the libraries that everything else uses ("the GNOME platform"), we are pretty much stuck with C. Vala is a very nice effort, but to me it never seemed to gain much momentum outside of GNOME.

      After reading this presentation called "Rust out your C", I got excited. It *is* possible to port C code to Rust, small bits at a time! You rewrite some functions in Rust, make them linkable to the C code, and keep calling them from C as usual. The contortions you need to do to make C types accessible from Rust are no worse than for any other language.

      I'm going to use librsvg as a testbed for this.

      Librsvg is an old library. It started as an experiment to write a SAX-based parser for SVG ("don't load the whole DOM into memory; instead, stream in the XML and parse it as we go"), and a renderer with the old libart (what we used in GNOME for 2D vector rendering before Cairo came along). Later it got ported to Cairo, and that's the version that we use now.

      Outside of GNOME, librsvg gets used at Wikimedia to render the SVGs all over Wikipedia. We have gotten excellent bug reports from them!

      Librsvg has a bunch of little parsers for the mini-languages inside SVG's XML attributes. For example, within a vector path definition, "M10,50 h20 V10 Z" means, "move to the coordinate (10, 50), draw a horizontal line 20 pixels to the right, then a vertical line to absolute coordinate 10, then close the path with another line". There are state machines, like the one that transforms that path definition into three line segments instead of the PostScript-like instructions that Cairo understands. There are some pixel-crunching functions, like Gaussian blurs and convolutions for SVG filters.

      It should be quite possible to port those parts of librsvg to Rust, and to preserve the C API for general consumption.

      Every once in a while someone discovers a bug in librsvg that makes it all the way to a CVE security advisory, and it's all due to using C. We've gotten double free()s, wrong casts, and out-of-bounds memory accesses. Recently someone did fuzz-testing with some really pathological SVGs, and found interesting explosions in the library. That's the kind of 1970s bullshit that Rust prevents.

      I also hope that this will make it easier to actually write unit tests for librsvg. Currently we have some pretty nifty black-box tests for the whole library, which essentially take in complete SVG files, render them, and compare the results to a reference image. These are great for smoke testing and guarding against regressions. However, all the fine-grained machinery in librsvg has zero tests. It is always a pain in the ass to make static C functions testable "from the outside", or to make mock objects to provide them with the kind of environment they expect.

      So, on to Rustification!

      I've started with a bit of the code from librsvg that is fresh in my head: the state machine that renders SVG markers.

      SVG markers

      This image with markers comes from the official SVG test suite:

      SVG reference image with markers

      SVG markers let you put symbols along the nodes of a path. You can use them to draw arrows (arrowhead as an end marker on a line), points in a chart, and other visual effects.

      In the example image above, this is what is happening. The SVG defines four marker types:

      • A purple square that always stays upright.
      • A green circle.
      • A blue triangle that always stays upright.
      • A blue triangle whose orientation depends on the node where it sits.

      The top row, with the purple squares, is a path (the black line) that says, "put the purple-square marker on all my nodes".

      The middle row is a similar path, but it says, "put the purple-square marker on my first node, the green-circle marker on my middle nodes, and the blue-upright-triangle marker on my end node".

      The bottom row has the blue-orientable-triangle marker on all the nodes. The triangle is defined to point to the right (look at the bottommost triangles!). It gets rotated 45 degrees at the middle node, and 90 degrees so it points up at the top-left node.

      This was all fine and dandy, until one day we got a bug about incorrect rendering when there are funny paths. What makes a path funny?

      SVG image with funny arrows

      For the code that renders markers, a path is "funny" when it is not obvious how to compute the orientation of its nodes. A node's orientation, when it is well-behaved, is just the average angle of the node's incoming and outgoing lines (or curves). But if a path has contiguous coincident vertices, or stray points that don't have incoming/outgoing lines (imagine a sequence of moveto commands), or curveto commands with Bézier control points that are coincident with the nodes... well, in those cases, librsvg has to follow the spec to the letter, for it says how to handle those things.

      In short, one has to walk the segments away from the node in question, until one finds a segment whose "directionality" can be computed: a segment that is an actual line or curve, not a coincident vertex nor a stray point.

      Librsvg's algorithm has two parts to it. The first part takes the linear sequence of PostScript-like commands (moveto, lineto, curveto, closepath) and turns them into a sequence of segments. Each segment has two endpoints and two tangent directions at those endpoints; if the segment is a line, the tangents point in the same direction as the line. Or, the segment can be degenerate and it is just a single point.

      The second part of the algorithm takes that list of segments for each node, and it does the walking-back-and-forth as described in the SVG spec. Basically, it finds the first non-degenerate segment on each side of a node, and uses the tangents of those segments to find the average orientation of the node.
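      As a rough sketch (this is not librsvg's actual code), the "find the first non-degenerate segment on one side" part could look like this in Rust, using the Segment struct introduced below and assuming each node maps to an index into the segment list:

      /* A sketch, not librsvg's actual code: walk backwards from a node
       * until we hit a non-degenerate segment, and return its end tangent
       * (P4 - P3), from which the node's orientation can be averaged.
       */
      fn find_incoming_tangent (segments: &[Segment], node: usize) -> Option<(f64, f64)> {
          for seg in segments[..node].iter ().rev () {
              if !seg.is_degenerate {
                  return Some ((seg.p4x - seg.p3x, seg.p4y - seg.p3y));
              }
          }
          None
      }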

      The path-to-segments code

      In the C code I had this:

      typedef struct {
          gboolean is_degenerate; /* If true, only (p1x, p1y) are valid.  If false, all are valid */
          double p1x, p1y;
          double p2x, p2y;
          double p3x, p3y;
          double p4x, p4y;
      } Segment;

      P1 and P4 are the endpoints of each Segment; P2 and P3 are, like in a Bézier curve, the control points from which the tangents can be computed.

      This translates readily to Rust:

      struct Segment {
          is_degenerate: bool, /* If true, only (p1x, p1y) are valid.  If false, all are valid */
          p1x: f64, p1y: f64,
          p2x: f64, p2y: f64,
          p3x: f64, p3y: f64,
          p4x: f64, p4y: f64
      }

      Then a little utility function:

      /* In C */

      #define EPSILON 1e-10
      #define DOUBLE_EQUALS(a, b) (fabs ((a) - (b)) < EPSILON)


      /* In Rust */

      const EPSILON: f64 = 1e-10;

      fn double_equals (a: f64, b: f64) -> bool {
          (a - b).abs () < EPSILON
      }

      And now, the actual code that transforms a cairo_path_t (a list of moveto/lineto/curveto commands) into a list of segments. I'll interleave C and Rust code with commentary.

      /* In C */
      
      typedef enum {
          SEGMENT_START,
          SEGMENT_END,
      } SegmentState;
      
      static void
      path_to_segments (const cairo_path_t *path,
                        Segment **out_segments,
                        int *num_segments)
      {
      
      
      /* In Rust */
      
      enum SegmentState {
          Start,
          End
      }
      
      fn path_to_segments (path: cairo::Path) -> Vec<Segment> {

      The enum is pretty much the same; Rust prefers CamelCase for enums instead of CAPITALIZED_SNAKE_CASE. The function prototype is much nicer in Rust. The cairo::Path is courtesy of gtk-rs, the budding Rust bindings for GTK+ and Cairo and all that goodness.

      The C version allocates the return value as an array of Segment structs, and returns it in the out_segments argument (... and the length of the array in num_segments). The Rust version simply returns a vector of Segment structs, which is easier to reason about.

      Now, the variable declarations at the beginning of the function:

      /* In C */
      
      {
          int i;
          double last_x, last_y;
          double cur_x, cur_y;
          double subpath_start_x, subpath_start_y;
          int max_segments;
          int segment_num;
          Segment *segments;
          SegmentState state;
      
      
      /* In Rust */
      
      {
          let mut last_x: f64;
          let mut last_y: f64;
          let mut cur_x: f64;
          let mut cur_y: f64;
          let mut subpath_start_x: f64;
          let mut subpath_start_y: f64;
          let mut has_first_segment : bool;
          let mut segment_num : usize;
          let mut segments: Vec<Segment>;
          let mut state: SegmentState;

      In addition to having different type names (double becomes f64), Rust wants you to say when a variable will be mutable, i.e. when it is allowed to change value after its initialization.

      Also, note that in C there's an "i" variable, which is used as a counter. There isn't a similar variable in the Rust version; there, we will use an iterator. Also, in the Rust version we have a new "has_first_segment" variable; read on to see its purpose.

          /* In C */
      
          max_segments = path->num_data; /* We'll generate maximum this many segments */
          segments = g_new (Segment, max_segments);
          *out_segments = segments;
      
          last_x = last_y = cur_x = cur_y = subpath_start_x = subpath_start_y = 0.0;
      
          segment_num = -1;
          state = SEGMENT_END;
      
      
          /* In Rust */

          cur_x = 0.0;
          cur_y = 0.0;
          subpath_start_x = 0.0;
          subpath_start_y = 0.0;
      
          has_first_segment = false;
          segment_num = 0;
          segments = Vec::new ();
          state = SegmentState::End;

      No problems here, just initializations. Note that in C we pre-allocate the segments array with a certain size. This is not the actual minimum size that the array will need; it is just an upper bound that comes from the way Cairo represents paths internally (it is not possible to compute the minimum size of the array without walking it first, so we use a good-enough value here that doesn't require walking). In the Rust version, we just create an empty vector and let it grow as needed.

      Note also that the C version initializes segment_num to -1, while the Rust version sets has_first_segment to false and segment_num to 0. Read on!

          /* In C */
      
          for (i = 0; i < path->num_data; i += path->data[i].header.length) {
              last_x = cur_x;
              last_y = cur_y;
      
      
          /* In Rust */
      
          for cairo_segment in path.iter () {
              last_x = cur_x;
              last_y = cur_y;

      We start iterating over the path's elements. Cairo, which is written in C, has a peculiar way of representing paths. path->num_data is the length of the path->data array, whose elements can be either commands or point coordinates. Each command specifies how many array elements you need to "eat" to take in all its coordinates. Thus the "i" counter gets incremented on each iteration by path->data[i].header.length; this is the "how many to eat" magic value.

      The Rust version is more civilized. Get a path.iter() which feeds you Cairo path segments, and boom, you are done. That civilization is courtesy of the gtk-rs bindings. Onwards!

          /* In C */
      
              switch (path->data[i].header.type) {
              case CAIRO_PATH_MOVE_TO:
                  segment_num++;
                  g_assert (segment_num < max_segments);
      
      
      
          /* In Rust */
      
              match cairo_segment {
                  cairo::PathSegment::MoveTo ((x, y)) => {
                      if has_first_segment {
                          segment_num += 1;
                      } else {
                          has_first_segment = true;
                      }

      The C version switch()es on the type of the path segment. It increments segment_num, our counter-of-segments, and checks that it doesn't overflow the space we allocated for the results array.

      The Rust version match()es on the cairo_segment, which is a Rust enum (think of it as a tagged union of structs). The first match case conveniently destructures the (x, y) coordinates; we will use them below.

      If you recall from the above, the C version initialized segment_num to -1. This code for MOVE_TO is the first case in the code that we will hit, and that "segment_num++" causes the value to become 0, which is exactly the index in the results array where we want to place the first segment. Rust *really* wants you to use a usize value to index arrays ("unsigned size"). I could have used a signed size value starting at -1 and then incremented it to zero, but then I would have to cast it to unsigned — which is slightly ugly. So I introduce a boolean variable, has_first_segment, and use that instead. I think I could refactor this to have another state in SegmentState and remove the boolean variable.
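      That refactor could be as simple as adding a third state (just a sketch, not the actual code):

      /* Sketch: fold has_first_segment into the state machine as an
       * extra state instead of keeping a separate boolean. */
      enum SegmentState {
          Initial, /* no segment created yet; don't bump segment_num */
          Start,
          End
      }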

              /* In C */
      
                  g_assert (i + 1 < path->num_data);
                  cur_x = path->data[i + 1].point.x;
                  cur_y = path->data[i + 1].point.y;
      
                  subpath_start_x = cur_x;
                  subpath_start_y = cur_y;
      
      
               /* In Rust */
      
                      cur_x = x;
                      cur_y = y;
      
                      subpath_start_x = cur_x;
                      subpath_start_y = cur_y;

      In the C version, I assign (cur_x, cur_y) from the path->data[], but first ensure that the index doesn't overflow. In the Rust version, the (x, y) values come from the destructuring described above.

              /* In C */
      
                  segments[segment_num].is_degenerate = TRUE;
      
                  segments[segment_num].p1x = cur_x;
                  segments[segment_num].p1y = cur_y;
      
                  state = SEGMENT_START;
      
                  break;
      
      
               /* In Rust */
      
                      let seg = Segment {
                          is_degenerate: true,
                          p1x: cur_x,
                          p1y: cur_y,
                          p2x: 0.0, p2y: 0.0, p3x: 0.0, p3y: 0.0, p4x: 0.0, p4y: 0.0 // these are set in the next iteration
                      };
      
                      segments.push (seg);
      
                      state = SegmentState::Start;
                  },

      This is where my lack of Rust idiomatic skills really starts to show. In C I put (cur_x, cur_y) in the (p1x, p1y) fields of the current segment, and since it is_degenerate, I'll know that the other p2/p3/p4 fields are not valid — and like any C programmer who wears sandals instead of steel-toed boots, I leave their memory uninitialized. Rust doesn't want me to have uninitialized values EVER, so I must fill a Segment structure and then push() it into our segments vector.

      So, the C version really wants to have a segment_num counter where I can keep track of which index I'm filling. Why is there a similar counter in the Rust version? We will see why in the next case.

              /* In C */
      
              case CAIRO_PATH_LINE_TO:
                  g_assert (i + 1 < path->num_data);
                  cur_x = path->data[i + 1].point.x;
                  cur_y = path->data[i + 1].point.y;
      
                  if (state == SEGMENT_START) {
                      segments[segment_num].is_degenerate = FALSE;
                      state = SEGMENT_END;
                  } else /* SEGMENT_END */ {
                      segment_num++;
                      g_assert (segment_num < max_segments);
      
                      segments[segment_num].is_degenerate = FALSE;
      
                      segments[segment_num].p1x = last_x;
                      segments[segment_num].p1y = last_y;
                  }
      
                  segments[segment_num].p2x = cur_x;
                  segments[segment_num].p2y = cur_y;
      
                  segments[segment_num].p3x = last_x;
                  segments[segment_num].p3y = last_y;
      
                  segments[segment_num].p4x = cur_x;
                  segments[segment_num].p4y = cur_y;
      
                  break;
      
      
               /* In Rust */
      
                  cairo::PathSegment::LineTo ((x, y)) => {
                      cur_x = x;
                      cur_y = y;
      
                      match state {
                          SegmentState::Start => {
                              segments[segment_num].is_degenerate = false;
                              state = SegmentState::End;
                          },
      
                          SegmentState::End => {
                              segment_num += 1;
      
                              let seg = Segment {
                                  is_degenerate: false,
                                  p1x: last_x,
                                  p1y: last_y,
                                  p2x: 0.0, p2y: 0.0, p3x: 0.0, p3y: 0.0, p4x: 0.0, p4y: 0.0  // these are set below
                              };
      
                              segments.push (seg);
                          }
                      }
      
                      segments[segment_num].p2x = cur_x;
                      segments[segment_num].p2y = cur_y;
      
                      segments[segment_num].p3x = last_x;
                      segments[segment_num].p3y = last_y;
      
                      segments[segment_num].p4x = cur_x;
                      segments[segment_num].p4y = cur_y;
                  },

      Whoa! But let's piece it apart bit by bit.

      First we set cur_x and cur_y from the path data, as usual.

      Then we roll the state machine. Remember we got a LINE_TO. If we are in the state START ("just have a single point, possibly a degenerate one"), then we turn the old segment into a non-degenerate, complete line segment. If we are in the state END ("we were already drawing non-degenerate lines"), we create a new segment and fill it in. I'll probably change the names of those states to make it more obvious what they mean.

      In C we had a preallocated array for "segments", so the idiom to create a new segment is simply "segment_num++". In Rust we grow the segments array as we go, hence the "segments.push (seg)".

      I will probably refactor this code. I don't like it that it looks like

          case move_to:
              start possibly-degenerate segment
      
          case line_to:
              are we in a possibly-degenerate segment?
                  yes: make it non-degenerate and remain in that segment...
      
                  no: create a new segment, switch to it, and fill its first fields...
      
      	... for both cases, fill in the last fields of the segment

      That is, the "yes" case fills in fields from the segment we were handling in the *previous* iteration, while the "no" case fills in fields from a *new* segment that we created in the present iteration. That asymmetry bothers me. Maybe we should build up the next-segment's fields in auxiliary variables, and only put them in a complete Segment structure once we really know that we are done with that segment? I don't know; we'll see what is more legible in the end.

      The other two cases, for CURVE_TO and CLOSE_PATH, are analogous, except that CURVE_TO handles a bunch more coordinates for the control points, and CLOSE_PATH goes back to the coordinates from the last point that was a MOVE_TO.
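      For reference, here is an abbreviated sketch (not the actual code) of what the Rust CURVE_TO arm could look like; the real arm rolls the same Start/End state machine as LINE_TO before filling in the fields:

          /* In Rust (sketch) */

              cairo::PathSegment::CurveTo ((x2, y2), (x3, y3), (x4, y4)) => {
                  cur_x = x4;
                  cur_y = y4;

                  /* ... same Start/End handling as in LineTo ... */

                  segments[segment_num].p2x = x2;
                  segments[segment_num].p2y = y2;

                  segments[segment_num].p3x = x3;
                  segments[segment_num].p3y = y3;

                  segments[segment_num].p4x = cur_x;
                  segments[segment_num].p4y = cur_y;
              },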

      And those tests you were talking about?

      Well, I haven't written them yet! This is my very first Rust code, after reading a pile of getting-started documents.

      Already in the case for CLOSE_PATH I think I've found a bug. It doesn't really create a segment for multi-line paths when the path is being closed. The reftests didn't catch this because none of the reference images with SVG markers uses a CLOSE_PATH command! The unit tests for this path_to_segments() machinery should be able to find this easily, and closer to the root cause of the bug.

      What's next?

      Learning how to link and call that Rust code from the C library for librsvg. Then I'll be able to remove the corresponding C code.

      Feeling safer already?

    darktable 2.0.7 released

    we're proud to announce the seventh bugfix release for the 2.0 series of darktable, 2.0.7!

    the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.0.7.

    as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

    a9226157404538183549079e3b8707c910fedbb669bd018106bdf584b88a1dab  darktable-2.0.7.tar.xz
    0b341f3f753ae0715799e422f84d8de8854d8b9956dc9ce5da6d5405586d1392  darktable-2.0.7.dmg

    and the changelog as compared to 2.0.6 can be found below.

    New Features

    • Filter out some EXIF tags when exporting; helps keep metadata size below the max limit of ~64KB
    • Support the new Canon EOS 80D {m,s}RAW format
    • Always show rendering intent selector in lighttable view
    • Clear elevation when clearing geo data in map view
    • Temperature module, invert module: add SSE vectorization for X-Trans
    • Temperature module: add keyboard shortcuts for presets

    Bugfixes

    • Rawspeed: fixes for building with libjpeg (as opposed to libjpeg-turbo)
    • OpenCL: always use blocking memory transfer host <-> device
    • OpenCL: remove bogus static keyword in extended.cl
    • Fix crash with missing configured display profile
    • Histogram: always show aperture with one digit after dot
    • Show if OpenEXR is supported in --version
    • Rawspeed: use a non-deprecated way of getting OSX version
    • Don't show bogus message about local copy when trying to delete physically deleted image

    Base Support (newly added or small fixes)

    • Canon EOS 100D
    • Canon EOS 300D
    • Canon EOS 6D
    • Canon EOS 700D
    • Canon EOS 80D (sRaw1, sRaw2)
    • Canon PowerShot A720 IS (dng)
    • Fujifilm FinePix S100FS
    • Nikon D3400 (12bit-compressed)
    • Panasonic DMC-FZ300 (4:3)
    • Panasonic DMC-G8 (4:3)
    • Panasonic DMC-G80 (4:3)
    • Panasonic DMC-GX80 (4:3)
    • Panasonic DMC-GX85 (4:3)
    • Pentax K-70

    Base Support (fixes, was broken in 2.0.6, apologies for inconvenience)

    • Nikon 1 AW1
    • Nikon 1 J1 (12bit-compressed)
    • Nikon 1 J2 (12bit-compressed)
    • Nikon 1 J3
    • Nikon 1 J4
    • Nikon 1 S1 (12bit-compressed)
    • Nikon 1 S2
    • Nikon 1 V1 (12bit-compressed)
    • Nikon 1 V2
    • Nikon Coolpix A (14bit-compressed)
    • Nikon Coolpix P330 (12bit-compressed)
    • Nikon Coolpix P6000
    • Nikon Coolpix P7000
    • Nikon Coolpix P7100
    • Nikon Coolpix P7700 (12bit-compressed)
    • Nikon Coolpix P7800 (12bit-compressed)
    • Nikon D1
    • Nikon D3 (12bit-compressed, 12bit-uncompressed)
    • Nikon D3000 (12bit-compressed)
    • Nikon D3100
    • Nikon D3200 (12bit-compressed)
    • Nikon D3S (12bit-compressed, 12bit-uncompressed)
    • Nikon D4 (12bit-compressed, 12bit-uncompressed)
    • Nikon D5 (12bit-compressed, 12bit-uncompressed)
    • Nikon D50
    • Nikon D5100
    • Nikon D5200
    • Nikon D600 (12bit-compressed)
    • Nikon D610 (12bit-compressed)
    • Nikon D70
    • Nikon D7000
    • Nikon D70s
    • Nikon D7100 (12bit-compressed)
    • Nikon E5400
    • Nikon E5700 (12bit-uncompressed)

    We were unable to bring back these 4 cameras, because we have no samples.
    If anyone reading this owns such a camera, please do consider providing samples.

    • Nikon E8400
    • Nikon E8800
    • Nikon D3X (12-bit)
    • Nikon Df (12-bit)

    White Balance Presets

    • Pentax K-70

    Noise Profiles

    • Sony DSC-RX10

    Translations Updates

    • Catalan
    • German

    October 23, 2016

    Los Alamos Artists Studio Tour

    [JunkDNA Art at the LA Studio Tour] The Los Alamos Artists Studio Tour was last weekend. It was a fun and somewhat successful day.

    I was borrowing space in the studio of the fabulous scratchboard artist Heather Ward, because we didn't have enough White Rock artists signed up for the tour.

    Traffic was sporadic: we'd have long periods when nobody came by (I was glad I'd brought my laptop, and managed to get some useful development done on track management in pytopo), punctuated by bursts where three or four groups would show up all at once.

    It was fun talking to the people who came by. They all had questions about both my metalwork and Heather's scratchboard, and we had a lot of good conversations. Not many of them were actually buying -- I heard the same thing afterward from most of the other artists on the tour, so it wasn't just us. But I still sold enough that I more than made back the cost of the tour. (I hadn't realized, prior to this, that artists have to pay to be in shows and tours like this, so there's a lot of incentive to sell enough at least to break even.) Of course, I'm nowhere near covering the cost of materials and equipment. Maybe some day ...

    [JunkDNA Art at the LA Studio Tour]

    I figured snacks are always appreciated, so I set out my pelican snack bowl -- one of my first art pieces -- with brownies and cookies in it, next to the business cards.

    It was funny how wrong I was in predicting what people would like. I thought everyone would want the roadrunners and dragonflies; in practice, scorpions were much more popular, along with a sea serpent that had been sitting on my garage shelf for a month while I tried to figure out how to finish it. (I do like how it eventually came out, though.)

    And then after selling both my scorpions on Saturday, I rushed to make two more on Saturday night and Sunday morning, and of course no one on Sunday had the slightest interest in scorpions. Dave, who used to have a foot in the art world, tells me this is typical, and that artists should never make what they think the market will like; just go on making what you like yourself, and hope it works out.

    Which, fortunately, is mostly what I do at this stage, since I'm mostly puttering around for fun and learning.

    Anyway, it was a good learning experience, though I was a little stressed getting ready for it and I'm glad it's over. Next up: a big spider for the front yard, before Halloween.

    October 20, 2016

    CVE-2016-5195

    My prior post showed my research from earlier in the year at the 2016 Linux Security Summit on kernel security flaw lifetimes. Now that CVE-2016-5195 is public, here are updated graphs and statistics. Due to their rarity, the Critical bug average has now jumped from 3.3 years to 5.2 years. There aren’t many, but, as I mentioned, they still exist, whether you know about them or not. CVE-2016-5195 was sitting on everyone’s machine when I gave my LSS talk, and there are still other flaws on all our Linux machines right now. (And, I should note, this problem is not unique to Linux.) Dealing with knowing that there are always going to be bugs present requires proactive kernel self-protection (to minimize the effects of possible flaws) and vendors dedicated to updating their devices regularly and quickly (to keep the exposure window minimized once a flaw is widely known).

    So, here are the graphs updated for the 668 CVEs known today:

    • Critical: 3 @ 5.2 years average
    • High: 44 @ 6.2 years average
    • Medium: 404 @ 5.3 years average
    • Low: 216 @ 5.5 years average

    © 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

    The Electoral College

    Episode 4 in a series “Things that are the way they are because of constraints that no longer apply” (or: why we don’t change processes we have invested in that don’t make sense any more)

    A US presidential election year is a wondrous thing. There are few places around the world where the campaign for head of state begins in earnest 18 months before the winner will take office. We are now in the home straight, with the final Presidential debate behind us, and election day coming up in 3 weeks, on the Tuesday after the first Monday in November (this year, that’s November 8th). And as with every election cycle, much time will be spent explaining the electoral college. This great American institution is at the heart of how America elects its President. Every 4 years, there are calls to reform it, to move to a different system, and yet it persists. What is it, where did it come from, and why does it cause so much controversy?

    In the US, people do not vote for the President directly in November. Instead, they vote for electors – people who represent the state in voting for the President. A state gets a number of electoral votes equal to its number of senators (2) and its number of US representatives (this varies based on population). Sparsely populated states like Alaska and Montana get 3 electoral votes, while California gets 55. In total, there are 538 electors, and a majority of 270 electoral votes is needed to secure the presidency. What happens if the candidates fail to get a majority of the electors is outside the scope of this blog post, and in these days of a two party system, it is very unlikely (although not impossible).

    State parties nominate elector lists before the election, and on election day, voters vote for the elector slate corresponding to their preferred candidate. Electoral votes can be awarded differently from state to state. In Nebraska, for example, there are 2 statewide electors for the winner of the statewide vote, and one elector for each congressional district, while in most states, the elector lists are chosen on a winner take all basis. After the election, the votes are counted in the local county, and sent to the secretary of state for certification.

    Once the election results are certified (which can take up to a month), the electors meet in their state in mid December to record their votes for president and vice president. Most states (but not all!) have laws restricting who electors are allowed to vote for, making this mostly a ceremonial position. The votes are then sent to the US Senate and the national archivist for tabulation, and are cross-referenced before being sent to a joint session of Congress in early January. Congress counts the electoral votes and declares the winner of the presidency. Two weeks later, the new President takes office (those 2 weeks are to allow for the process where no-one gets a majority in the electoral college).

    Because it is possible to win heavily in some states with few electoral votes, and lose narrowly in others with a lot of electoral votes, it is possible to win the presidency without winning the nationwide popular vote (as George W. Bush did in 2000). In modern elections, the electoral college can result in a huge difference of attention between “safe” states and “swing” states – the vast majority of campaigning is done in only a dozen or so states, while states like Texas and Massachusetts do not get as much attention.

    Why did the founding fathers of the US come up with such a convoluted system? Why not have people vote for the President directly, and have the counts of the states tabulated directly, without the pomp and ceremony of the electoral college vote?

    First, think back to 1787, when the US constitution was written. The founders of the state had an interesting set of principles and constraints they wanted to uphold:

    • Big states should not be able to dominate small states
    • Similarly, small states should not be able to dominate big states
    • No political parties existed (and the founding fathers hoped it would stay that way)
    • Added 2016-10-21: Different states wanted to give a vote to different groups of people (and states with slavery wanted slaves to count in the population)
    • In the interests of having presidents who represented all of the states, candidates should have support outside their own state – in an era where running a national campaign was impractical
    • There was a logistical issue of finding out what happened on election day and determining the winner

    To satisfy these constraints, a system was chosen which ensured that small states had a proportionally bigger say (by giving an electoral vote for each Senator), but more populous states still have a bigger say (by getting an electoral vote for each congressman). In the first elections, electors voted for 2 candidates, of which only one could be from their state, meaning that winning candidates had support from outside their state. The President was the person who got the most electoral votes, and the vice president was the candidate who came second – even if (as was the case with John Adams and Thomas Jefferson) they were not in the same party. It also created the possibility (as happened with Thomas Jefferson and Aaron Burr) that a vice presidential candidate could get the same number of electoral votes as the presidential candidate, resulting in Congress deciding who would be president. The modern electoral college was created by the 12th amendment to the US constitution, ratified in 1804.

    Another criticism of direct voting is that populist demagogues could be elected by the people, but electors (being of the political classes) could be expected to be better informed, and make better decisions, about who to vote for. Alexander Hamilton wrote in The Federalist #68 that: “It was equally desirable, that the immediate election should be made by men most capable of analyzing the qualities adapted to the station, and acting under circumstances favorable to deliberation, and to a judicious combination of all the reasons and inducements which were proper to govern their choice. A small number of persons, selected by their fellow-citizens from the general mass, will be most likely to possess the information and discernment requisite to such complicated investigations.” These days, most states have laws which require their electors to vote in accordance with the will of the electorate, so that original goal is now mostly obsolete.

    A big part of the reason for having over two months between the election and the president taking office (prior to 1934, it was 4 months) is the sheer size of the early USA. The administrative unit for counting, the county, was defined so that every citizen could get to the county courthouse and home in a day’s ride – and after an appropriate amount of time to count the ballots, the results were sent to the state capital for certification, which could take up to 4 days in some states like Kentucky or New York. And then the electors needed to be notified, and attend the official elector count in the state capital. And then the results needed to be sent to Washington, which could take up to 2 weeks, and Congress (which was also having elections) needed to meet to ratify the results. All of these things took time, amplified by the fact that travel happened on horseback.

    So at least in part, the electoral college system is based on how long, logistically, it took to bring the results to Washington and have Congress ratify them. The inauguration used to be on March 4th, because that was how long it took for the process to run its course. It was not until 1934 and the 20th amendment to the constitution that the date was moved to January.

    Incidentally, two other features of election day are also based on constraints that no longer apply. Elections happen on a Tuesday because of the need not to interfere with two key events: sabbath (Sunday) and market day (Wednesday). And the elections were held in November primarily so as not to interfere with the harvest. These dates and the reasoning behind them, set in stone by federal law in 1845, persist today.

    October 19, 2016

    FOSDEM SDN & NFV DevRoom Call for Content

    We are pleased to announce the Call for Participation in the FOSDEM 2017 Software Defined Networking and Network Functions Virtualization DevRoom!

    Important dates:

    • (Extended!) Nov 28: Deadline for submissions
    • Dec 1: Speakers notified of acceptance
    • Dec 5: Schedule published

    This year the DevRoom topics will cover two distinct fields:

    • Software Defined Networking (SDN), covering virtual switching, open source SDN controllers, virtual routing
    • Network Functions Virtualization (NFV), covering open source network functions, NFV management and orchestration tools, and topics related to the creation of an open source NFV platform

    We are now inviting proposals for talks about Free/Libre/Open Source Software on the topics of SDN and NFV. This is an exciting and growing field, and FOSDEM gives an opportunity to reach a unique audience of very knowledgeable and highly technical free and open source software activists.

    This year, the DevRoom will focus on low-level networking and high performance packet processing, network automation of containers and private cloud, and the management of telco applications to maintain very high availability and performance independent of whatever the world can throw at their infrastructure (datacenter outages, fires, broken servers, you name it).

    A representative list of the projects and topics we would like to see on the schedule are:

    • Low-level networking and switching: IOvisor, eBPF, XDP, fd.io, Open vSwitch, OpenDataplane, …
    • SDN controllers and overlay networking: OpenStack Neutron, Canal, OpenDaylight, ONOS, Plumgrid, OVN, OpenContrail, Midonet, …
    • NFV Management and Orchestration: Open-O, ManageIQ, Juju, OpenBaton, Tacker, OSM, network management, PNDA.io, …
    • NFV related features: Service Function Chaining, fault management, dataplane acceleration, security, …

    Talks should be aimed at a technical audience, but should not assume that attendees are already familiar with your project or how it solves a general problem. Talk proposals can be very specific solutions to a problem, or can be higher level project overviews for lesser known projects.

    Please include the following information when submitting a proposal:

    • Your name
    • The title of your talk (please be descriptive, as titles will be listed alongside those of around 250 talks from other projects)
    • Short abstract of one or two paragraphs
    • Short bio (with photo)

    The deadline for submissions is November 28th, 2016 (extended from the original November 16th). FOSDEM will be held on the weekend of February 4-5, 2017 and the SDN/NFV DevRoom will take place on Saturday, February 4, 2017 (Updated 2016-10-20: an earlier version incorrectly said the DevRoom was on Sunday). Please use the following website to submit your proposals: https://penta.fosdem.org/submission/FOSDEM17 (you do not need to create a new Pentabarf account if you already have one from past years).

    You can also join the devroom’s mailing list, which is the official communication channel for the DevRoom: network-devroom@lists.fosdem.org (subscription page: https://lists.fosdem.org/listinfo/network-devroom)

    – The Networking DevRoom 2016 Organization Team

    Security bug lifetime

    In several of my recent presentations, I’ve discussed the lifetime of security flaws in the Linux kernel. Jon Corbet did an analysis in 2010, and found that security bugs appeared to have roughly a 5 year lifetime. As in, the flaw gets introduced in a Linux release, and then goes unnoticed by upstream developers until another release 5 years later, on average. I updated this research for 2011 through 2016, and used the Ubuntu Security Team’s CVE Tracker to assist in the process. The Ubuntu kernel team already does the hard work of trying to identify when flaws were introduced in the kernel, so I didn’t have to re-do this for the 557 kernel CVEs since 2011.

    As the README details, the raw CVE data is spread across the active/, retired/, and ignored/ directories. By scanning through the CVE files to find any that contain the line “Patches_linux:”, I can extract the details on when a flaw was introduced and when it was fixed. For example, CVE-2016-0728 shows:

    Patches_linux:
     break-fix: 3a50597de8635cd05133bd12c95681c82fe7b878 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2
    

    This means that CVE-2016-0728 is believed to have been introduced by commit 3a50597de8635cd05133bd12c95681c82fe7b878 and fixed by commit 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2. If there are multiple lines, then there may be multiple SHAs identified as contributing to the flaw or the fix. And a “-” is just short-hand for the start of Linux git history.
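    As an illustration, here is a sketch (not the scripts actually used) of pulling those SHA pairs out of a CVE file, plus the release lookup described next; it assumes it runs inside a kernel git checkout:

    // A sketch, not the actual scripts: pull the break-fix SHA pairs out
    // of one CVE tracker file, and ask git which release tag first
    // contains a given commit.
    use std::fs;
    use std::process::Command;

    fn break_fix_pairs (path: &str) -> Vec<(String, String)> {
        let text = fs::read_to_string (path).unwrap ();
        text.lines ()
            .filter_map (|line| line.trim ().strip_prefix ("break-fix:"))
            .filter_map (|rest| {
                let mut shas = rest.split_whitespace ();
                match (shas.next (), shas.next ()) {
                    (Some (broke), Some (fixed)) => Some ((broke.to_string (), fixed.to_string ())),
                    _ => None,
                }
            })
            .collect ()
    }

    fn release_for_sha (sha: &str) -> Option<String> {
        let out = Command::new ("git").args (&["describe", "--contains", sha]).output ().ok ()?;
        if !out.status.success () {
            return None;
        }
        let desc = String::from_utf8 (out.stdout).ok ()?;
        // e.g. "v4.6-rc1~9^2~28" -> "v4.6-rc1"
        desc.trim ().split (|c| c == '~' || c == '^').next ().map (|s| s.to_string ())
    }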

    Then for each SHA, I queried git to find its corresponding release, and made a mapping of release version to release date, wrote out the raw data, and rendered graphs. Each vertical line shows a given CVE from when it was introduced to when it was fixed. Red is “Critical”, orange is “High”, blue is “Medium”, and black is “Low”:

    CVE lifetimes 2011-2016

    And here it is zoomed in to just Critical and High:

    Critical and High CVE lifetimes 2011-2016

    The line in the middle is the date from which I started the CVE search (2011). The vertical axis is actually linear time, but it’s labeled with kernel releases (which are pretty regular). The numerical summary is:

    • Critical: 2 @ 3.3 years
    • High: 34 @ 6.4 years
    • Medium: 334 @ 5.2 years
    • Low: 186 @ 5.0 years

    This comes out to roughly 5 years lifetime again, so not much has changed from Jon’s 2010 analysis.

    While we’re getting better at fixing bugs, we’re also adding more bugs. And for many devices that have been built on a given kernel version, there haven’t been frequent (or some times any) security updates, so the bug lifetime for those devices is even longer. To really create a safe kernel, we need to get proactive about self-protection technologies. The systems using a Linux kernel are right now running with security flaws. Those flaws are just not known to the developers yet, but they’re likely known to attackers, as there have been prior boasts/gray-market advertisements for at least CVE-2010-3081 and CVE-2013-2888.

    (Edit: see my updated graphs that include CVE-2016-5195.)

    © 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

    October 18, 2016

    Microwave Time Remainder Temporal Disorientation, a definition

    Microwave Time Remainder Temporal Disorientation – definition: The disorientation experienced when the remaining cook time on a microwave display appears to be a feasible but inaccurate time of day.

    Example:

    1:15 PM: Suzie puts her leftover pork chops in the office microwave, enters 5:00, and hits Start. After 1 minute and 17 seconds, she hears sizzling, opens the microwave door and takes her meal.

    1:25 PM: John walks by the microwave, sees 3:43 on the display and thinks: “What!? My life is slipping away from me!”

    October 16, 2016

    FreeCAD BIM development news

    Here goes a little report from the FreeCAD front (http://www.freecadweb.org), showing a couple of things I've been working on in the last weeks. Site: As a follow-up of this post (http://yorik.uncreated.net/guestblog.php?2016=269), several new features have been added to the Arch Site object (http://www.freecadweb.org/wiki/index.php?title=Arch_Site). The most important is that the Site is now a Part object, which means it has a...

    October 12, 2016

    Highlight Bloom and Photoillustration Look

    Replicating a 'LucisArt'/Dave Hill type illustrative look

    Over in the forums, community member Sebastien Guyader (@sguyader) posted a neat workflow for emulating a photo-illustrative look popularized by photographers like Dave Hill, where the resulting images often seem to have a sort of hyper-real feeling to them. Some of this feeling comes from a local-contrast boost and slight ‘blooming’ of the lighter tones in the image (though arguably most of the look is due to lighting and compositing of multiple elements).

    To illustrate, here are a few representative samples of Dave Hill’s work that reflects this feeling:

    A collection of example images (Cliff; Finishline Lotion; Track; Nick Saban). © Dave Hill

    A video of Dave presenting on how he brought together the idea and images for the series that the first image above comes from:

    This effect is also popularized in Photoshop® filters such as LucisArt in an effort to attain what some would (erroneously) call an “HDR” effect. Really what they likely mean is a not-so-subtle tone mapping. In particular, the exaggerated local contrast is often what garners folks’ attention.

    We had previously posted about a method for exaggerating fine local contrasts and details using the “Freaky Details” method described by Calvin Hollywood. This workflow provides a similar idea but different results that many might find more appealing (it’s not as gritty as the Freaky Details approach).

    Sebastien produced some great looking preview images to give folks a feeling for what the process would produce:

    Images from pixabay (CC0, public domain): motorcycle, car, woman.

    Replicating a “Dave Hill”/“LucisArt” effect

    Sebastien’s approach relies only on having the always useful G’MIC plugin for GIMP. The general workflow is to do a high-pass frequency separation, and to apply some effects like local contrast enhancement and some smoothing on the residual low-pass layer. Then recombine the high+low pass layers to get the final result.

    1. Open the image.
    2. Duplicate the base layer.
      Rename it to “Lowpass”.
    3. With the top layer (“Lowpass”) active, open G’MIC.
    4. Use the Photocomix smoothing filter:

      Testing → Photocomix → Photocomix smoothing

      Set the Amplitude to 10. Apply.
      This is to taste, but a good starting place might be around 1% of the image dimensions (so for a 2000px wide image, try using an Amplitude of 20).
    5. Change the “Lowpass” layer blend mode to Grain extract.
    6. Right-Click on the layer and choose New from visible.
      Rename this layer from “Visible“ to something more memorable like “Highpass” and set its layer mode to Grain merge.
      Turn off this layer visibility for now.
    7. Activate the “Lowpass” layer and set its layer blend mode back to Normal.
      The rest of the filters are applied to this “Lowpass” layer.
    8. Open G’MIC again.
      Apply the Simple local contrast filter:

      Details → Simple local contrast

      Using:
      • Edge Sensitivity to 25
      • Iterations to 1
      • Paint effect to 50
      • Post-gamma to 1.20
    9. Open G’MIC again.
      Now apply the Graphic novel filter:

      Artistic → Graphic novel

      Using:
      • check the Skip this step checkbox for Apply Local Normalization
      • Pencil size to 1
      • Pencil amplitude to 100-200
      • Pencil smoother sharpness/edge protection/smoothness
        to 0
      • Boost merging options Mixer to Soft light
      • Painter’s touch sharpness to 1.26
      • Painter’s edge protection flow to 0.37
      • Painter’s smoothness to 1.05
    10. Finally, make the “Highpass” layer visible again to bring back the fine details.

    Trying It Out!

    Let’s walk through the process. Sebastien got his sample images from the website https://pixabay.com, so I thought I would follow suit and find something suitable from there also. After some searching I found this neat image from Jerzy Gorecki, licensed Creative Commons 0/Public Domain.

    The base image (link).
    From pixabay (CC0, public domain): Jerzy Gorecki.

    Frequency Separation

    The first steps (1—7) are to create a High/Low pass frequency separation of the image. If you have a different method for obtaining the separation then feel free to use it. Sebastien uses the Photocomix smoothing filter to create his low-pass layer (other options might be Gaussian blur, bi-lateral smoothing, or even wavelets).

    The basic steps to do this are to duplicate the base layer, blur it, then set the layer blend mode to Grain extract and create a new layer from visible. The new layer will be the Highpass (high-frequency) details and should have its layer blend mode set to Grain merge. The original blurred layer is the Lowpass (low-frequency) information and should have its layer blend mode set back to Normal.
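    (For the curious, the arithmetic behind the trick, using GIMP's 8-bit definitions of those blend modes: Grain extract computes highpass = base − blur(base) + 128, and Grain merge computes blur(base) + highpass − 128, which reconstructs the base image exactly. That's why the recombined result looks identical to the original, and why edits to the “lowpass” layer only affect the low-frequency content.)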

    So, following Sebastien’s steps, duplicate the base layer and rename the layer to “lowpass”. Then open G’MIC and apply:

    Testing → Photocomix → Photocomix smoothing

    with an amplitude of around 20. Change this to suit your own taste, but about 1% of the image width is a decent starting point. You’ll now have the base layer and the “lowpass” layer above it that has been smoothed:

    The “lowpass” layer after Photocomix smoothing with Amplitude set to 20.

    Setting the “lowpass” layer blend mode to Grain extract will reveal the high-frequency details:

    The high-frequency details visible after setting the blurred “lowpass” layer blend mode to Grain extract.

    Now create a new layer from what is currently visible. Either right-click the “lowpass” layer and choose “New from visible” or from the menus:

    Layer → New from Visible

    Rename this new layer from “Visible” to “highpass” and set its layer blend mode to Grain merge. Select the “lowpass” layer and set its layer blend mode back to Normal.

    The layer stack at this point.

    The visible result should be back to what your starting image looked like. The rest of the steps for this tutorial will operate on the “lowpass” layer. You can leave the “highpass” layer visible during the rest of the steps to see what your results will look like.

    Modifying the Low-Frequency Layer

    These next steps will modify the underlying low-frequency image information to smooth it out and give it a bit of a contrast boost. First the “Simple local contrast” filter will separate tones and do some preliminary smoothing, while the “Graphic novel” filter will provide a nice boost to light tones along with further smoothing.

    Simple Local Contrast

    On the “lowpass” layer, open G’MIC and find the “Simple local contrast” filter:

    Details → Simple local contrast

    Change the following settings:

    • Edge Sensitivity to 25
    • Iterations to 1
    • Paint effect to 50
    • Post-gamma to 1.20

    This will smooth out overall tones while simultaneously providing a nice local contrast boost. This is the step that causes small lighting details to “pop”:

    After applying the “Simple local contrast” filter.
    (Click to compare to the original image)

    The contrast increase provides a nice visual punch to the image. The addition of the “Graphic novel” filter will push the overall image much closer to a feeling of a photo-illustration.

    Graphic Novel

    Still on the “lowpass” layer, re-open G’MIC and open the “Graphic Novel” filter:

    Artistic → Graphic novel

    Change the following settings:

    • check the Skip this step checkbox for Apply Local Normalization
    • Pencil size to 1
    • Pencil amplitude to 100-200
    • Pencil smoother sharpness/edge protection/smoothness
      to 0
    • Boost merging options Mixer to Soft light
    • Painter’s touch sharpness to 1.26
    • Painter’s edge protection flow to 0.37
    • Painter’s smoothness to 1.05

    The intent with this filter is to further smooth the overall tones, simplify details, and to give a nice boost to the light tones of the image:

    After applying the “Graphic novel” filter.
    (Click to compare to the local contrast result)

    The effect at 100% opacity can be a little strong. If so, simply adjust the opacity of the “lowpass” layer to taste. In some cases it would probably be desirable to mask areas you don’t want the effect applied to.

    I’ve included the GIMP .xcf.bz2 file of this image while I was working on it for this article. You can download the file here (34.9MB). I did each step on a new layer so if you want to see the results of each effect step-by-step, simply turn that layer on/off:

    Example XCF layers

    Finally, a great big Thank You! to Sebastien Guyader (@sguyader) for sharing this with everyone in the community!

    A G’MIC Command

    Of course, this wouldn’t be complete if someone didn’t come along with the direct G’MIC commands to get a similar result! And we can thank Iain Fergusson (@Iain) for coming up with the commands:

    --gimp_anisotropic_smoothing[0] 10,0.16,0.63,0.6,2.35,0.8,30,2,0,1,1,0,1
    
    -sub[0] [1]
    
    -simplelocalcontrast_p[1] 25,1,50,1,1,1.2,1,1,1,1,1,1
    -gimp_graphic_novelfxl[1] 1,2,6,5,20,0,1,100,0,1,0,0.78,1.92,0,0,2,1,1,1,1.26,0.37,1.05
    -add
    -c 0,255
    

    October 11, 2016

    New Mexico LWV Voter Guides are here!

    [Vote button] I'm happy to say that our state League of Women Voters Voter Guides are out for the 2016 election.

    My grandmother was active in the League of Women Voters most of her life (at least after I was old enough to be aware of such things). I didn't appreciate it at the time -- and I also didn't appreciate that she had been born in a time when women couldn't legally vote, and the 19th amendment, giving women the vote, was ratified just a year before she reached voting age. No wonder she considered the League so important!

    The LWV continues to work to extend voting to people of all genders, races, and economic groups -- especially important in these days when the Voting Rights Act is under attack and so many groups are being disenfranchised. But the League is important for another reason: local LWV chapters across the country produce detailed, non-partisan voter guides for each major election, which are distributed free of charge to voters. In many areas -- including here in New Mexico -- there's no equivalent of the "Legislative Analyst" who writes the lengthy analyses that appear on California ballots weighing the pros, cons and financial impact of each measure. In the election two years ago, not that long after Dave and I moved here, finding information on the candidates and ballot measures wasn't easy, and the LWV Voter Guide was by far the best source I saw. It's the main reason I joined the League, though I also appreciate the public candidate forums and other programs they put on.

    LWV chapters are scrupulous about collecting information from candidates in a fair, non-partisan way. Candidates' statements are presented exactly as they're received, and all candidates are given the same specifications and deadlines. A few candidates ignored us this year and didn't send statements despite repeated emails and phone calls, but we did what we could.

    New Mexico's state-wide voter guide -- the one I was primarily involved in preparing -- is at New Mexico Voter Guide 2016. It has links to guides from three of the four local LWV chapters: Los Alamos, Santa Fe, and Central New Mexico (Albuquerque and surrounding areas). The fourth chapter, Las Cruces, is still working on their guide and they expect it soon.

    I was surprised to see that our candidate information doesn't include links to websites or social media. Apparently that's not part of the question sheet they send out, and I got blank looks when I suggested we should make sure to include that next time. The LWV does a lot of important work but they're a little backward in some respects. That's definitely on my checklist for next time, but for now, if you want a candidate's website, there's always Google.

    I also helped a little on Los Alamos's voter guide, making suggestions on how to present it on the website (I maintain the state League website but not the Los Alamos site), and participated in the committee that wrote the analysis and pro and con arguments for our contentious charter amendment proposal to eliminate the elective office of sheriff. We learned a lot about the history of the sheriff's office here in Los Alamos, and about state laws and insurance rules regarding sheriffs, and I hope the important parts of what we learned are reflected in both sides of the argument.

    The Voter Guides also have a link to a Youtube recording of the first Los Alamos LWV candidate forum, featuring NM House candidates, DA, Probate judge and, most important, the debate over the sheriff proposition. The second candidate forum, featuring US House of Representatives, County Council and County Clerk candidates, will be this Thursday, October 13 at 7 (refreshments at 6:30). It will also be recorded thanks to a contribution from the AAUW.

    So -- busy, busy with election-related projects. But I think the work is mostly done (except for the one remaining forum), the guides are out, and now it's time to go through and read the guides. And then the most important part of all: vote!

    October 10, 2016

    Visualizing the raw (sensor) highlight clipping

    Have you ever over-exposed your images? Have you ever noticed that your images look flat and dull after you apply negative exposure compensation? Even though the over/underexposed warning says there is no overexposure? Have you ever wondered what is going on? Read on.

    the Problem

    First, why would you want to know which pixels are overexposed, clipped?

    Consider this image:
    rawoverexposed-0

     … Why is the sky so white? Why is the image so flat and dull?

    Let's enable the overexposure indicator...
    rawoverexposed-0.5
    Nope, it does not indicate that any part of the image is overexposed.

    Now, let's see what happens if we disable the highlight reconstruction module:
    rawoverexposed-1
    Eww, the sky is pink!

    An experienced person knows that this means the image was overexposed when it was taken, and that it is so dull and flat because negative exposure compensation was applied via the exposure module.

    Many of you have unintentionally overexposed an image at some point. As you know, it is hard to figure out exactly which part of the image is overexposed, i.e. clipped.

    But. What if it is actually very easy to figure out?

    I'll show you the end result, what darktable's new, raw-based overexposure indicator says about that image, and then we will discuss details:
    rawoverexposed-2

    digital image processing, mathematical background

    While modern sensors capture an astonishing dynamic range, they can still capture only so much.
    A sensor consists of millions of pixels, each containing a photodetector and an active amplifier. Each of these pixels can be thought of as a bucket: there is an upper limit to the number of photons it can capture.
    This means there is a point above which the sensor can not distinguish how much light it received.

    Now, the pixel has captured some photons, and so it holds some charge that can be measured. But that is an analog value. For better or worse, all modern cameras and software operate in the digital world, so the next step is the conversion of that charge into a digital signal via an ADC.

    Most sane cameras that can save raw files store those pixel values as an array of unsigned integers.
    What can we tell about those values?

    • Sensor readout results in some noise (black noise + readout noise), meaning that even with the shortest exposure, the pixels will not have zero value.
      That is the black level.
      For Canon it is often between \(\mathbf{2000}\) and \(\mathbf{2050}\).
    • Due to the non-magical nature of photosensitive pixels and ADC, there is some upper limit on the value each pixel can have. That limit may be different for each pixel, be it due to the different CFA color, or just manufacturing tolerances. Most modern Canon cameras produce 14-bit raw images, which means each pixel may have a value between \(\mathbf{0}\) and \(\mathbf{{2^{14}}-1}\) (i.e. \(\mathbf{16383}\)).
      So the lowest maximal value that can still be represented by all the pixels is called the white level.
      For Canon it is often between \(\mathbf{13000}\) and \(\mathbf{16000}\).
    • Both of these parameters also often depend on ISO.

    why is the white level so low? (you can skip this)

    Disclaimer: this is just my understanding of the subject, and my understanding may be wrong.

    You may ask: why is the white level less than the maximal value that can be stored in the raw file (e.g., for 14-bit raw images, less than \(\mathbf{{2^{14}}-1}\), i.e. \(\mathbf{16383}\))?
    I have intentionally skipped over one more component of the sensor - an active amplifier.
    It is the second most important component of the sensor (after the photodetectors themselves).

    The saturation point of the photodetector is much lower than the saturation point of the ADC. Also, due to the non-magical nature of the ADC, it has a very specific nominal voltage range \(\mathbf{V_{RefLow}..V_{RefHi}}\), outside of which it can not work properly.
    E.g. the photodetector may output an analog signal with an amplitude of (guess, general ballpark, not precise values) \(\mathbf{1..10}\) \(\mathbf{mV}\), while the ADC expects the input analog signal to have an amplitude of \(\mathbf{1..10}\) \(\mathbf{V}\).
    So if we directly pass the charge from the photodetector to the ADC, at best we will get a very faint digital signal, with a much smaller magnitude than what the ADC could produce, and thus with a very bad (low) SNR.
    Also see: Signal conditioning.

    Thus, when quantizing a non-amplified analog signal, we lose data that can not be recovered later.
    This means the analog signal must be amplified, to match the output voltage levels of the photodetector to the [expected] input voltage levels of the ADC. That is done by an amplifier. There may be more than one amplifier, and more than one amplification step.

    Okay, what if we amplify the analog signal from the photodetector by three orders of magnitude (\(\mathbf{\times 1000}\))? I.e. we had \(\mathbf{5}\) \(\mathbf{mV}\), but now have \(\mathbf{5}\) \(\mathbf{V}\). At first glance all seems in order: the signal is within the expected range.
    But we need to take into account one important detail: the output voltage of a photodetector depends on the amount of light it received and on the exposure time.
    So in low light and with a short exposure it will output the minimal voltage (in our example, \(\mathbf{1}\) \(\mathbf{mV}\)), and if we amplify that, we get \(\mathbf{1}\) \(\mathbf{V}\), which is the \(\mathbf{V_{RefLow}}\) of the ADC.
    Similarly, in bright light and with a long exposure it will output the maximal voltage (in our example, \(\mathbf{10}\) \(\mathbf{mV}\)), and if we amplify that, we get \(\mathbf{10}\) \(\mathbf{V}\), which is, again, the \(\mathbf{V_{RefHi}}\) of the ADC.

    So there are obviously cases where a constant amplification factor gives a bad signal range. Thus we need multiple amplifiers, each with a different gain, and we need to be able to toggle them separately to control the amplification in finer steps.

    As you may have guessed by now, this signal amplification is what results in the white level being at e.g. \(\mathbf{16000}\), or some other value. Basically, this amplification is how the ISO level is implemented in hardware.

    TL;DR, so why?

    Because of the analog gain that was applied to the data to bring it into the nominal range without blowing out (clipping, making bigger than \(\mathbf{16383}\)) the usable highlights. The gain is applied in finite, discrete steps, so it may be impossible to apply a finer gain that would bring the white level closer to \(\mathbf{16383}\).

    This is a very brief summary; for a detailed write-up I can direct you to the Magic Lantern's CMOS/ADTG/Digic register investigation on ISO.

    the first steps of processing a raw file

    All right, we got a sensor readout – an array of unsigned integers – how do we get from that to an image that can be displayed?

    1. Convert the values from integer (most often 16-bit unsigned) to float (not strictly required, but best for precision reasons; we use 32-bit float)
    2. Subtract the black level
    3. Normalize the pixels so that the white level is \(\mathbf{1.0}\)
      The simplest way to do that is to divide each value by \(\mathbf{({white level} - {black level})}\)
      (These three steps are done by the raw black/white point module; a code sketch of the whole pipeline follows this list.)

    4. Next, the white balance is applied. It is as simple as multiplying each separate CFA color by a specific coefficient. This so-called white balance vector can be acquired from several places:
      1. Camera may store it in the image's metadata.
        (That is what preset = camera does)
      2. If the color matrix for a given sensor is known, an approximate white balance (that is, which will only take the sensor into account, but will not adjust for illuminant) can be computed from that matrix.
        (That is what preset = camera neutral does)
      3. Taking a simple arithmetic mean (average) of each of the color channels may give a good-enough inverted white-balance multiplier.
        IMPORTANT: the computed white balance will be good only if, on average, the image is gray.
        That is, it will correct the white balance so that the average color becomes gray, so if the average color is not neutral gray (e.g. red), the image will look wrong.

        (That is what preset = spot white balance does)
      4. etc (user input, camera wb preset, ...)

      As you remember, in the previous step, we have scaled the data so that the white level is \(\mathbf{1.0}\), for every color channel.
      White balance coefficients scale each channel separately. For example, a white balance vector may be \({\begin{pmatrix} 2.0 , 0.9 , 1.5 \end{pmatrix}}^{T}\). That is, the Red channel will be scaled by \(\mathbf{2.0}\), the Green channel by \(\mathbf{0.9}\), and the Blue channel by \(\mathbf{1.5}\).
      In practice, however, the white balance vector is most often normalized so that the Green channel multiplier is \(\mathbf{1.0}\).

      (That step is done by the white balance module.)

    5. And last, highlight handling.
      As we know from the definition, all data values bigger than the white level are unusable – clipped. Without / before white balance correction, it is clear that all values bigger than \(\mathbf{1.0}\) are clipped, and they are useless without some advanced processing.

      Now, what did the white balance correction do to the white levels? Correct: the white levels are now \(\mathbf{2.0}\) for the Red channel, \(\mathbf{0.9}\) for the Green channel, and \(\mathbf{1.5}\) for the Blue channel.

      As we all know, the white color is \({\begin{pmatrix} 1.0 , 1.0 , 1.0 \end{pmatrix}}^{T}\). But the maximal values (the per-channel white levels) are \({\begin{pmatrix} 2.0 , 0.9 , 1.5 \end{pmatrix}}^{T}\), so our "white" will not be white, but, as experienced users may guess, purple-ish. What do we do?

      Since for the white color all the components have exactly the same value – \(\mathbf{1.0}\) – we just need to make sure that the maximal values are the same. We can not scale each of the channels separately, because that would change the white balance. We simply need to pick the minimal white level – \(\mathbf{0.9}\) in our case – and clip all the data to that level. I.e. all the data with a value less than or equal to that threshold will retain its value; and all the pixels with a value greater than the threshold will have the value of the threshold – \(\mathbf{0.9}\).

      Alternatively, one could try to recover those highlights; see the highlight reconstruction module and
      Color Reconstruction
      (though the latter only guesses color based on the surroundings, does not actually reconstruct the channels, and sits a bit too late in the pipe).

      If you don't do highlight handling, you get what you saw in the third image in this article: ugly, unnatural-looking, discolored highlights.
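
    To make the above concrete, here is a minimal NumPy sketch of steps 1–5 (illustrative only: the hardcoded black/white levels and the white balance vector are hypothetical placeholders, not darktable's actual code):

    import numpy as np
    
    black_level = 2048                  # hypothetical, read from camera metadata
    white_level = 16000                 # hypothetical, read from camera metadata
    wb = np.array([2.0, 0.9, 1.5])      # hypothetical R, G, B multipliers
    
    def develop(raw, cfa):
        """raw: 2-D array of sensor values; cfa: same shape, 0/1/2 = R/G/B."""
        # steps 1-3: convert to float, subtract black, normalize white to 1.0
        img = (raw.astype(np.float32) - black_level) / (white_level - black_level)
        # step 4: white balance - scale each CFA color by its coefficient
        img = img * wb[cfa]
        # step 5: clip everything to the minimal per-channel white level (0.9)
        return np.minimum(img, wb.min())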

    Note: you might know that there are more steps required (namely: demosaicing, base curve, input color profile, output color profile; there may be others), but for the purpose of detecting and visualizing highlight clipping they are unimportant, so I will not talk about them here.

    From that list, it should now be clear that all the pixels whose value is greater than the minimal per-channel white level right before the highlight reconstruction module are the clipped pixels.

    the Solution

    But a technical problem arises: we need to visualize the clipped pixels on top of the fully processed image, while we only know whether a pixel is clipped or not in the input buffer of the highlight reconstruction module.
    And we can not visualize clipping in the highlight reconstruction module itself, because the data is still mosaiced there, and other modules will be applied after it anyway.

    The problem was solved by back-transforming the given white balance coefficients and the white level, comparing the values of the original raw buffer produced by the camera against that threshold, and back-transforming output pixel coordinates through all the geometric distortions to figure out which pixel in the original input buffer needs to be checked.
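
    Continuing the hypothetical sketch from the list above, the back-transform amounts to expressing the clipping threshold in raw sensor units, so the original buffer can be tested directly (illustrative code, not darktable's implementation):

    def raw_clip_threshold(c):
        """Back-transform the clipping threshold for CFA color c into raw units."""
        return black_level + (white_level - black_level) * wb.min() / wb[c]
    
    def clipped_mask(raw, cfa):
        """True wherever the original raw buffer is (color-)clipped."""
        thresholds = np.array([raw_clip_threshold(c) for c in range(3)])
        return raw > thresholds[cfa]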

    This seems to be the most flexible solution so far:

    • We can visualize overexposure on top of final, fully-processed image. That means, no module messes with the visualization
    • We do sample the original input buffer. That means we can actually know whether a given pixel is clipped or not

    Obviously, this new raw-based overexposure indicator depends on the specific sensor pattern.
    The good news is, it just works for both Bayer and X-Trans sensors!

    modes of operation

    rawoverexposed-ui

    The raw-based overexposure indicator has 3 different modes of operation (a code sketch of all three follows the list):

    1. mark with CFA color

      • If the clipped pixel was Red, a Red pixel will be displayed.
      • If the clipped pixel was Green, a Green pixel will be displayed.
      • If the clipped pixel was Blue, a Blue pixel will be displayed.

      Sample output, X-Trans image.
      There are some Blue, Green and Red pixels clipped (counting towards the centre)
      rawoverexposed-xtrans-mode-cfa

    2. mark with solid color

      • If the raw pixel was clipped, it will be displayed in a given color (one of: red, green, blue, black)

      Same area, with color scheme = black.
      The more black dots the area contains, the more clipped pixels there are in that area.
      rawoverexposed-xtrans-mode-solid-black

    3. false color

      • If the clipped pixel was Red, the Red channel for current pixel will be set to \(\mathbf{0.0}\)
      • If the clipped pixel was Green, the Green channel for current pixel will be set to \(\mathbf{0.0}\)
      • If the clipped pixel was Blue, the Blue channel for current pixel will be set to \(\mathbf{0.0}\)

      Same area.
      rawoverexposed-xtrans-mode-falsecolor
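
    Continuing the earlier sketch, the three modes could be expressed like this (illustrative NumPy code with hypothetical names; img is the processed RGB image in \(\mathbf{[0..1]}\), clipped and cfa are per-pixel arrays as before):

    def visualize(img, clipped, cfa, mode, solid=(0.0, 0.0, 0.0)):
        out = img.copy()
        for c in range(3):
            sel = clipped & (cfa == c)
            if mode == "cfa":            # 1. mark with the clipped pixel's CFA color
                out[sel] = 0.0
                out[sel, c] = 1.0
            elif mode == "falsecolor":   # 3. zero out the channel that clipped
                out[sel, c] = 0.0
        if mode == "solid":              # 2. mark every clipped pixel with one color
            out[clipped] = solid
        return out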

    understanding raw overexposure visualization

    So, let's go back to the fourth image in this article:
    rawoverexposed-2
    This is mode = mark with CFA color.

    What does it tell us?

    • Most of the sky is indeed clipped.
    • In the top-right portion of the image, only the Blue channel is clipped.
    • In the top-left portion of the image, Blue and Red channels are clipped.
    • No Green channel clipping.

    Now that you know this, you:

    1. Will know better than to over-expose so much next time :) (hint to myself, mostly)
    2. Could try to recover from clipping a bit

      1. either by not applying negative exposure compensation in exposure module
      2. or using highlight reconstruction module with mode = reconstruct in LCh
      3. or using highlight reconstruction module with mode = reconstruct color, though it is known to produce artefacts
      4. or using color reconstruction module

    an important note about sensor clipping vs. color clipping

    By default, the module visualizes the color clipping, NOT the sensor clipping.
    The colors may be clipped, while the sensor is still not clipping.
    Example:
    rawoverexposed-2

    Let's enable the indicator...
    rawoverexposed-3
    The visualization says that Red and Blue channels are clipped.

    But now let's disable the white balance module, while keeping indicator active:
    rawoverexposed-4

    Interesting, isn't it? So actually there is no sensor-level clipping, but the image is still overexposed, because after the white balance is applied, the channels do clip.

    While we're at it, I wanted to show the highlight reconstruction module with mode = reconstruct in LCh.
    If you've ever used it, you know it used to produce pretty useless results.
    But not anymore:
    highlight-reconstruction-reconstruct-in-lch
    If you compare this with the first version of the image in this section, the highlights, although clipped, are somewhat reconstructed, so the image is not so flat and dull; there is some gradient to it.

    Too boring? :)

    With a sufficiently exposed image (or by just setting the black level to \(\mathbf{0}\) and the white level to \(\mathbf{1}\) in the raw black/white point module, with clipping threshold = \(\mathbf{0.0}\) and mode = mark with CFA color in the raw overexposure indicator), a lucky combination of image size, output size and zoom level produces a familiar-looking pattern :)
    rawoverexposed-bayer-pattern
    That is basically an artefact of the downscaling for display. Though, depending on feedback, it may be worth properly implementing this as a feature...

    Now, what if we enable the lens correction module? :)
    rawoverexposed-bayer-pattern-and-lens-correction
    So we could even create glitch-art with this thing!
    Technically, that is some kind of visualization of lens distortion.

    October 08, 2016

    Bullet 2.85 released : pybullet and Virtual Reality support for HTC Vive and Oculus Rift

    bullet_pybullet_vr

    We have been making a lot of progress in higher quality physics simulation for robotics, games and visual effects. To make our physics simulation easier to use, especially for roboticists and machine learning experts, we created Python bindings; see examples/pybullet. In addition, we added Virtual Reality support for HTC Vive and Oculus Rift using the OpenVR SDK. See the attached YouTube video. Updated documentation will be added soon, as well as possible show-stopper bug-fixes, so the actual release tag may bump up to 2.85.x. Download the release from GitHub here.
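
    As a taste of the new bindings, a minimal pybullet session might look something like this (a sketch, not from the release notes; it assumes Bullet's example data files, such as plane.urdf, are on the search path):

    import pybullet as p
    
    p.connect(p.DIRECT)        # headless; use p.GUI for a window
    p.loadURDF("plane.urdf")   # assumes the Bullet data files are findable
    p.setGravity(0, 0, -10)
    for _ in range(240):       # step the simulation forward a few hundred ticks
        p.stepSimulation()
    p.disconnect()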


    October 05, 2016

    Play notes, chords and arbitrary waveforms from Python

    Reading Stephen Wolfram's latest discussion of teaching computational thinking (which, though I mostly agree with it, is more an extended ad for Wolfram Programming Lab than a discussion of what computational thinking is and why we should teach it) I found myself musing over ideas for future computer classes for Los Alamos Makers. Students, and especially kids, like to see something other than words on a screen. Graphics and games are good, or robotics when possible ... but another fun project a novice programmer can appreciate is music.

    I found myself curious what you could do with Python, since I hadn't played much with Python sound generation libraries. I did discover a while ago that Python is rather bad at playing audio files, though I did eventually manage to write a music player script that works quite well. What about generating tones and chords?

    A web search revealed that this is another thing Python is bad at. I found lots of people asking about chord generation, and a handful of half-baked ideas that relied on long-obsolete packages or external programs. But none of them actually worked, at least not without requiring Windows or relying on larger packages like fluidsynth (which looked worth exploring some day when I have more time).

    Play an arbitrary waveform with Pygame and NumPy

    But I did find one example based on a long-obsolete Python package called Numeric which, when rewritten to use NumPy, actually played a sound. You can take a NumPy array and play it using a pygame.sndarray object this way:

    import pygame, pygame.sndarray
    
    # The mixer must be initialized before making sounds; mono, 16-bit
    # signed samples at 44.1kHz match the int16 arrays generated below:
    pygame.mixer.init(frequency=44100, size=-16, channels=1)
    
    def play_for(sample_wave, ms):
        """Play the given NumPy array, as a sound, for ms milliseconds."""
        sound = pygame.sndarray.make_sound(sample_wave)
        sound.play(-1)
        pygame.time.delay(ms)
        sound.stop()
    

    Then you just need to calculate the waveform you want to play. NumPy can generate sine waves on its own, while scipy.signal can generate square and sawtooth waves. Like this:

    import numpy
    import scipy.signal
    
    sample_rate = 44100
    
    def sine_wave(hz, peak, n_samples=sample_rate):
        """Compute N samples of a sine wave with given frequency and peak amplitude.
           Defaults to one second.
        """
        length = sample_rate / float(hz)
        omega = numpy.pi * 2 / length
        xvalues = numpy.arange(int(length)) * omega
        onecycle = peak * numpy.sin(xvalues)
        return numpy.resize(onecycle, (n_samples,)).astype(numpy.int16)
    
    def square_wave(hz, peak, duty_cycle=.5, n_samples=sample_rate):
        """Compute N samples of a square wave with given frequency and peak amplitude.
           Defaults to one second.
        """
        # One second's worth of samples at the given frequency:
        t = numpy.linspace(0, 1, sample_rate, endpoint=False)
        wave = scipy.signal.square(2 * numpy.pi * hz * t, duty=duty_cycle)
        wave = numpy.resize(wave, (n_samples,))
        return (peak / 2 * wave).astype(numpy.int16)
    
    # Play A (440Hz) for 1 second as a sine wave:
    play_for(sine_wave(440, 4096), 1000)
    
    # Play A-440 for 1 second as a square wave:
    play_for(square_wave(440, 4096), 1000)
    

    Playing chords

    That's all very well, but it's still a single tone, not a chord.

    To generate a chord of two notes, you can add the waveforms for the two notes. For instance, 440Hz is concert A, and the A one octave above it is double the frequency, or 880 Hz. If you wanted to play a chord consisting of those two As, you could do it like this:

    play_for(sum([sine_wave(440, 4096), sine_wave(880, 4096)]), 1000)
    

    Simple octaves aren't very interesting to listen to. What you want is chords like major and minor triads and so forth. If you google for chord ratios Google helpfully gives you a few of them right off, then links to a page with a table of ratios for some common chords.

    For instance, the major triad ratios are listed as 4:5:6. What does that mean? It means that for a C-E-G triad (the first C chord you learn in piano), the E's frequency is 5/4 of the C's frequency, and the G is 6/4 of the C.

    You can pass that list, [4, 5, 6], to a function that will calculate the right ratios to produce the set of waveforms you need to add to get your chord:

    def make_chord(hz, ratios):
        """Make a chord based on a list of frequency ratios."""
        sampling = 4096    # peak amplitude passed to sine_wave()
        chord = sine_wave(hz, sampling)
        for r in ratios[1:]:
            chord = sum([chord, sine_wave(hz * r / ratios[0], sampling)])
        return chord
    
    def major_triad(hz):
        return make_chord(hz, [4, 5, 6])
    
    play_for(major_triad(440), 1000)   # play for 1000 ms
    

    Even better, you can pass in the waveform you want to use when you're adding instruments together:

    def make_chord(hz, ratios, waveform=None):
        """Make a chord based on a list of frequency ratios
           using a given waveform (defaults to a sine wave).
        """
        sampling = 4096
        if not waveform:
            waveform = sine_wave
        chord = waveform(hz, sampling)
        for r in ratios[1:]:
            chord = sum([chord, waveform(hz * r / ratios[0], sampling)])
        return chord
    
    def major_triad(hz, waveform=None):
        return make_chord(hz, [4, 5, 6], waveform)
    
    play_for(major_triad(440, square_wave), 1000)
    

    There are still some problems. For instance, sawtooth_wave() works fine individually or for pairs of notes, but triads of sawtooths don't play correctly. I'm guessing something about the sampling rate is making their overtones cancel out part of the sawtooth wave. Triangle waves (in scipy.signal, that's a sawtooth wave with rising ramp width of 0.5) don't seem to work right even for single tones. I'm sure these are solvable, perhaps by fiddling with the sampling rate. I'll probably need to add graphics so I can look at the waveform for debugging purposes.
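
    For reference, a sawtooth generator in the same style as square_wave() above might look like this (a sketch consistent with the article, not necessarily identical to the version in the full script linked below):

    def sawtooth_wave(hz, peak, rising_ramp_width=1, n_samples=sample_rate):
        """Compute N samples of a sawtooth wave with given frequency and peak
           amplitude. rising_ramp_width=0.5 gives a triangle wave instead.
        """
        t = numpy.linspace(0, 1, sample_rate, endpoint=False)
        wave = scipy.signal.sawtooth(2 * numpy.pi * hz * t, width=rising_ramp_width)
        wave = numpy.resize(wave, (n_samples,))
        return (peak / 2 * wave).astype(numpy.int16)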

    In any case, it was a fun morning hack. Most chords work pretty well, and it's nice to know how to play any waveform I can generate.

    The full script is here: play_chord.py on GitHub.

    security things in Linux v4.8

    Previously: v4.7. Here are a bunch of security things I’m excited about in Linux v4.8:

    SLUB freelist ASLR

    Thomas Garnier continued his freelist randomization work by adding SLUB support.

    x86_64 KASLR text base offset physical/virtual decoupling

    On x86_64, to implement the KASLR text base offset, the physical memory location of the kernel was randomized, which resulted in the virtual address being offset as well. Due to how the kernel’s “-2GB” addressing works (gcc‘s “-mcmodel=kernel“), it wasn’t possible to randomize the physical location beyond the 2GB limit, leaving any additional physical memory unused as a randomization target. In order to decouple the physical and virtual location of the kernel (to make physical address exposures less valuable to attackers), the physical location of the kernel needed to be randomized separately from the virtual location. This required a lot of work for handling very large addresses spanning terabytes of address space. Yinghai Lu, Baoquan He, and I landed a series of patches that ultimately did this (and in the process fixed some other bugs too). This expands the physical offset entropy to roughly $physical_memory_size_of_system / 2MB bits.

    x86_64 KASLR memory base offset

    Thomas Garnier rolled out KASLR to the kernel’s various statically located memory ranges, randomizing their locations with CONFIG_RANDOMIZE_MEMORY. One of the more notable things randomized is the physical memory mapping, which is a known target for attacks. Also randomized is the vmalloc area, which means attacks against targets vmalloced during boot (which tend to always end up in the same location on a given system) are now harder to locate. (The vmemmap region randomization accidentally missed the v4.8 window and will appear in v4.9.)

    x86_64 KASLR with hibernation

    Rafael Wysocki (with Thomas Garnier, Borislav Petkov, Yinghai Lu, Logan Gunthorpe, and myself) worked on a number of fixes to hibernation code that, even without KASLR, were coincidentally exposed by the earlier W^X fix. With that original problem fixed, then memory KASLR exposed more problems. I’m very grateful everyone was able to help out fixing these, especially Rafael and Thomas. It’s a hard place to debug. The bottom line, now, is that hibernation and KASLR are no longer mutually exclusive.

    gcc plugin infrastructure

    Emese Revfy ported the PaX/Grsecurity gcc plugin infrastructure to upstream. If you want to perform compiler-based magic on kernel builds, now it’s much easier with CONFIG_GCC_PLUGINS! The plugins live in scripts/gcc-plugins/. Current plugins are a short example called “Cyclic Complexity” which just emits the complexity of functions as they’re compiled, and “Sanitizer Coverage” which provides the same functionality as gcc’s recent “-fsanitize-coverage=trace-pc” but back through gcc 4.5. Another notable detail about this work is that it was the first Linux kernel security work funded by Linux Foundation’s Core Infrastructure Initiative. I’m looking forward to more plugins!

    If you’re on Debian or Ubuntu, the required gcc plugin headers are available via the gcc-$N-plugin-dev package (and similarly for all cross-compiler packages).

    hardened usercopy

    Along with work from Rik van Riel, Laura Abbott, Casey Schaufler, and many other folks doing testing on the KSPP mailing list, I ported part of PAX_USERCOPY (the basic runtime bounds checking) to upstream as CONFIG_HARDENED_USERCOPY. One of the interface boundaries between the kernel and user-space is the copy_to_user()/copy_from_user() family of functions. Frequently, the size of a copy is known at compile-time (“built-in constant”), so there’s not much benefit in checking those sizes (hardened usercopy avoids these cases). In the case of dynamic sizes, hardened usercopy checks 3 areas of memory: slab allocations, stack allocations, and kernel text. Direct kernel text copying is simply disallowed. Stack copying is allowed as long as it is entirely contained by the current stack memory range (and on x86, only if it does not include the saved stack frame and instruction pointers). For slab allocations (e.g. those allocated through kmem_cache_alloc() and the kmalloc()-family of functions), the copy size is compared against the size of the object being copied. For example, if copy_from_user() is writing to a structure that was allocated as size 64, but the copy gets tricked into trying to write 65 bytes, hardened usercopy will catch it and kill the process.
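
    As a rough illustration of that decision logic (conceptual Python with made-up names, nothing like the real kernel C):

    def check_usercopy(kind, copy_size, obj_size=None, within_stack=True):
        """Return None if the copy is allowed, else a reason to kill the process."""
        if kind == "text":
            return "direct kernel text copying is simply disallowed"
        if kind == "stack":
            # must be entirely contained by the current stack memory range
            # (and, on x86, not include saved frame/instruction pointers)
            return None if within_stack else "copy spans the stack frame"
        if kind == "slab":
            # e.g. a 65-byte copy into an object allocated as size 64 dies here
            return None if copy_size <= obj_size else "copy exceeds object size"
        return None   # other memory: not covered by the basic bounds checking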

    For testing hardened usercopy, lkdtm gained several new tests: USERCOPY_HEAP_SIZE_TO, USERCOPY_HEAP_SIZE_FROM, USERCOPY_STACK_FRAME_TO,
    USERCOPY_STACK_FRAME_FROM, USERCOPY_STACK_BEYOND, and USERCOPY_KERNEL. Additionally, USERCOPY_HEAP_FLAG_TO and USERCOPY_HEAP_FLAG_FROM were added to test what will be coming next for hardened usercopy: flagging slab memory as “safe for copy to/from user-space”, effectively whitelisting certain slab caches, as done by PAX_USERCOPY. This further reduces the scope of what’s allowed to be copied to/from, since most kernel memory is not intended to ever be exposed to user-space. Adding this logic will require some reorganization of usercopy code to add some new APIs, as PAX_USERCOPY’s approach to handling special-cases is to add bounce-copies (copy from slab to stack, then copy to userspace) as needed, which is unlikely to be acceptable upstream.

    seccomp reordered after ptrace

    By its original design, seccomp filtering happened before ptrace so that seccomp-based ptracers (i.e. SECCOMP_RET_TRACE) could explicitly bypass seccomp filtering and force a desired syscall. Nothing actually used this feature, and as it turns out, it’s not compatible with process launchers that install seccomp filters (e.g. systemd, lxc) since as long as the ptrace and fork syscalls are allowed (and fork is needed for any sensible container environment), a process could spawn a tracer to help bypass a filter by injecting syscalls. After Andy Lutomirski convinced me that ordering ptrace first does not change the attack surface of a running process (unless all syscalls are blacklisted, the entire ptrace attack surface will always be exposed), I rearranged things. Now there is no (expected) way to bypass seccomp filters, and containers with seccomp filters can allow ptrace again.

    That’s it for v4.8! The merge window is open for v4.9…

    © 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
    Creative Commons License

    October 04, 2016

    Working with GIS, terrains and #FreeCAD

    Or, how to build a precise 3D terrain from any place of the world. Again not much visually significant FreeCAD development to show this week, so here is another interesting subject, that I started looking at in an earlier post. We architects should really begin to learn about GIS. GIS stands for Geographic information system and begins to...

    October 03, 2016

    security things in Linux v4.7

    Previously: v4.6. Onward to security things I found interesting in Linux v4.7:

    KASLR text base offset for MIPS

    Matt Redfearn added text base address KASLR to MIPS, similar to what’s available on x86 and arm64. As done with x86, MIPS attempts to gather entropy from various build-time, run-time, and CPU locations in an effort to find reasonable sources during early-boot. MIPS doesn’t yet have anything as strong as x86’s RDRAND (though most have an instruction counter like x86’s RDTSC), but it does have the benefit of being able to use Device Tree (i.e. the “/chosen/kaslr-seed” property) like arm64 does. By my understanding, even without Device Tree, MIPS KASLR entropy should be as strong as pre-RDRAND x86 entropy, which is more than sufficient for what is, similar to x86, not a huge KASLR range anyway: default 8 bits (a span of 16MB with 64KB alignment), though CONFIG_RANDOMIZE_BASE_MAX_OFFSET can be tuned to the device’s memory, giving a maximum of 11 bits on 32-bit, and 15 bits on EVA or 64-bit.

    SLAB freelist ASLR

    Thomas Garnier added CONFIG_SLAB_FREELIST_RANDOM to make slab allocation layouts less deterministic with a per-boot randomized freelist order. This raises the bar for successful kernel slab attacks. Attackers will need to either find additional bugs to help leak slab layout information or will need to perform more complex grooming during an attack. Thomas wrote a post describing the feature in more detail here: Randomizing the Linux kernel heap freelists. (SLAB is done in v4.7, and SLUB in v4.8.)

    eBPF JIT constant blinding

    Daniel Borkmann implemented constant blinding in the eBPF JIT subsystem. With strong kernel memory protections (CONFIG_DEBUG_RODATA) in place, and with the segregation of user-space memory execution from kernel (i.e SMEP, PXN, CONFIG_CPU_SW_DOMAIN_PAN), having a place where user-space can inject content into an executable area of kernel memory becomes very high-value to an attacker. The eBPF JIT was exactly such a thing: the use of BPF constants could result in the JIT producing instruction flows that could include attacker-controlled instructions (e.g. by directing execution into the middle of an instruction with a constant that would be interpreted as a native instruction). The eBPF JIT already uses a number of other defensive tricks (e.g. random starting position), but this added randomized blinding to any BPF constants, which makes building a malicious execution path in the eBPF JIT memory much more difficult (and helps block attempts at JIT spraying to bypass other protections).

    Elena Reshetova updated a 2012 proof-of-concept attack to succeed against modern kernels to help provide a working example of what needed fixing in the JIT. This serves as a thorough regression test for the protection.

    The cBPF JITs that exist in ARM, MIPS, PowerPC, and Sparc still need to be updated to eBPF, but when they do, they’ll gain all these protections immediately.

    Bottom line is that if you enable the (disabled-by-default) bpf_jit_enable sysctl, be sure to set the bpf_jit_harden sysctl to 2 (to perform blinding even for root).

    fix brk ASLR weakness on arm64 compat

    There have been a few ASLR fixes recently (e.g. ET_DYN, x86 32-bit unlimited stack), and while reviewing some suggested fixes to arm64 brk ASLR code from Jon Medhurst, I noticed that arm64’s brk ASLR entropy was slightly too low (by less than 1 bit) for 64-bit and noticeably lower (by 2 bits) for 32-bit compat processes when compared to native 32-bit arm. I simplified the code by using literals for the entropy. Maybe we can add a sysctl some day to control brk ASLR entropy like was done for mmap ASLR entropy.

    LoadPin LSM

    LSM stacking is well-defined since v4.2, so I finally upstreamed a “small” LSM that implements a protection I wrote for Chrome OS several years back. On systems with a static root of trust that extends to the filesystem level (e.g. Chrome OS’s coreboot+depthcharge boot firmware chaining to dm-verity, or a system booting from read-only media), it’s redundant to sign kernel modules (you’ve already got the modules on read-only media: they can’t change). The kernel just needs to know they’re all coming from the correct location. (And this solves loading known-good firmware too, since there is no convention for signed firmware in the kernel yet.) LoadPin requires that all modules, firmware, etc come from the same mount (and assumes that the first loaded file defines which mount is “correct”, hence load “pinning”).

    That’s it for v4.7. Prepare yourself for v4.8 next!

    © 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
    Creative Commons License

    October 01, 2016

    Zsh magic: remove all raw photos that don't have a corresponding JPEG

    Lately, when shooting photos with my DSLR, I've been shooting raw mode but with a JPEG copy as well. When I triage and label my photos (with pho and metapho), I use only the JPEG files, since they load faster and there's no need to index both. But that means that sometimes I delete a .jpg file while the huge .cr2 raw file is still on my disk.

    I wanted some way of removing these orphaned raw files: in other words, for every .cr2 file that doesn't have a corresponding .jpg file, delete the .cr2.

    That's an easy enough shell function to write: loop over *.cr2, change the .cr2 extension to .jpg, check whether that file exists, and if it doesn't, delete the .cr2.
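
    That plain loop is easy to sketch in Python, for comparison (an illustrative version; run it in the directory holding the photos):

    import os, glob
    
    for cr2 in glob.glob('*.cr2'):
        jpg = cr2[:-len('.cr2')] + '.jpg'
        if not os.path.exists(jpg):
            os.remove(cr2)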

    But as I started to write the shell function, it occurred to me: this is just the sort of magic trick zsh tends to have built in.

    So I hopped on over to #zsh and asked, and in just a few minutes, I had an answer:

    rm *.cr2(e:'[[ ! -e ${REPLY%.cr2}.jpg ]]':)
    

    Yikes! And it works! But how does it work? It's cheating to rely on people in IRC channels without trying to understand the answer so I can solve the next similar problem on my own.

    Most of the answer is in the zshexpn man page, but it still took some reading and jumping around to put the pieces together.

    First, we take all files matching the initial wildcard, *.cr2. We're going to apply to them the filename generation code expression in parentheses after the wildcard. (I think you need EXTENDED_GLOB set to use that sort of parenthetical expression.)

    The variable $REPLY is set to the filename the wildcard expression matched; so it will be set to each .cr2 filename, e.g. img001.cr2.

    The expression ${REPLY%.cr2} removes the .cr2 extension. Then we tack on a .jpg: ${REPLY%.cr2}.jpg. So now we have img001.jpg.

    [[ ! -e ${REPLY%.cr2}.jpg ]] checks for the existence of that jpg filename, just like in a shell script.

    So that explains the quoted shell expression. The final, and hardest part, is how to use that quoted expression. That's in section 14.8.7 Glob Qualifiers. (estring) executes string as shell code, and the filename will be included in the list if and only if the code returns a zero status.

    The colons -- after the e and before the closing parenthesis -- are just separator characters. Whatever character immediately follows the e will be taken as the separator, and anything from there to the next instance of that separator (the second colon, in this case) is taken as the string to execute. Colons seem to be the character to use by convention, but you could use anything. This is also the part of the expression responsible for setting $REPLY to the filename being tested.

    So why the quotes inside the colons? They're because some of the substitutions being done would be evaluated too early without them: "Note that expansions must be quoted in the string to prevent them from being expanded before globbing is done. string is then executed as shell code."

    Whew! Complicated, but awfully handy. I know I'll have lots of other uses for that.

    One additional note: section 14.8.5, Approximate Matching, in that manual page caught my eye. zsh can do fuzzy matches! I can't think offhand what I need that for ... but I'm sure an idea will come to me.

    security things in Linux v4.6

    Previously: v4.5. The v4.6 Linux kernel release included a bunch of stuff, with much more of it under the KSPP umbrella.

    seccomp support for parisc

    Helge Deller added seccomp support for parisc, which including plumbing support for PTRACE_GETREGSET to get the self-tests working.

    x86 32-bit mmap ASLR vs unlimited stack fixed

    Hector Marco-Gisbert removed a long-standing limitation to mmap ASLR on 32-bit x86, where setting an unlimited stack (e.g. “ulimit -s unlimited“) would turn off mmap ASLR (which provided a way to bypass ASLR when executing setuid processes). Given that ASLR entropy can now be controlled directly (see the v4.5 post), and that the cases where this created an actual problem are very rare, if a system sees collisions between unlimited stack and mmap ASLR, it can just adjust the 32-bit ASLR entropy instead.

    x86 execute-only memory

    Dave Hansen added Protection Key support for future x86 CPUs and, as part of this, implemented support for “execute only” memory in user-space. On pkeys-supporting CPUs, using mmap(..., PROT_EXEC) (i.e. without PROT_READ) will mean that the memory can be executed but cannot be read (or written). This provides some mitigation against automated ROP gadget finding where an executable is read out of memory to find places that can be used to build a malicious execution path. Using this will require changing some linker behavior (to avoid putting data in executable areas), but seems to otherwise Just Work. I’m looking forward to either emulated QEmu support or access to one of these fancy CPUs.

    CONFIG_DEBUG_RODATA enabled by default on arm and arm64, and mandatory on x86

    Ard Biesheuvel (arm64) and I (arm) made the poorly-named CONFIG_DEBUG_RODATA enabled by default. This feature controls whether the kernel enforces proper memory protections on its own memory regions (code memory is executable and read-only, read-only data is actually read-only and non-executable, and writable data is non-executable). This protection is a fundamental security primitive for kernel self-protection, so making it on-by-default is required to start any kind of attack surface reduction within the kernel.

    On x86 CONFIG_DEBUG_RODATA was already enabled by default, but, at Ingo Molnar’s suggestion, I made it mandatory: CONFIG_DEBUG_RODATA cannot be turned off on x86. I expect we’ll get there with arm and arm64 too, but the protection is still somewhat new on these architectures, so it’s reasonable to continue to leave an “out” for developers that find themselves tripping over it.

    arm64 KASLR text base offset

    Ard Biesheuvel reworked a ton of arm64 infrastructure to support kernel relocation and, building on that, Kernel Address Space Layout Randomization of the kernel text base offset (and module base offset). As with x86 text base KASLR, this is a probabilistic defense that raises the bar for kernel attacks where finding the KASLR offset must be added to the chain of exploits used for a successful attack. One big difference from x86 is that the entropy for the KASLR must come either from Device Tree (in the “/chosen/kaslr-seed” property) or from UEFI (via EFI_RNG_PROTOCOL), so if you’re building arm64 devices, make sure you have a strong source of early-boot entropy that you can expose through your boot-firmware or boot-loader.

    zero-poison after free

    Laura Abbott reworked a bunch of the kernel memory management debugging code to add zeroing of freed memory, similar to PaX/Grsecurity’s PAX_MEMORY_SANITIZE feature. This feature means that memory is cleared at free, wiping any sensitive data so it doesn’t have an opportunity to leak in various ways (e.g. accidentally uninitialized structures or padding), and that certain types of use-after-free flaws cannot be exploited since the memory has been wiped. To take things even a step further, the poisoning can be verified at allocation time to make sure that nothing wrote to it between free and allocation (called “sanity checking”), which can catch another small subset of flaws.

    To understand the pieces of this, it’s worth describing that the kernel’s higher level allocator, the “page allocator” (e.g. __get_free_pages()) is used by the finer-grained “slab allocator” (e.g. kmem_cache_alloc(), kmalloc()). Poisoning is handled separately in both allocators. The zero-poisoning happens at the page allocator level. Since the slab allocators tend to do their own allocation/freeing, their poisoning happens separately (since on slab free nothing has been freed up to the page allocator).

    Only limited performance tuning has been done, so the penalty is rather high at the moment, at about 9% when doing a kernel build workload. Future work will include some exclusion of frequently-freed caches (similar to PAX_MEMORY_SANITIZE), and making the options entirely CONFIG controlled (right now both CONFIGs are needed to build in the code, and a kernel command line is needed to activate it). Performing the sanity checking (mentioned above) adds another roughly 3% penalty. In the general case (and once the performance of the poisoning is improved), the security value of the sanity checking isn’t worth the performance trade-off.

    Tests for the features can be found in lkdtm as READ_AFTER_FREE and READ_BUDDY_AFTER_FREE. If you’re feeling especially paranoid and have enabled sanity-checking, WRITE_AFTER_FREE and WRITE_BUDDY_AFTER_FREE can test these as well.

    To perform zero-poisoning of page allocations and (currently non-zero) poisoning of slab allocations, build with:

    CONFIG_DEBUG_PAGEALLOC=n
    CONFIG_PAGE_POISONING=y
    CONFIG_PAGE_POISONING_NO_SANITY=y
    CONFIG_PAGE_POISONING_ZERO=y
    CONFIG_SLUB_DEBUG=y

    and enable the page allocator poisoning and slab allocator poisoning at boot with this on the kernel command line:

    page_poison=on slub_debug=P

    To add sanity-checking, change PAGE_POISONING_NO_SANITY=n, and add “F” to slub_debug as “slub_debug=PF“.

    read-only after init

    I added the infrastructure to support making certain kernel memory read-only after kernel initialization (inspired by a small part of PaX/Grsecurity’s KERNEXEC functionality). The goal is to continue to reduce the attack surface within the kernel by making even more of the memory, especially function pointer tables, read-only (which depends on CONFIG_DEBUG_RODATA above).

    Function pointer tables (and similar structures) are frequently targeted by attackers when redirecting execution. While many are already declared “const” in the kernel source code, making them read-only (and therefore unavailable to attackers) for their entire lifetime, there is a class of variables that get initialized during kernel (and module) start-up (i.e. written to during functions that are marked “__init“) and then never (intentionally) written to again. Some examples are things like the VDSO, vector tables, arch-specific callbacks, etc.

    As it turns out, most architectures with kernel memory protection already delay making their data read-only until after __init (see mark_rodata_ro()), so it’s trivial to declare a new data section (“.data..ro_after_init“) and add it to the existing read-only data section (“.rodata“). Kernel structures can be annotated with the new section (via the “__ro_after_init” macro), and they’ll become read-only once boot has finished.

    The next step for attack surface reduction infrastructure will be to create a kernel memory region that is passively read-only, but can be made temporarily writable (by a single un-preemptable CPU), for storing sensitive structures that are written to only very rarely. Once this is done, much more of the kernel’s attack surface can be made read-only for the majority of its lifetime.

    As people identify places where __ro_after_init can be used, we can grow the protection. A good place to start is to look through the PaX/Grsecurity patch to find uses of __read_only on variables that are only written to during __init functions. The rest are places that will need the temporarily-writable infrastructure (PaX/Grsecurity uses pax_open_kernel()/pax_close_kernel() for these).

    That’s it for v4.6, next up will be v4.7!

    © 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
    Creative Commons License

    September 28, 2016

    security things in Linux v4.5

    Previously: v4.4. Some things I found interesting in the Linux kernel v4.5:

    CONFIG_IO_STRICT_DEVMEM

    The CONFIG_STRICT_DEVMEM setting that has existed for a long time already protects system RAM from being accessible through the /dev/mem device node to root in user-space. Dan Williams added CONFIG_IO_STRICT_DEVMEM to extend this so that if a kernel driver has reserved a device memory region for use, it will become unavailable to /dev/mem also. The reservation in the kernel was to keep other kernel things from using the memory, so this is just common sense to make sure user-space can’t stomp on it either. Everyone should have this enabled. (And if you have a system where you discover you need IO memory access from userspace, you can boot with “iomem=relaxed” to disable this at runtime.)

    If you’re looking to create a very bright line between user-space having access to device memory, it’s worth noting that if a device driver is a module, a malicious root user can just unload the module (freeing the kernel memory reservation), fiddle with the device memory, and then reload the driver module. So either just leave out /dev/mem entirely (not currently possible with upstream), build a monolithic kernel (no modules), or otherwise block (un)loading of modules (/proc/sys/kernel/modules_disabled).

    ptrace fsuid checking

    Jann Horn fixed some corner-cases in how ptrace access checks were handled on special files in /proc. For example, prior to this fix, if a setuid process temporarily dropped privileges to perform actions as a regular user, the ptrace checks would not notice the reduced privilege, possibly allowing a regular user to trick a privileged process into disclosing things out of /proc (ASLR offsets, restricted directories, etc) that they normally would be restricted from seeing.

    ASLR entropy sysctl

    Daniel Cashman standardized the way architectures declare their maximum user-space ASLR entropy (CONFIG_ARCH_MMAP_RND_BITS_MAX) and then created a sysctl (/proc/sys/vm/mmap_rnd_bits) so that system owners could crank up entropy. For example, the default entropy on 32-bit ARM was 8 bits, but the maximum could be as much as 16. If your 64-bit kernel is built with CONFIG_COMPAT, there’s a compat version of the sysctl as well, for controlling the ASLR entropy of 32-bit processes: /proc/sys/vm/mmap_rnd_compat_bits.

    Here’s how to crank your entropy to the max, without regard to what architecture you’re on:

    for i in "" "compat_"; do f=/proc/sys/vm/mmap_rnd_${i}bits; n=$(cat $f); while echo $n > $f ; do n=$(( n + 1 )); done; done
    

    strict sysctl writes

    Two years ago I added a sysctl for treating sysctl writes more like regular files (i.e. what’s written first is what appears at the start), rather than like a ring-buffer (what’s written last is what appears first). At the time it wasn’t clear what might break if this was enabled, so a WARN was added to the kernel. Since only one such string showed up in searches over the last two years, the strict writing mode was made the default. The setting remains available as /proc/sys/kernel/sysctl_writes_strict.

    seccomp UM support

    Mickaël Salaün added seccomp support (and selftests) for user-mode Linux. Moar architectures!

    seccomp NNP vs TSYNC fix

    Jann Horn noticed and fixed a problem where if a seccomp filter was already in place on a process (after being installed by a privileged process like systemd, a container launcher, etc) then the setting of the “no new privs” flag could be bypassed when adding filters with the SECCOMP_FILTER_FLAG_TSYNC flag set. Bypassing NNP meant it might be possible to trick a buggy setuid program into doing things as root after a seccomp filter forced a privilege drop to fail (generally referred to as the “sendmail setuid flaw”). With NNP set, a setuid program can’t be run in the first place.

    That’s it! Next I’ll cover v4.6

    Edit: Added notes about “iomem=…”

    © 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
    Creative Commons License

    September 27, 2016

    security things in Linux v4.4

    Previously: v4.3. Continuing with interesting security things in the Linux kernel, here’s v4.4. As before, if you think there’s stuff I missed that should get some attention, please let me know.

    seccomp Checkpoint/Restore-In-Userspace

    Tycho Andersen added a way to extract and restore seccomp filters from running processes via PTRACE_SECCOMP_GET_FILTER under CONFIG_CHECKPOINT_RESTORE. This is a continuation of his work (that I failed to mention in my prior post) from v4.3, which introduced a way to suspend and resume seccomp filters. As I mentioned at the time (and for which he continues to quote me) “this feature gives me the creeps.” :)

    x86 W^X detection

    Stephen Smalley noticed that there was still a range of kernel memory (just past the end of the kernel code itself) that was incorrectly marked writable and executable, defeating the point of CONFIG_DEBUG_RODATA which seeks to eliminate these kinds of memory ranges. He corrected this in v4.3 and added CONFIG_DEBUG_WX in v4.4 which performs a scan of memory at boot time and yells loudly if unexpected memory protections are found. To nobody’s delight, it was shortly discovered that UEFI leaves chunks of memory in this state too, which posed an ugly-to-solve problem (which Matt Fleming addressed in v4.6).

    x86_64 vsyscall CONFIG

    I introduced a way to control the mode of the x86_64 vsyscall with a build-time CONFIG selection, though the choice I really care about is CONFIG_LEGACY_VSYSCALL_NONE, to force the vsyscall memory region off by default. The vsyscall memory region was always mapped into process memory at a fixed location, and it originally posed a security risk as a ROP gadget execution target. The vsyscall emulation mode was added to mitigate the problem, but it still left fixed-position static memory content in all processes, which could still pose a security risk. The good news is that glibc since version 2.15 doesn’t need vsyscall at all, so it can just be removed entirely. Anyone running a kernel built this way who discovered they needed to support a pre-2.15 glibc could still re-enable it at the kernel command line with “vsyscall=emulate”.

    That’s it for v4.4. Tune in tomorrow for v4.5!

    © 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
    Creative Commons License

    September 26, 2016

    Getting maps of São Paulo

    At last year's FISL we heard a very interesting talk about geosampa. Shortly afterwards the site was already up and running, and having just taken another look now, it is becoming impressive. Basically, it is a site maintained by the São Paulo city government that provides, openly and free of charge, all kinds of maps of the city. See...

    security things in Linux v4.3

    When I gave my State of the Kernel Self-Protection Project presentation at the 2016 Linux Security Summit, I included some slides covering some quick bullet points on things I found of interest in recent Linux kernel releases. Since there wasn’t a lot of time to talk about them all, I figured I’d make some short blog posts here about the stuff I was paying attention to, along with links to more information. This certainly isn’t everything security-related or generally of interest, but they’re the things I thought needed to be pointed out. If there’s something security-related you think I should cover from v4.3, please mention it in the comments. I’m sure I haven’t caught everything. :)

    A note on timing and context: the momentum for starting the Kernel Self Protection Project got rolling well before it was officially announced on November 5th last year. To that end, I included stuff from v4.3 (which was developed in the months leading up to November) under the umbrella of the project, since the goals of KSPP aren’t unique to the project, nor must they be met only by people explicitly participating in it. Additionally, not everything I think worth mentioning here technically falls under the “kernel self-protection” ideal anyway — some things are just really interesting userspace-facing features.

    So, to that end, here are things I found interesting in v4.3:

    CONFIG_CPU_SW_DOMAIN_PAN

    Russell King implemented this feature for ARM which provides emulated segregation of user-space memory when running in kernel mode, by using the ARM Domain access control feature. This is similar to a combination of Privileged eXecute Never (PXN, in later ARMv7 CPUs) and Privileged Access Never (PAN, coming in future ARMv8.1 CPUs): the kernel cannot execute user-space memory, and cannot read/write user-space memory unless it was explicitly prepared to do so. This stops a huge set of common kernel exploitation methods, where either a malicious executable payload has been built in user-space memory and the kernel was redirected to run it, or where malicious data structures have been built in user-space memory and the kernel was tricked into dereferencing the memory, ultimately leading to a redirection of execution flow.

    This raises the bar for attackers, since they can no longer trivially build code or structures in user-space where they control the memory layout, locations, etc. Instead, an attacker must find areas in kernel memory that are writable (and, in the case of code, executable) and whose location they can discover as well. For an attacker, there are vastly fewer places where this is possible in kernel memory as opposed to user-space memory. And as we continue to reduce the attack surface of the kernel, these opportunities will continue to shrink.

    While hardware support for this kind of segregation exists in s390 (natively separate memory spaces), ARM (PXN and PAN as mentioned above), and very recent x86 (SMEP since Ivy Bridge, SMAP since Broadwell), ARM is the first upstream architecture to provide this emulation for existing hardware. Everyone running ARMv7 CPUs with this kernel feature enabled suddenly gains the protection. Similar emulation protections (PAX_MEMORY_UDEREF) have been available in PaX/Grsecurity for a while, and I’m delighted to see a form of this finally land upstream.

    To test this kernel protection, the ACCESS_USERSPACE and EXEC_USERSPACE triggers for lkdtm have existed since Linux v3.13, when they were introduced in anticipation of the x86 SMEP and SMAP features.

    Ambient Capabilities

    Andy Lutomirski (with Christoph Lameter and Serge Hallyn) implemented a way for processes to pass capabilities across exec() in a sensible manner. Until Ambient Capabilities, any capabilities available to a process would only be passed to a child process if the new executable was correctly marked with filesystem capability bits. This turns out to be a real headache for anyone trying to build an even marginally complex “least privilege” execution environment. The case that Chrome OS ran into was having a network service daemon responsible for calling out to helper tools that would perform various networking operations. Keeping the daemon from running as root while retaining the needed capabilities in its children required conflicting or crazy filesystem capabilities organized across all the binaries in the expected tree of privileged processes. (For example, you may need to set filesystem capabilities on bash!) By being able to explicitly pass capabilities at runtime (instead of based on filesystem markings), this becomes much easier.
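
    Here is a rough sketch of how a daemon might launch a helper with an ambient capability (using libcap for the inheritable set; the ping invocation is just a hypothetical child that wants CAP_NET_RAW):

        #include <stdio.h>
        #include <sys/capability.h>  /* libcap: link with -lcap */
        #include <sys/prctl.h>
        #include <unistd.h>

        #ifndef PR_CAP_AMBIENT
        #define PR_CAP_AMBIENT        47
        #define PR_CAP_AMBIENT_RAISE  2
        #endif

        int main(void)
        {
            /* CAP_NET_RAW must already be in the permitted set, and must
             * be added to the inheritable set, before it can be raised
             * into the ambient set. */
            cap_t caps = cap_get_proc();
            cap_value_t v = CAP_NET_RAW;
            cap_set_flag(caps, CAP_INHERITABLE, 1, &v, CAP_SET);
            if (cap_set_proc(caps))
                perror("cap_set_proc");
            cap_free(caps);

            if (prctl(PR_CAP_AMBIENT, PR_CAP_AMBIENT_RAISE, CAP_NET_RAW, 0, 0))
                perror("prctl");

            /* The helper keeps CAP_NET_RAW across exec(), with no
             * filesystem capability bits needed on the binary. */
            execl("/usr/bin/ping", "ping", "-c", "1", "localhost", (char *)NULL);
            perror("execl");
            return 1;
        }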

    For more details, the commit message is well-written, almost twice as long as the code changes, and contains a test case. If that isn’t enough, there is a self-test available in tools/testing/selftests/capabilities/ too.

    PowerPC and Tile support for seccomp filter

    Michael Ellerman added support for seccomp to PowerPC, and Chris Metcalf added support to Tile. As the seccomp maintainer, I get excited when an architecture adds support, so here we are with two. Also included were updates to the seccomp self-tests (in tools/testing/selftests/seccomp), to help make sure everything continues working correctly.

    That’s it for v4.3. If I missed stuff you found interesting, please let me know! I’m going to try to get more per-version posts out in time to catch up to v4.8, which appears to be tentatively scheduled for release this coming weekend. Next: v4.4.

    © 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

    Unclaimed Alcoholic Beverages

    Dave was reading New Mexico laws regarding a voter guide issue we're researching, and he came across this gem in Section 29-1-14 G of the "Law Enforcement: Peace Officers in General: Unclaimed Property" laws:

    Any alcoholic beverage that has been unclaimed by the true owner, is no longer necessary for use in obtaining a conviction, is not needed for any other public purpose and has been in the possession of a state, county or municipal law enforcement agency for more than ninety days may be destroyed or may be utilized by the scientific laboratory division of the department of health for educational or scientific purposes.

    We can't decide which part is more fun: contemplating what the "other public purposes" might be, or musing on the various "educational or scientific purposes" one might come up with for a three-month-old beverage that's been sitting in the storage locker ... I'm envisioning a room surrounded by locked chain-link, with dusty shelves holding rows of half-full martini and highball glasses.

    Working with terrain in #FreeCAD

    Since I have not much new FreeCAD-related development to show this week, I'll showcase an existing feature that has been around for some time, which is an external workbench named geodata, programmed by the long-time FreeCAD community member and guru Microelly2. That workbench is part of the FreeCAD addons collection, which is a collection of additional...

    September 25, 2016

    Why an open Web is important when sea levels are rising

    Cory Doctorow speaking on episode 221 of the excellent Changelog podcast:

    “[t]here are things that are way more important than [whether the internet should or shouldn’t be free]. There’s fundamental issues of economic justice, there’s climate change, there’s questions of race and gender and gender orientation, that are a lot more urgent than the future of the internet, but […] every one of those fights is going to be won or lost on the internet.”

    September 22, 2016

    Comments about OARS and CSM age ratings

    I’ve had quite a few comments from people stating that using age rating classification values based on American culture is wrong. So far I’ve been using the Common Sense Media research (and various other psychology textbooks) to essentially clean-room implement a content-rating-to-appropriate-age algorithm.

    Whilst I do agree that other cultures have different sensitivities (e.g. smoking in Uganda, references to Nazis in Germany), there doesn’t appear to be much research on the suggested age ratings for different categories in those specific countries. Lots of things are outright banned from sale for various reasons (which the populace may completely ignore), but there don’t seem to be many statistics that back up the various anecdotal statements. For instance, are there any country-specific guidelines that say the age rating for playing a game that involves taking illegal drugs should be 18, rather than the 14 inferred from CSM? Or that the age rating should be 25+ in Saudi Arabia for any game that features drinking alcohol?

    Suggestions (especially references) welcome. Thanks!

    September 21, 2016

    GNOME Software and Age Ratings

    After all the tarballs for GNOME 3.22, the master branch of gnome-software is now open to new features. Along with the usual cleanups and speedups, one new feature I’ve been working on is finally merging the age ratings work.

    [Screenshot of the age ratings display in GNOME Software]

    The age ratings are provided by the upstream-supplied OARS metadata in the AppData file (which can be generated easily online) and then an age classification is generated automatically using the advice from the appropriately-named Common Sense Media group. At the moment I’m not doing any country-specific mapping, although something like this will be required to show appropriate ratings when handling topics like alcohol and drugs.

    At the moment the only applications with ratings in Fedora 26 will be Steam games, but I’ve also emailed any maintainer that includes an <update_contact> email address in the AppData file and also identifies as a game in the desktop categories. If you ship an application with an AppData file and you think you should have an age rating, please use the generator and add the extra few lines to your AppData file. At the moment there’s no requirement for the extra data, although that might be something we introduce just for games in the future.
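
    The extra few lines in question look something like this (a minimal sketch; the OARS attribute IDs and values here are illustrative, not a recommendation for any particular application):

        <content_rating type="oars-1.0">
          <content_attribute id="violence-cartoon">mild</content_attribute>
          <content_attribute id="drugs-alcohol">moderate</content_attribute>
        </content_rating>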

    I don’t think many other applications will need the extra application metadata, but if you know of any adult-only applications (e.g. in Fedora there’s an application for the sole purpose of downloading p0rn) please let me know and I’ll contact the maintainer and ask what they think about the idea. Comments, as always, welcome. Thanks!

    September 20, 2016

    WebKitGTK+ 2.14

    These six months have gone by so fast, and here we are again, excited about the new WebKitGTK+ stable release. This is a release with almost no new API, but with major internal changes that we hope will improve all the applications using WebKitGTK+.

    The threaded compositor

    This is the most important change introduced in WebKitGTK+ 2.14 and what kept us busy for most of this release cycle. The idea is simple: we still render everything in the web process, but the accelerated compositing (all the OpenGL calls) has been moved to a secondary thread, leaving the main thread free to run all the other heavy tasks like layout, JavaScript, etc. The result is a smoother experience in general: since the main thread is no longer busy rendering frames, it can process JavaScript faster, improving responsiveness significantly. For all the details about the threaded compositor, read Yoon’s post here.

    So, the idea is indeed simple, but the implementation required a lot of important changes in the whole graphics stack of WebKitGTK+.

    • Accelerated compositing always enabled: first of all, with the threaded compositor the accelerated mode is always enabled, so we no longer enter/exit accelerated compositing mode when visiting pages depending on whether the contents require acceleration or not. This was the first challenge, because there were several bugs related to accelerated compositing being always enabled, and even missing features, like the web view background colors that didn’t work in accelerated mode.
    • Coordinated Graphics: it was introduced in WebKit when other ports switched to do the compositing in the UI process. We are still doing the compositing in the web process, but being in a different thread also needs coordination between the main thread and the compositing thread. We switched to use coordinated graphics too, but with some modifications for the threaded compositor case. This is the major change in the graphics stack compared to the previous one.
    • Adaptation to the new model: finally we had to adapt to the threaded model, mainly due to the fact that some tasks that were expected to be synchronous before became asynchronous, like resizing the web view.

    This is a big change that we expect will drastically improve the performance of WebKitGTK+, especially on embedded systems with limited resources, but like all big changes it can also introduce new bugs or issues. Please file a bug report if you notice any regression in your application. If you have any problem running WebKitGTK+ on your system or with your GPU drivers, please let us know. It’s still possible to disable the threaded compositor in two different ways. You can use the environment variable WEBKIT_DISABLE_COMPOSITING_MODE at runtime, but this will disable accelerated compositing support, so websites requiring acceleration might not work. To disable the threaded compositor and bring back the previous model you have to recompile WebKitGTK+ with the option ENABLE_THREADED_COMPOSITOR=OFF.

    Wayland

    WebKitGTK+ 2.14 is the first release that we can consider feature complete in Wayland. While previous versions worked in Wayland, there were two important features missing that made it quite annoying to use: accelerated compositing and clipboard support.

    Accelerated compositing

    More and more websites require acceleration to work properly, and it’s now a requirement of the threaded compositor too. WebKitGTK+ has supported accelerated compositing for a long time, but the implementation was specific to X11. The main challenge is compositing in the web process and sending the results to the UI process to be rendered on the actual screen. In X11 we use an offscreen redirected XComposite window to render in the web process, sending the XPixmap ID to the UI process, which renders the window's offscreen contents in the web view and uses the XDamage extension to track the repaints happening in the XWindow. In Wayland we use a nested compositor in the UI process that implements the Wayland surface interface and a private WebKitGTK+ protocol interface to associate surfaces in the UI process with the web pages in the web process. The web process connects to the nested Wayland compositor and creates a new surface for the web page that is used to render accelerated contents. On every swap buffers operation in the web process, the nested compositor in the UI process is automatically notified through the Wayland surface protocol, and new contents are rendered in the web view. The main difference compared to the X11 model is that Wayland uses EGL in both the web and UI processes, so what we have in the UI process in the end is not a bitmap but a GL texture that can be used to render the contents to the screen using the GPU directly. We use gdk_cairo_draw_from_gl() when available to do that, falling back to glReadPixels() and a cairo image surface for older versions of GTK+. This can make a huge difference, especially on embedded devices, so we are considering using the nested Wayland compositor even on X11 in the future, if possible.

    Clipboard

    The WebKitGTK+ clipboard implementation relies on GTK+, and there’s nothing X11-specific in there; however, the clipboard was read/written directly by the web processes. That doesn’t work in Wayland, even though we use GtkClipboard, because Wayland only allows clipboard operations between compositor clients, and web processes are not Wayland clients. This required moving the clipboard handling from the web process to the UI process. Clipboard handling is now centralized in the UI process, and clipboard contents to be read/written are sent to the different WebKit processes using the internal IPC.

    Memory pressure handler

    The WebKit memory pressure handler is a monitor that watches the system memory (not only the memory used by the web engine processes) and tries to release memory under low-memory conditions. This is quite an important feature in embedded devices with memory limitations. This has been supported in WebKitGTK+ for some time, but the implementation is based on cgroups and systemd, which are not available in all systems, and it requires user configuration. So, in practice, nobody was actually using the memory pressure handler. Watching system memory in Linux is a challenge, mainly because /proc/meminfo is not pollable, so you need to poll it manually. In WebKit, there’s a memory pressure handler in every secondary process (Web, Plugin and Network), so waking up every second to read /proc/meminfo from every web process would not be acceptable. This is not a problem when using cgroups, because the kernel interface provides a way to poll an EventFD to be notified when memory usage is critical.

    WebKitGTK+ 2.14 has a new memory monitor, used only when cgroups/systemd is not available or configured, based on polling /proc/meminfo to ensure the memory pressure handler is always available. The monitor lives in the UI process, to ensure there’s only one process doing the polling, and uses a dynamic poll interval based on the last system memory usage, reading and parsing /proc/meminfo in a secondary thread. Once memory usage is critical, all the secondary processes are notified using an EventFD. Using an EventFD for this monitor too is not only more efficient than using a pipe or sending an IPC message, but also lets us keep almost the same implementation in the secondary processes, which monitor either the cgroups EventFD or the UI process one.
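
    The EventFD mechanism at the heart of this is small; here is a minimal single-process sketch of the idea (not WebKit's actual code; in the real design the fd is shared between the UI process and the secondary processes, and the read side blocks in a monitor thread):

        #include <inttypes.h>
        #include <stdio.h>
        #include <sys/eventfd.h>
        #include <unistd.h>

        int main(void)
        {
            /* The monitoring side writes to the eventfd when memory is
             * critical; each secondary process blocks on a read. */
            int efd = eventfd(0, EFD_CLOEXEC);

            /* Monitor side, after deciding memory usage is critical: */
            uint64_t one = 1;
            write(efd, &one, sizeof(one));

            /* Worker side: read() blocks until the counter is non-zero. */
            uint64_t events;
            read(efd, &events, sizeof(events));
            printf("memory pressure: release caches (%" PRIu64 ")\n", events);

            close(efd);
            return 0;
        }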

    Other improvements and bug fixes

    Like in all other major releases there are a lot of other improvements, features and bug fixes. The most relevant ones in WebKitGTK+ 2.14 are:

    • The HTTP disk cache implements speculative revalidation of resources.
    • The media backend now supports video orientation.
    • Several bugs have been fixed in the media backend to prevent deadlocks when playing HLS videos.
    • The amount of file descriptors that are kept open has been drastically reduced.
    • The poor performance with the modesetting Intel driver and DRI3 enabled has been fixed.

    Frogs on the Rio, and Other Amusements

    Saturday, a friend led a group hike for the nature center from the Caja del Rio down to the Rio Grande.

    The Caja (literally "box", referring to the depth of White Rock Canyon) is an area of national forest land west of Santa Fe, just across the river from Bandelier and White Rock. Getting there involves a lot of driving: first to Santa Fe, then out along increasingly dicey dirt roads until the road looks too daunting and it's time to get out and walk.

    [Dave climbs the Frijoles Overlook trail] From where we stopped, it was only about a six mile hike, but the climb out is about 1100 feet and the day was unexpectedly hot and sunny (a mixed blessing: if it had been rainy, our Rav4 might have gotten stuck in mud on the way out). So it was a notable hike. But well worth it: the views of Frijoles Canyon (in Bandelier) were spectacular. We could see the lower Bandelier Falls, which I've never seen before, since Bandelier's Falls Trail washed out below the upper falls the summer before we moved here. Dave was convinced he could see the upper falls too, but no one else was convinced, though we could definitely see the red wall of the maar volcano in the canyon just below the upper falls.

    [Canyon Tree Frog on the Rio Grande] We had lunch in a little grassy thicket by the Rio Grande, and we even saw a few little frogs, well camouflaged against the dirt: you could even see how their darker brown spots imitated the pebbles in the sand, and we wouldn't have had a chance of spotting them if they hadn't hopped. I believe these were canyon treefrogs (Hyla arenicolor). It's always nice to see frogs -- they're not as common as they used to be. We've heard canyon treefrogs at home a few times on rainy evenings: they make a loud, strange ratcheting noise which I managed to record on my digital camera. Of course, at noon on the Rio the frogs weren't making any noise: just hanging around looking cute.

    [Chick Keller shows a burdock leaf] Sunday we drove around the Pojoaque Valley following their art tour, then after coming home I worked on setting up a new sandblaster to help with making my own art. The hardest and least fun part of welded art is cleaning the metal of rust and paint, so it's exciting to finally have a sandblaster to help with odd-shaped pieces like chains.

    Then tonight was a flower walk in Pajarito Canyon, which is bursting at the seams with flowers, especially purple aster, goldeneye, Hooker's evening primrose and bahia. Now I'll sign off so I can catalog my flower photos before I forget what's what.

    September 18, 2016

    #FreeCAD news and Arch workflow

    So, let's continue to post more often about FreeCAD. I'm beginning to organize a bit better, gathering screenshots and ideas during the week, so I'll try to keep this going. This week has seen many improvements, especially because we've been doing intense FreeCAD work with OpeningDesign. Like every time you make intense use of FreeCAD or...

    September 14, 2016

    LVFS and ODRS are down

    The LVFS firmware server and ODRS reviews server are down because my credit card registered with OpenShift expired. I’ve updated my credit card details, paid the pending invoice and still can’t start any server. I rang customer service, who asked me to send an email, and have heard nothing back.

    [screenshot]

    I have backups a few days old, but this whole situation is terrible on so many levels.

    EDIT: cdaley has got everything back working again; it appears I found a corner case in the code that deals with payments.

    bicycle, node, network, design

    This Monday ignite berlin took place and I did a fun, five-minute pecha kucha talk that also contained some systems analysis and a design insight. For a full transcript, read on.

    ring ring

    There are two things that you need to know about me. The first is that I am dutch and the second is that I am becoming a sentimental old fool. I combine the two when I do cycling holidays in Holland:

    cycling in the dutch fields my partner Carmen leads the way

    For this we use the fietsroutenetwerk, the bicycle route network of the Netherlands. This was designed for recreational cycling in the countryside. It was rolled out between 2003 and 2012. The network is point‐to‑point:

    two points connected by a line, arrows pointing both ways

    Between two neighbouring nodes there is complete signage—with no gaps—to get you from one to the other. And this in both directions. Here are some of these signs:

    several roadside routing signposts sources: fietsen op de fiets, het groene woud, gps.nl

    The implicit promise is that these are nice routes. That means: away from cars as much as possible. And scenic—through fields, heath and forest.

    Using the nodes, local networks have been designed and built:

    a network of nodes on a local map

    These networks are purely infrastructural; there is no preconception of what is ‘proper’ or ‘typical’ usage. They accommodate routes of any shape and any length.

    At every node, one finds a local map, with the network:

    on-location display of the local map source: wikimedia commons

    It can be used for planning, reference and simply reassurance. Besides that, there are old‑fashioned maps and plenty of apps and websites for planning and sharing of routes.

    The local networks were knitted together to form a national network:

    a dense network covers the whole country

    Looking at this map I see interesting differences in patterns and densities. I don’t think this only reflects the geography, but also the character of the locals; what they consider proper cycling infrastructure and scenic routes.

    The network was not always nation-wide. It was rolled out over a period of nine years, one local network at a time. I still remember crossing a province border and (screech!) there was no more network. It was back to old‑fashioned map reading and finding the third street on the left.

    not invented here

    I was shocked to find out that the Dutch did not invent this network system. We have to go back to the 1980s, north‐east Belgium: all the coal mines are closing. Mining engineer Hugo Bollen proposes to create a recreational cycling network, in order to initiate economic regeneration of the region. Here’s Hugo:

    Hugo Bollen rides a bike in nature source: toerisme limburg

    He designed the network rules, which are explained in this blog post. The Belgians actually had to build(!) all of the cycling infrastructure, so it took them until 1995 to open the first local network. It now brings in 16.5 million Euro a year to the region.

    how many?

    I got curious about the total number of network nodes in Holland. I could not find this number on the internet. The net is really quite short on stats and data about the cycling network. So I needed to find out for myself. What I did was take one of my maps—

    a traditional cycling map that covers a part of holland

    And I counted all the nodes—there were 309. I multiplied this by the number of maps that cover all of Holland. Then I took 75% of that number to deal with map overlaps and my own over‐enthusiasm. The result: I estimate that the dutch network consists of 9270 nodes.

    in awe

    The reason I got curious about that number is that every time I use the network, I am impressed by a real‐genius design decision (and I don’t get to say that very often). It makes all the difference when using the network in anger.

    All these nearly‐ten thousand nodes are identified by a two‑digit number. Not the four (or more future‐proof, five) one would expect. All the nodes are simply numbered 1 through 99, and then they start at one again. And shorter is much better:

    cycling route signage with direction for node 02 source: recreatieschap westfriesland

    Two digits is much faster to read and write down. It is easier to memorise, short‐term. It is instant to compare and confirm. Remember, most of these actions are performed while riding a bike at a nice cruising speed.

    but…

    Pushing through this two‑digit design must have been asking for trouble. Most of us can just imagine the bike‐shedding: ‘what if cyclists really need to be able to uniquely identify a node in the whole nation?’ Or: ‘will cyclists get confused by these repeating numbers?’

    This older cycling signpost system has a five‑digit identification number:

    a cycling signpost showing directions to nearby villages and towns source: dirk de baan

    This number takes several steps to process. Two‑digit numbers are humane numbers. They exploit the fact that way‐finding is a very local activity—although one can cover 130km a day on a bike.

    whatchamacallit?

    Wrapping up, the cycling network is a distributed network:

    three graphs: a centralised, a decentralised and a distributed network source: j4n

    All nodes are equal and so are all routes. Cyclists route themselves. In that way the network works quite like… the internet.

    We could call it the democratic network, because it treats everyone as equals. Or we could call it the liberal network (that would be very dutch). Or—in a post‐modern way—we could call it the atomised network.

    I simply call it the bicycle route network of the Netherlands.

    a vista over dutch fields with a calf and two cyclists

    September 12, 2016

    Art on display at the Bandelier Visitor Center

    As part of the advertising for next month's Los Alamos Artists Studio Tour (October 15 & 16), the Bandelier Visitor Center in White Rock has a display case set up, and I have two pieces in it.

    [my art on display at Bandelier]

    The Velociraptor on the left and the hummingbird at right in front of the sweater are mine. (Sorry about the reflections in the photo -- the light in the Visitor Center is tricky.)

    The turtle at front center is my mentor David Trujillo's, and I'm pretty sure the rabbit at far left is from Richard Swenson.

    The lemurs just right of center are some of Heather Ward's fabulous scratchboard work. You may think of scratchboard as a kids' toy (I know I used to), but Heather turns it into an amazing medium for wildlife art. I'm lucky enough to get to share her studio for the art tour: we didn't have a critical mass of artists in White Rock, just two of us, so we're borrowing space in Los Alamos for the tour.

    September 09, 2016

    Click Hooks

    After being asked about what I like about Click hooks I thought it would be nice to write up a little bit of the why behind them in a blog post. The precursor to this story is that I told Colin Watson that he was wrong to build hooks like this; he kindly corrected me and helped me fix my code to match, but I still wasn't convinced. Now today I see some of the wisdom in the Click hook design and I'm happy to share it.

    The standard way to think about hooks is as a way to react to changes in the system. If a new application is installed, then the hook gets information about the application and responds to the new data. This is how most libraries work, providing signals about the data they maintain, and we apply that same logic when thinking about filesystem hooks. But filesystem hooks are different, because the coherent state is harder to query. In your library you might respond to the signal for a few things, but in many code paths the chances are you'll just go through the list of original objects to do operations. With filesystem hooks that complete state is almost never used; only the caches that are created by the hooks themselves are.

    Click hooks work by creating a directory of symbolic links that matches the current state of the system, and then asking you to ensure your cache matches that state of the system. This seems inefficient, because you have to determine which parts of your cache need to change, which get removed and which get added. But it results in better software, because your software, including your hooks, has errors in it. I'm sorry to be the first one to tell you, but there are bugs. If your software is 99% correct, there is still something it is doing wrong. When you have delta updates that update the cache, that error compounds and never gets completely corrected with each update, because the complete state is never examined. So slowly the quality of your cache gets worse; not awful, but worse. By transferring the current system state to the cache each time, you get the error rate of your software in the cache, but you don't get the compounded error rate of each delta. This adds up.
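
    As a sketch of the pattern in C (the cache helpers and the hook directory path are hypothetical stand-ins for whatever your hook actually maintains):

        #include <dirent.h>
        #include <stdio.h>

        /* Hypothetical cache helpers: whatever your hook maintains. */
        static void clear_cache(void) { /* drop all previous entries */ }
        static void add_cache_entry(const char *dir, const char *name)
        {
            printf("caching %s/%s\n", dir, name);
        }

        /* Rebuild the cache from the complete current state: walk the
         * directory of symlinks and regenerate every entry, rather than
         * applying a delta for just the application that changed. */
        static void rebuild_cache(const char *hookdir)
        {
            DIR *dir = opendir(hookdir);
            struct dirent *ent;

            if (!dir)
                return;
            clear_cache();
            while ((ent = readdir(dir)) != NULL) {
                if (ent->d_name[0] == '.')
                    continue;
                add_cache_entry(hookdir, ent->d_name);
            }
            closedir(dir);
        }

        int main(void)
        {
            rebuild_cache("/path/to/hook/symlinks");
            return 0;
        }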

    The design of the hooks system in Click might feel wrong as you start to implement one, but I think that after you create a few hooks you'll find there is wisdom in it. And as you use other hook systems on other platforms, think about checking the system state to ensure you're always creating the best cache possible, even if the hook system there didn't force you to do it.