November 28, 2016

A Masashi Wakui look with GIMP

A color bloom fit for night urban landscapes

This tutorial explains how to achieve an effect based on the post-processing of photographer Masashi Wakui. His primary subjects are urban landscape views of Japan, where he uses some pretty aggressive color toning to complement his scenes, along with a soft ‘bloom’ effect on the highlights. The results evoke a strong feeling of an almost cyberpunk or futuristic aesthetic (particularly for fans of Blade Runner or Akira!).


This tutorial started its life in the pixls.us forum, inspired by a forum post seeking assistance with replicating the color grading and overall look and feel of Masashi’s photography.

Prerequisites

To follow along, you will need a couple of plugins for GIMP.

The Luminosity Mask filter will be used to target color grading to specific tones. You can find out more about luminosity masks in GIMP at Pat David’s blog post and his follow-up blog post. If you need to install the script, directions can be found (along with the scripts) at the PIXLS.US GIMP scripts git repository.

You will also need the Wavelet decompose plugin. The easiest way to get this plugin is to use the one available in G’MIC. As a bonus you’ll get access to many other incredible filters as well! Once you’ve installed G’MIC the filter can be found under
Details → Split details [wavelets].

We will do some basic toning and then apply the wavelet decompose filter to do some magic. Two things will be used from the wavelet decompose results:

  • the residual
  • the coarsest wavelet scale (number 8 in this case)

The basic idea is to use the residual of the wavelet decompose filter to color the image. What this does is average and blur the colors. The trick strengthens the effect of the surroundings being colored by the lights. The number of wavelet scales to use depends on the pixel size of the picture; the relative size of the coarsest wavelet scale compared to the picture is the defining parameter. The wavelet scale 8 will then produce overemphasised local contrasts, which will accentuate the lights further. This works nicely in pictures with lights, as the brightest areas will be around the lights. Used on a daytime picture this effect will also accentuate brighter areas, which will lead to a kind of “glow” effect. I tried this as well and it does look good on some pictures while on others it looks just wrong. Try it!

We will be applying all the following steps to this picture, taken in Akihabara, Tokyo.

The starting image (download full resolution).
  1. Apply the luminosity mask filter to the base picture. We will use this later.

    Filters → Generic → Luminosity Masks

  2. Duplicate the base picture (Ctrl+Shift+D).

    Layer → Duplicate Layer

  3. Tone the shadows of the duplicated picture using the tone curve by lowering the reds in the shadows. If you want your shadows to be less green, slightly raise the blues in the shadows.

    Colors → Curves

    The toning curves
    The photograph with the toning curve applied
  4. Apply a layer mask to the duplicated and toned picture. Initialize the mask from a channel, choosing the DD luminosity mask.

    Layer → Mask → Add Layer Mask

    Luminosity Mask Added
  5. With both layers visible, create a new layer from what is visible. Call this layer the “blended” layer.

    Layer → New from Visible

    The photograph after the blended layer
  6. Apply the wavelet decompose filter to the “blended” layer and choose 9 as the number of detail scales. Set the G’MIC output mode to “New layer(s)” (see below).

    Filters → G’MIC
    Details → Split Details [wavelets]

    The G’MIC Split Details (wavelet decompose) dialog. Remember to set G’MIC to output the results on New Layer(s).
  7. Make the blended and blended [residual] layers visible. Then set the mode of the blended [residual] layer to color. This will give you a picture with averaged, blurred colors.

    The fully colored photograph
  8. Turn the opacity of the blended [residual] down to 70%, or any other value to your taste, to bring back some color detail.

    The partially colored photograph
  9. Turn on the blended [scale #8] layer, set the mode to grain merge, and see how the lights start shining. Adjust opacity to taste.

    The augmented contrast layer
  10. Optional: Turn the wavelet scale 3 (or any other) on to sharpen the picture and blend to taste.

  11. Make sure the following layers are visible:

    • blended
    • residual
    • wavelet scale 8
    • Any other wavelet scale you want to use for sharpening
  12. Make a new layer from visible

    Layer → New from Visible

  13. Raise and slightly crush the shadows using the tone curve.

    Raise the shadow curve
  14. Optional: Adjust saturation to taste. If there are predominantly white lights and the colors come mainly from other objects, the residual will be washed out, as is the case with this picture.

    I noticed that the reds and yellows were very dominant compared to the greens and blues. So, using the Hue-Saturation dialog, I raised the master saturation by 70, lowered the yellow saturation by 50, and lowered the red saturation by 40, all with an overlap of 60.

The final result:

The final result. (Click to compare to original.)
Download the full size result.

Linux communities, we need your help!

There are a lot of Linux communities all over the globe filled with really nice people who just want to help others. Typically these people either can’t code or don’t feel comfortable doing so, and I’d love to harness some of that potential by adding a huge number of new application reviews to the ODRS. At the moment we have about 1100 reviews, mostly covering the more popular applications, and also mostly written in English.

What I would love is for a few groups of people to come together for their next LUG/outreach/InstallFest and sit down together somewhere cozy and write a few reviews. Bonus points if you use a less-well-known application, and even more points if you can write in a language other than English. Submitting a review is easy; just open up GNOME Software, find the application, and click ‘Write a Review’ at the bottom of the page.

Application reviews help new users decide what to install, and the star ratings you give mean we can return useful search results full of great applications. Please write an email, ask about helping the ODRS, and perhaps you can help a lot of new users next time you meet with your Linuxy friends.

Thanks!

November 26, 2016

FreeCAD Arch development news

It has been quite some time since I last wrote about Arch development, so here is a little overview of what's been going on during the last weeks. As always, I'll be describing mostly what I've been doing myself, but many other people are working very actively on FreeCAD too, so much more is going on. The best...

November 24, 2016

Watching org.libelektra with Qt

libelektra is a configuration library and tool set. It provides many capabilities. Here I’d like to show how to observe data model changes from key/value manipulations outside of the actual application, inside a user desktop. libelektra broadcasts changes as D-Bus messages. The Oyranos project will use this method to sync the settings views of GUIs, like qcmsevents, Synnefo and KDE’s KolorManager, with libOyranos and its CLI tools in the next release.

Here is a small example of connecting the org.libelektra interface, via the QDBusConnection class, to a class callback function:

Declare a callback function in your Qt class header:

public slots:
 void configChanged( QString msg );

Add the QtDBus API in your sources:

#include <QtDBus/QtDBus>

Wire the org.libelektra interface to your callback in, e.g., your Qt class’s constructor:

if( QDBusConnection::sessionBus().connect( QString(), "/org/libelektra/configuration", "org.libelektra", QString(),
 this, SLOT( configChanged( QString ) )) )
 fprintf(stderr, "=================== Done connect\n" );

The org.libelektra signals then arrive in your callback:

void Synnefo::configChanged( QString msg )
{
 fprintf( stdout, "config changed: %s\n", msg.toLocal8Bit().data() );
};

As the number of messages is not always known, it is useful to treat the first message as a ping and update after a small timeout. Here is a more practical, elaborated example:

// init a gate keeper in the class constructor:
acceptDBusUpdate = true;

void Synnefo::configChanged( QString msg )
{
  // allow the first message to ping
  if(acceptDBusUpdate == false) return;
  // block more messages
  acceptDBusUpdate = false;

  // update the view slightly later and avoid trouble
  QTimer::singleShot(250, this, SLOT( update() ));
};

void Synnefo::update()
{
  // clear the Oyranos settings cache (Oyranos CMS specific)
  oyGetPersistentStrings( NULL );

  // the data model reading from libelektra and GUI update
  // code ...

  // open the door for more messages to come
  acceptDBusUpdate = true;
}

The above code works for both Qt4 and Qt5.

String freeze for the upcoming 2.2 series

This is a call for all our translators: now is the time to bring your .po file in the master branch up to date. We will not ship any translation that is not relatively complete; the exact threshold is still to be determined.

As a quick reminder, these are the steps to update the translation if you are working from git. language_code is not the whole filename of the .po file but just the first part of it. For example, for Italian the language code is it while the filename is it.po. You also have to compile darktable before updating your .po file, as some of the translated files are auto-generated.

cd /path/to/your/darktable/checkout/
git checkout master
git pull
./build.sh
cd po/
intltool-update <language_code>
<edit language_code.po>

If you don't have a build environment set up to compile darktable you can also use this .pot file.

November 23, 2016

darktable 2.2.0rc1 released

we're proud to announce the second release candidate of darktable 2.2.0, with some fixes over the previous release candidate. the most important one might be bringing back read support for very old xmp files (~4 years).

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.0rc1.

as always, please don't use the tarball autogenerated by github, but only our .tar.xz with the following sha256sum:

0612163b0020bc3326909f6d7f7cbd8cfb5cff59b8e0ed1a9e2a2aa17d8f308e  darktable-2.2.0~rc1.tar.xz

the changelog vs. the stable 2.0.x series is below:

  • Well over 2k commits since 2.0.0

The Big Ones:

Quite Interesting Changes:

  • Split the database into a library containing images and a general one with styles, presets and tags. That allows having access to those when for example running with a :memory: library
  • Support running on platforms other than x86 (64bit little-endian, currently ARM64 only) (https://www.darktable.org/2016/04/running-on-non-x86-platforms/)
  • darktable is now happy to use smaller stack sizes (no less than 256Kb). That should allow using musl libc
  • Allow darktable-cli to work on directories
  • Allow to import/export tags from Lightroom keyword files
  • Allow using modifier keys to modify the step for sliders and curves. Defaults: Ctrl - x0.1; Shift - x10
  • Allow using the [keyboard] cursor keys to interact with sliders, comboboxes and curves; modifiers apply too
  • Support presets in "more modules" so you can quickly switch between your favorite sets of modules shown in the GUI
  • Add range operator and date compare to the collection module
  • Add basic undo/redo support for the darkroom (masks are not accounted !)
  • Support the Exif date and time when importing photos from camera
  • Input color profile module, when profile is just matrix (and linear curve), is 1/3 faster now.
  • Rudimentary CYGM and RGBE color filter array support
  • Nicer web gallery exporter -- now touch friendly!
  • OpenCL implementation of VNG/VNG4 demosaicing methods
  • OpenCL implementation of Markesteijn demosaicing method for X-Trans sensors
  • Filter-out some useless EXIF tags when exporting, helps keep EXIF size under ~64Kb
  • OpenCL: properly discard CPU-based OpenCL devices. Fixes crashes on startup with some partially-working OpenCL implementations like pocl.
  • darktable-cli: do not even try to open display, we don't need it.
  • Hotpixels module: make it actually work for X-Trans

Some More Changes, Probably Not Complete:

  • Drop darktable-viewer tool in favor of slideshow view
  • Remove gnome keyring password backend, use libsecret instead
  • When using libsecret to store passwords then put them into the correct collection
  • Hint via window manager when import/export is done
  • Quick tagging searches anywhere, not just at the start of tags
  • The sidecar XMP schema for history entries is now more consistent and less error prone
  • Rawspeed: fixes for building with libjpeg (as opposed to libjpeg-turbo)
  • Give the choice of equidistant and proportional feathering when using elliptical masks (shift+click)
  • Add geolocation to watermark variables
  • Fix some crashes with missing configured ICC profiles
  • Support greyscale color profiles
  • OSX: add trash support (thanks to Michael Kefeder for initial patch)
  • Attach Xmp data to EXR files
  • Several fixes for HighDPI displays
  • Use Pango for text layout, thus supporting RTL languages
  • Feathering size in some mask shapes can be set with shift+scroll
  • Many bugs got fixed and some memory leaks plugged
  • The usermanual was updated to reflect the changes in the 2.2 series

Changed Dependencies:

  • CMake 3.0 is now required.
  • In order to compile darktable you now need at least gcc-4.7+/clang-3.3+, but better use gcc-5.0+
  • Drop support for OS X 10.6
  • Bump required libexiv2 version up to 0.24
  • Bump GTK+ requirement to gtk-3.14. (because even debian stable has it)
  • Bump GLib requirement to glib-2.40.
  • Port to OpenJPEG2
  • SDL is no longer needed.

A special note to all the darktable Fedora users: Fedora-provided darktable packages are intentionally built with Lua disabled. Thus, Lua scripting will not work. This breaks e.g. darktable-gimp integration. Please bug Fedora. In the meantime you could fix that by self-compiling darktable (pass -DDONT_USE_INTERNAL_LUA=OFF to cmake in order to enable use of the bundled Lua 5.2.4).

Base Support

  • Canon EOS-1D X Mark II
  • Canon EOS 5D Mark IV
  • Canon EOS 80D
  • Canon EOS 1300D
  • Canon EOS Kiss X80
  • Canon EOS Rebel T6
  • Canon EOS M10
  • Canon PowerShot A720 IS (dng)
  • Canon PowerShot G7 X Mark II
  • Canon PowerShot G9 X
  • Canon PowerShot SD450 (dng)
  • Canon PowerShot SX130 IS (dng)
  • Canon PowerShot SX260 HS (dng)
  • Canon PowerShot SX510 HS (dng)
  • Fujifilm FinePix S100FS
  • Fujifilm X-Pro2
  • Fujifilm X-T2
  • Fujifilm X70
  • Fujifilm XQ2
  • GITUP GIT2 (chdk-a, chdk-b)
  • (most nikon cameras here are just fixes, and they were supported before already)
  • Nikon 1 AW1 (12bit-compressed)
  • Nikon 1 J1 (12bit-compressed)
  • Nikon 1 J2 (12bit-compressed)
  • Nikon 1 J3 (12bit-compressed)
  • Nikon 1 J4 (12bit-compressed)
  • Nikon 1 J5 (12bit-compressed, 12bit-uncompressed)
  • Nikon 1 S1 (12bit-compressed)
  • Nikon 1 S2 (12bit-compressed)
  • Nikon 1 V1 (12bit-compressed)
  • Nikon 1 V2 (12bit-compressed)
  • Nikon 1 V3 (12bit-compressed, 12bit-uncompressed)
  • Nikon Coolpix A (14bit-compressed)
  • Nikon Coolpix P330 (12bit-compressed)
  • Nikon Coolpix P340 (12bit-compressed, 12bit-uncompressed)
  • Nikon Coolpix P6000 (12bit-uncompressed)
  • Nikon Coolpix P7000 (12bit-uncompressed)
  • Nikon Coolpix P7100 (12bit-uncompressed)
  • Nikon Coolpix P7700 (12bit-compressed)
  • Nikon Coolpix P7800 (12bit-compressed)
  • Nikon D1 (12bit-uncompressed)
  • Nikon D100 (12bit-compressed, 12bit-uncompressed)
  • Nikon D1H (12bit-compressed, 12bit-uncompressed)
  • Nikon D1X (12bit-compressed, 12bit-uncompressed)
  • Nikon D200 (12bit-compressed, 12bit-uncompressed)
  • Nikon D2H (12bit-compressed, 12bit-uncompressed)
  • Nikon D2Hs (12bit-compressed, 12bit-uncompressed)
  • Nikon D2X (12bit-compressed, 12bit-uncompressed)
  • Nikon D3 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D300 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3000 (12bit-compressed)
  • Nikon D300S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3100 (12bit-compressed)
  • Nikon D3200 (12bit-compressed)
  • Nikon D3300 (12bit-compressed, 12bit-uncompressed)
  • Nikon D3400 (12bit-compressed)
  • Nikon D3S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3X (14bit-compressed, 14bit-uncompressed)
  • Nikon D4 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D40 (12bit-compressed, 12bit-uncompressed)
  • Nikon D40X (12bit-compressed, 12bit-uncompressed)
  • Nikon D4S (14bit-compressed)
  • Nikon D5 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D50 (12bit-compressed)
  • Nikon D500 (14bit-compressed, 12bit-compressed)
  • Nikon D5000 (12bit-compressed, 12bit-uncompressed)
  • Nikon D5100 (14bit-compressed, 14bit-uncompressed)
  • Nikon D5200 (14bit-compressed)
  • Nikon D5300 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
  • Nikon D5500 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
  • Nikon D60 (12bit-compressed, 12bit-uncompressed)
  • Nikon D600 (14bit-compressed, 12bit-compressed)
  • Nikon D610 (14bit-compressed, 12bit-compressed)
  • Nikon D70 (12bit-compressed)
  • Nikon D700 (12bit-compressed, 12bit-uncompressed, 14bit-compressed)
  • Nikon D7000 (14bit-compressed, 12bit-compressed)
  • Nikon D70s (12bit-compressed)
  • Nikon D7100 (14bit-compressed, 12bit-compressed)
  • Nikon D80 (12bit-compressed, 12bit-uncompressed)
  • Nikon D800 (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D800E (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D90 (12bit-compressed, 12bit-uncompressed)
  • Nikon Df (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon E5400 (12bit-uncompressed)
  • Nikon E5700 (12bit-uncompressed)
  • Olympus PEN-F
  • OnePlus One (dng)
  • Panasonic DMC-FZ150 (1:1, 16:9)
  • Panasonic DMC-FZ18 (16:9, 3:2)
  • Panasonic DMC-FZ300 (4:3)
  • Panasonic DMC-FZ50 (16:9, 3:2)
  • Panasonic DMC-G8 (4:3)
  • Panasonic DMC-G80 (4:3)
  • Panasonic DMC-GX80 (4:3)
  • Panasonic DMC-GX85 (4:3)
  • Panasonic DMC-LX3 (1:1)
  • Panasonic DMC-LX10 (3:2)
  • Panasonic DMC-LX15 (3:2)
  • Panasonic DMC-LX9 (3:2)
  • Pentax K-1
  • Pentax K-70
  • Samsung GX20 (dng)
  • Sony DSC-F828
  • Sony DSC-RX10M3
  • Sony DSLR-A380
  • Sony ILCA-68
  • Sony ILCE-6300

We were unable to bring back these 3 cameras, because we have no samples.
If anyone reading this owns such a camera, please do consider providing samples.

  • Nikon E8400
  • Nikon E8800
  • Nikon D3X (12-bit)

White Balance Presets

  • Canon EOS 1200D
  • Canon EOS Kiss X70
  • Canon EOS Rebel T5
  • Canon EOS 1300D
  • Canon EOS Kiss X80
  • Canon EOS Rebel T6
  • Canon EOS 5D Mark IV
  • Canon EOS 5DS
  • Canon EOS 5DS R
  • Canon EOS 750D
  • Canon EOS Kiss X8i
  • Canon EOS Rebel T6i
  • Canon EOS 760D
  • Canon EOS 8000D
  • Canon EOS Rebel T6s
  • Canon EOS 80D
  • Canon EOS M10
  • Canon EOS-1D X Mark II
  • Canon PowerShot G7 X Mark II
  • Fujifilm X-Pro2
  • Fujifilm X-T2
  • Fujifilm X-T10
  • Fujifilm X100T
  • Fujifilm X20
  • Fujifilm X70
  • Nikon 1 V3
  • Nikon D5500
  • Olympus PEN-F
  • Pentax K-1
  • Pentax K-70
  • Pentax K-S1
  • Pentax K-S2
  • Sony ILCA-68
  • Sony ILCE-6300

Noise Profiles

  • Canon EOS 5DS R
  • Canon EOS 80D
  • Canon PowerShot G15
  • Canon PowerShot S100
  • Canon PowerShot SX50 HS
  • Fujifilm X-T10
  • Fujifilm X-T2
  • Fujifilm X100T
  • Fujifilm X20
  • Fujifilm X70
  • Nikon 1 V3
  • Nikon D5500
  • Olympus E-PL6
  • Olympus PEN-F
  • Panasonic DMC-FZ1000
  • Panasonic DMC-GF7
  • Pentax K-S2
  • Ricoh GR
  • Sony DSLR-A900
  • Sony DSC-RX10
  • Sony SLT-A37

New Translations

  • Hebrew
  • Slovenian

Updated Translations

  • Catalan
  • Czech
  • Danish
  • Dutch
  • French
  • German
  • Hungarian
  • Russian
  • Slovak
  • Spanish
  • Swedish

November 22, 2016

Giving Thanks

For an awesome community!

Here in the U.S., we have a big holiday coming up this week: Thanksgiving. Serendipitously, this holiday also happens to fall when a few neat things are happening around the community, and what better time is there to recognize some folks and to give thanks of our own? No time like the present!

A Special Thanks

I feel a special “Thank You” should first go to a photographer and fantastic supporter of the community, Dimitrios Psychogios. Last year, for our trip to Libre Graphics Meeting in London, he stepped up with an awesome donation to help us bring some fun folks together.

The LGM 2016 dinner. Fun folks together.
Mairi, the darktable nerds, a RawTherapee nerd, and a PhotoFlow nerd.
(and the nerd taking the photo, patdavid)

This year he was incredibly kind by offering a donation to the community (completely unsolicited) that covers our hosting and infrastructure costs for an entire year! So on behalf of the community, Thank You for your support, Dimitrios!

I’ll be creating a page soon that will list our supporters as a means of showing our gratitude. Speaking of supporters and a new page on the site…

A Support Page

Someone had asked in a post about the possibility of donating to the community. We were talking about providing support in darktable for using a MIDI controller deck, and the costs for some of the options weren’t too extravagant. This got us thinking that enough small donations could probably cover something like this pretty easily, and if it was community hardware, we could make sure it got passed around to each of the projects interested in creating support for it.

The KORG NanoControl2, an example MIDI controller that we might get support
for in darktable and other projects.

That conversation had me thinking about ways to allow folks to support the community. In particular, ways to make it easy to provide support on an on-going basis if possible (in addition to simple, single donations). There are goal-oriented options out there that folks are probably already familiar with (Kickstarter, Indiegogo and others) but the model for us is less goal-oriented and more about continuous support.

Patreon was an option as well (and I already had a skeleton Patreon account set up), but the fees were just too much in the end. They wanted a flat 5% along with the regular PayPal fees. The general consensus among the staff was that we wanted to maximize the funds getting to the community.

The best option in the end was to create a merchant account on PayPal and manually set up the various payment options. I’ve set them up similarly to how a service like Patreon might run, with four different recurring funding levels and an option for a single one-time payment of whatever a user would like. Recurring levels are nice because they make it easier to plan.

We’re Not Asking

Our requirements for the infrastructure of the site are modest and we haven’t actively pursued support or donations for the site before. That hasn’t changed.

We’re not asking for support now. The best way that someone can help the community is by being an active part of it.

Engaging others, sharing what you’ve done or learned, and helping other users out wherever you can. This is the best way to support the community.

I purposely didn’t talk about funding before because I don’t want folks to have to worry or think about it. And before you ask: no, we are not and will not run any advertising on the site. I’d honestly rather just keep paying for things out of my pocket instead.

We’re not asking for support, but we’ll accept it.

With that being said, I understand that there’s still some folks that would like to contribute to the infrastructure or help us to get hardware to add support in projects and more. So if you do want to contribute, the page for doing so can be found here:

https://pixls.us/support

There are four recurring funding levels of $1, $3, $5, and $10 per month. There is also a one-time contribution option as well.

We also have an Amazon Affiliate link option. If you’re not familiar with it, you simply click the link to go to Amazon.com. Then anything you buy for the next 24 hours will give us some small percentage of your purchase price. It doesn’t affect the price of what you’re buying at all. So if you were going to purchase something from Amazon anyway, and don’t mind - then by all means use our link first to help out!


1000 Users

This week we also finally hit 1,000 users registered on discuss! Which is just bananas to me. I am super thankful for each and every member of the community who has taken the time to participate, share, and generally make catching up on what’s been going on one of the better parts of my day. You all rock!

While we’re talking about a number “1” with a bunch of zeros after it, we recently made some neat improvements to the forums…

100 Megabytes

We are a photography community, and it seemed stupid to have to restrict users from uploading full-quality images or raw files. Previously it was a concern because the server the forums are hosted on has limited disk space (40GB). Luckily, Discourse has an option for storing all uploads to the forum in Amazon S3 buckets.

I went ahead and created some S3 buckets so that any uploads to the forums will now be hosted on Amazon instead of taking up precious space on the server. The costs are quite reasonable (around $0.30/GB right now), and it also means that I’ve been able to bump the upload size to 100MB for forum posts! You can now just drag and drop full resolution raw files directly into the post editor to include the file!

A 70MB GIMP .xcf file? Just drag-and-drop to upload, no problem! :)

Travis CI Automation

On a slightly geekier note, did you know that the code for the entire website is available on Github? It’s also licensed liberally (CC-BY-SA), so no reason not to come and fiddle with things with us! One of the features of using Github is integration with Travis CI (Continuous Integration).

What this basically means is that every commit to the Github repo for the website gets picked up by Travis and built to test that everything is working ok. You can actually see the history of the website builds there.

I’ve now got it set up so that when a build is successful on Travis, it will automatically publish the results to the main webserver and make it live. Our build system, Metalsmith, is a static site generator. This means that we build the entire website on our local computers when we make changes, and then publish all of those changes to the webserver. This change automates that process for us now by handling the building and publishing if everything is ok.

In fact, if everything is working the way I think it should, this very blog post will be the first one published using the new automated system! Hooray!

You can poke me or @paperdigits on discuss if you want more details or feel like playing with the website.

Mica

Speaking of @paperdigits, I want to close this blog post with a great big “Thank You!” to him as well. He’s the only other person insane enough to try and make sense of all the stuff I’ve done building the site so far, and he’s been extremely helpful hacking at the website code, writing articles, making good infrastructure suggestions, taking the initiative on things (t-shirts and github repos), and generally being awesome all around.

November 21, 2016

Last batch of ColorHugALS

I’ve got 9 more ColorHugALS devices in stock, and when they are sold there will be no more for sale. With all the supplier costs recently going up, my “sell at cost price” has turned into “make a small loss on each one”, which isn’t sustainable. It’s all OpenHardware, both the hardware design and the firmware itself, so if someone wanted to start building them for sale they would be doing it with my blessing. Of course, I’m happy to continue supporting the existing sold devices into the distant future.

colorhug-als1-large

In part the original goal has been met: the kernel and userspace support for the new SensorHID protocol works great, and ambient light functionality works out of the box for more people on more hardware. I’m slightly disappointed more people didn’t get involved in making the ambient lighting algorithms smarter, but I guess it’s quite a niche area of development.

Plus, in the Apple product development sense, killing off one device lets me start selling something else OpenHardware in the future. :)

November 18, 2016

Solar diagrams in FreeCAD

New feature in FreeCAD: Arch Sites can now display a solar diagram. More info at http://forum.freecadweb.org/viewtopic.php?f=23&p=145036#p145036

Miyazaki Tribute

I am dono, a CG freelancer from Paris, France. I use Blender as my main tool for both personal and professional work.

My workflow was a bit hectic during the creation of my tribute short to Hayao Miyazaki. There are a ton of ways to produce such a film anyway, and everyone has their own workflow, so the best I can do is simply share how I personally did it.

I have always loved the work of Hayao Miyazaki. I already had a lot of references from Blu-rays, art books, manga and such, so I didn’t spend a lot of time searching for references, but all I can say is that it’s quite an important task at the beginning of a project. Having good references can save a lot of time.

I simply started the project as a modeling and texturing exercise, just to practice. After modeling the bathhouse from “Spirited Away”, I thought it could be cool to do something more evolved.

miazaki_tribute_01

So I first did a layout with very low-poly meshes to have a realtime preview of the camera’s movements. I also extracted frames from the movies using Blu-ray footage to make two versions of different quality: one with low-res JPGs for realtime preview in the 3D viewport, and one with raw PNGs for the final renders.

miazaki_tribute_02

I used the realtime previews to edit it all together using Blender’s sequencer. I wanted to find a good tempo and feeling for the music, and with realtime playback in Blender’s viewport it was easy and smooth to build up. I edited the 3D viewport directly, by linking the scene in the sequencer, so I didn’t need to render anything!

miazaki_tribute_03

Next, I did the rotoscoping in Blender frame by frame. Having used realtime previews for the editing, I already knew exactly how many frames I had to rotoscope. That way I didn’t waste any time rotoscoping unnecessary footage, which was crucial because rotoscoping is very, very time consuming. The very important thing when you do rotoscoping is to separate the parts. You do not want to have everything in one part. Having separate layers makes it more flexible and faster.

miazaki_tribute_04

Then, I modeled and unwrapped the assets in Blender and textured them in Blender and GIMP. I used one blend file for each asset to limit blend file size, and used linking to bring everything together in one scene. I also created a blend file that contained a lot of materials (different kinds of metal, wood), so I could link them and reuse them at will. It was worth it, since having a modular workflow often really saves time.

miazaki_tribute_05

For the smoke, I used Blender’s smoke simulation, directly rendered in OpenGL in Blender Internal. You can see and correct any mistakes very easily. I also did some dust and fog passes with it.

miazaki_tribute_06

The ocean was done using the Ocean modifier in Blender. I baked an image sequence in EXR, and used these images to do the wave displacement and foam.

miazaki_tribute_07

For rendering I used Octane, since I wanted to try a new renderer for this project, but it could have been done with Cycles without any trouble. I rendered layers separately: characters, sets, backgrounds and FX. It was very good to render things separately: the render is faster, you can have bigger scenes with more polys, and most of all you can re-render a part if necessary (and it was very often the case) without rendering the whole image all over again. Renders were saved in 16-bit PNG for the color layers and in 32-bit EXR for the Z pass. I also rendered some masks and ID masks. This allowed me to correct details very quickly during compositing without having to render the whole image again. The rendering time for one frame was from 4 to 15 minutes.

miazaki_tribute_08

I finished the compositing with Natron, adding glow, vignetting and motion blur. The Z pass was used to add some fog, and the ID masks to correct some objects’ colors. When you have a lot of layer passes from Blender, it is very easy to do compositing and tweak things very quickly. I remember when I used to do everything in one single pass; I rendered over and over to fix errors and it was very time consuming. Sozap, a friend of mine and a very talented artist, taught me to use separate layers. It was a really great tip, and thanks to him I can work more efficiently.

miazaki_tribute_09

During production, I showed works in progress to my friends, because they could provide a new and fresh look at my work. Sometimes it is hard to receive criticism, but it is important to listen, as it can help you improve your work a lot. Without those critiques, my short most certainly wouldn’t look as it does now. Thanks again to Blackschmoll, Boby, Christophe, Clouclou, Cremuss, David, Félicia, Frenchman, Sozap, Stéphane, Virgil! And thanks to Ton Roosendaal, the Blender community, and the developers of Blender, GIMP and Natron!

miazaki_tribute_10

Check out the making of video!

November 16, 2016

Wed 2016/Nov/16

  • Debugging Rust code inside a C library

    An application that uses librsvg is in fact using a C library that has some Rust code inside it. We can debug C code with gdb as usual, but what about the Rust code?

    Fortunately, Rust generates object code! From the application's viewpoint, there is no difference between the C parts and the Rust parts: they are just part of the librsvg.so to which it linked, and that's it.

    Let's try this. I'll use the rsvg-view-3 program that ships inside librsvg — this is a very simple program that opens a window and displays an SVG image in it. If you build the rustification branch of librsvg (clone instructions at the bottom of that page), you can then run this in the toplevel directory of the librsvg source tree:

    tlacoyo:~/src/librsvg-latest (rustification)$ libtool --mode=execute gdb ./rsvg-view-3

    Since rsvg-view-3 is an executable built with libtool, we can't plainly run gdb on it. We need to invoke libtool with the incantation for "do your magic for shared library paths and run gdb on this binary".

    Gdb starts up, but no shared libraries are loaded yet. I like to set up a breakpoint in main() and run the program with its command-line arguments, so its shared libs will load, and then I can start setting breakpoints:

    (gdb) break main
    Breakpoint 1 at 0x40476c: file rsvg-view.c, line 583.
    
    (gdb) run tests/fixtures/reftests/bugs/340047.svg
    Starting program: /home/federico/src/librsvg-latest/.libs/rsvg-view-3 tests/fixtures/reftests/bugs/340047.svg
    
    ...
    
    Breakpoint 1, main (argc=2, argv=0x7fffffffdd48) at rsvg-view.c:583
    583         int retval = 1;
    (gdb)

    Okay! Now the rsvg-view-3 binary is fully loaded, with all its initial shared libraries. We can set breakpoints.

    But what does Rust call the functions we defined? The functions we exported to C code with the #[no_mangle] attribute of course get the name we expect, but what about internal, Rust-only functions? Let's ask gdb!

    Finding mangled names

    I have a length.rs file which defines an RsvgLength structure with a "parse" constructor: it takes a string which is a CSS length specifier, and returns an RsvgLength structure. I'd like to debug that RsvgLength::parse(), but what is it called in the object code?

    The gdb command to list all the functions it knows about is "info functions". You can pass a regexp to it to narrow down your search. I want a regexp that will match something-something-length-something-parse, so I'll use "ength.*parse". I skip the L in "Length" because I don't know how Rust mangles CamelCase struct names.

    (gdb) info functions ength.*parse
    All functions matching regular expression "ength.*parse":
    
    File src/length.rs:
    struct RsvgLength rsvg_internals::length::rsvg_length_parse(i8 *, enum class LengthDir);
    static struct RsvgLength rsvg_internals::length::{{impl}}::parse(struct &str, enum class LengthDir);

    All right! The first one, rsvg_length_parse(), is a function I exported from Rust so that C code can call it. The second one is the mangled name for the RsvgLength::parse() that I am looking for.

    Printing values

    Let's cut and paste the mangled name, set a breakpoint in it, and continue the execution:

    (gdb) break rsvg_internals::length::{{impl}}::parse
    Breakpoint 2 at 0x7ffff7ac6297: file src/length.rs, line 89.
    
    (gdb) cont
    Continuing.
    [New Thread 0x7fffe992c700 (LWP 26360)]
    [New Thread 0x7fffe912b700 (LWP 26361)]
    
    Thread 1 "rsvg-view-3" hit Breakpoint 2, rsvg_internals::length::{{impl}}::parse (string=..., dir=Both) at src/length.rs:89
    89              let (mut value, rest) = strtod (string);
    (gdb)

    Can we print values? Sure we can. I'm interested in the case where the incoming string argument contains "100%" — this will be parse()d into an RsvgLength value with length.length=1.0 and length.unit=Percent. Let's print the string argument:

    89              let (mut value, rest) = strtod (string);
    (gdb) print string
    $2 = {data_ptr = 0x8bd8e0 "12.0\377\177", length = 4}

    Rust strings are different from null-terminated C strings; they have a pointer to the char data, and a length value. Here, gdb is showing us a string that contains the four characters "12.0". I'll make this a conditional breakpoint so I can continue the execution until string comes in with a value of "100%", but I'll cheat: I'll use the C function strncmp() to test those four characters in string.data_ptr; I can't use strcmp() as the data_ptr is not null-terminated.

    (gdb) cond 2 strncmp (string.data_ptr, "100%", 4) == 0
    (gdb) cont
    Continuing.
    
    Thread 1 "rsvg-view-3" hit Breakpoint 2, rsvg_internals::length::{{impl}}::parse (string=..., dir=Vertical) at src/length.rs:89
    89              let (mut value, rest) = strtod (string);
    (gdb) p string
    $8 = {data_ptr = 0x8bd8e0 "100%", length = 4}

    All right! We got to the case we wanted. Let's execute the next line, which has "let (mut value, rest) = strtod (string);" in it, and print out the results:

    (gdb) next
    91              match rest.as_ref () {
    (gdb) print value
    $9 = 100
    (gdb) print rest
    $10 = {data_ptr = 0x8bd8e3 "%", length = 1}

    What type did "value" get assigned?

    (gdb) ptype value
    type = f64 

    A floating point value, as expected.

    You can see that the value of rest indicates that it is a string with "%" in it. The rest of the parse() function will decide that in fact it is a CSS length specified as a percentage, and will translate our value of 100 into a normalized value of 1.0 and a length.unit of LengthUnit.Percent.
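
    For orientation, here is a minimal sketch of what such a length type and its parse() constructor might look like. This is an assumption-laden illustration, with type and field names chosen to match the values seen in the gdb session (a normalized length of 1.0 and a unit of Percent), not librsvg's actual code:

    // Hedged sketch: names, variants and fields are illustrative
    // assumptions, not necessarily librsvg's real definitions.
    pub enum LengthUnit { Default, Percent }

    #[derive(Clone, Copy)]
    pub enum LengthDir { Horizontal, Vertical, Both }

    pub struct RsvgLength {
        pub length: f64,
        pub unit: LengthUnit,
        pub dir: LengthDir,
    }

    impl RsvgLength {
        pub fn parse(string: &str, dir: LengthDir) -> RsvgLength {
            let s = string.trim();

            if let Some(num) = s.strip_suffix('%') {
                // "100%" becomes a normalized length of 1.0 with unit Percent
                RsvgLength {
                    length: num.parse::<f64>().unwrap_or(0.0) / 100.0,
                    unit: LengthUnit::Percent,
                    dir,
                }
            } else {
                // plain numbers such as "12.0" keep their value
                RsvgLength {
                    length: s.parse::<f64>().unwrap_or(0.0),
                    unit: LengthUnit::Default,
                    dir,
                }
            }
        }
    }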

    Summary

    Rust generates object code with debugging information, which gets linked into your C code as usual. You can therefore use gdb on it.

    Rust creates mangled names for methods. Inside gdb, you can find the mangled names with "info functions"; pass it a regexp that is close enough to the method name you are looking for, unless you want tons of function names from the whole binary and all its libraries.

    You can print Rust values in gdb. Strings are special because they are not null-terminated C strings.

    You can set breakpoints, conditional breakpoints, and pretty much do all the gdb magic that you expect.

    I didn't have to do anything for gdb to work with Rust. The version that comes in openSUSE Tumbleweed works fine. Maybe it's because Rust generates standard object code with debugging information, which gdb readily accepts. In any case, it works out of the box and that's just as it should be.

Responsive HTML with CSS and JavaScript

In this article you can learn how to make a minimalist web page readable on different format readers, like larger desktop screens and handhelds. The ingredients are HTML, CSS and a little JavaScript. The goals for my home page are:

  • most of the layout resides in CSS in a stateless way
  • minimal JavaScript
  • on small displays – single column layout
  • on wide format displays – division of text in columns
  • count of columns adapts to browser window width or screen size
  • combine with markdown

CSS:

h1,h2,h3 {
  font-weight: bold;
  font-style: normal;
}

@media (min-width: 1000px) {
  .tiles {
    display: flex;
    justify-content: space-between;
    flex-wrap: wrap;
    align-items: flex-start;
    width: 100%;
  }
  .tile {
    flex: 0 1 49%;
  }
  .tile2 {
    flex: 1 280px
  }
  h1,h2,h3 {
    font-weight: normal;
  }
}
@media (min-width: 1200px) {
  @supports ( display: flex ) {
    .tile {
      flex: 0 1 24%;
    }
  }
}

The content in class=”tile” is shown in anything from one column up to 4 columns. tile2 has a fixed width and picks its column count by itself. All flex boxes behave like one normal column. With @media (min-width: 1000px) a bigger screen is assumed. Very likely there is an overlapping width range for bigger handhelds, tablets and smaller laptops. But the layout works reasonably, performs well when shrinking the web browser on a desktop or viewing fullscreen, and is well readable. Expressing all the tile stuff in flex: syntax helps keep compatibility with layout engines that do not support flex, like e.g. dillo.

For reading on high-DPI monitors on small devices, it is essential to set the font size properly. I have found no way to do that in CSS so far.

JavaScript:

function make_responsive () {
  if( typeof screen != "undefined" ) {
    var fontSize = "1rem";
    if( screen.width < 400 ) {
      fontSize = "2rem";
    }
    else if( screen.width < 720 ) {
      fontSize = "1.5rem";
    }
    else if( screen.width < 1320 ) {
      fontSize = "1rem";
    }
    if( typeof document.children === "object" ) {
      var obj = document.children[0]; // html node
      obj.style["font-size"] = fontSize;
    } else if( typeof document.body != "undefined" ) {
      document.body.style.fontSize = fontSize;
    }
  }
}
document.addEventListener( "DOMContentLoaded", make_responsive, false );
window.addEventListener( "orientationchange", make_responsive, false );

The above JavaScript carefully checks various browser attributes and scales the font size to compensate for small screens and keep the page readable. It works in all tested browsers (Firefox, Chrome, Konqueror, IE) except dillo, and on all tested platforms (Linux/KDE, Android, WP8.1).

Below is some markup to illustrate the approach.

HTML:

<div class="tiles">
<div class="tile"> My first text goes here. </div>
<div class="tile"> Second text goes here. </div>
<div class="tile"> Third text goes here. </div>
<div class="tile"> Fourth text goes here. </div>
</div>

In my previous articles you can read about using CSS3 for Translation and Web Open Font Format (WOFF) for Web Documents.

November 15, 2016

Fedora Hubs and Meetbot: A Recursive Tale

Fedora Hubs

Hubs and Chat Integration Basics

One of the planned features of Fedora Hubs that I am most excited about is chat integration with Fedora development chat rooms. As a mentor and onboarder of designers and other creatives into the Fedora project, I’ve witnessed IRC causing a lot of unnecessary pain and delay in the onboarding experience. The idea we have for Hubs is to integrate Fedora’s IRC channels into the Hubs web UI, requiring no IRC client installation and configuration on the part of users in order to be able to participate. The model is meant to be something like this:

Diagram showing individual hubs mapping to individual IRC channels / privmsgs.

By default, any given hub won’t have an IRC chat window. And whether or not a chat window appears on the hub is configurable by the hub admin (they can choose to not display the chat widget.) However, the hub admin may map their hub to a specific channel – whatever is appropriate for their team / project / self – and the chat widget on their hub will give visitors the possibility to interact with that team via chat, right in the web interface. Early mockups depict this feature looking something like this, for inclusion on a team or project hub (a PM window for user hubs):

mockup showing an irc widget for #fedora-design on the design team hub

Note this follows our general principle of enabling new contributors while not uprooting our existing ones. We followed this with HyperKitty – if you prefer to interact with mailing lists on the web, you can, but if you’ve got your own email-based workflow and client that you don’t want to change at all, HyperKitty doesn’t affect you. Same principle here: if you’ve got an IRC client you like, no change for you. This is just an additional interface by which new folks can interact with you in the same places you already are.

Implementation is planned to be based on waartaa, for which the lead Hubs developer Sayan Chowdhury is also an upstream developer.

Long-term, we (along with waartaa upstream) have been thinking about matrix as a better chat protocol that waartaa could support or be ported to in the future. (I personally have migrated from HexChat to Riot.im – popular matrix web + smartphone client – as my only client to connect to Freenode. The experiment has gone quite well. I access my usual freenode channels using Riot.im’s IRC bridges.) So when we think about implementing chat, we also keep in mind the protocol underneath may change at some point.

That’s a high-level explanation of how we’re thinking about integrating chat into Hubs.

Next Level: HALP!!1

As of late, Aurélien Bompard has been investigating the “Help/Halp” feature of Fedora Hubs. (https://pagure.io/fedora-hubs/issue/98)

The general idea is to have a widget that aggregates all help requests (created using the meetbot #help command while meeting minutes are being recorded) across all teams / meetings and have a single place to sort through them. Folks (particularly new contributors) looking for things they can help out with can refer to it as a nice, timely bucket of tasks that are needed, with clear suggestions for how to get started. (Timely, because new contributors want to help with tasks that are needed now and not waste their time on requests that are stale and are no longer needed or already fixed. On the other side, the widget helps bring some attention to the requests people in need of help are making, hopefully increasing the chances they’ll get the help they are looking for.)

The mechanism for generating the list of help requests is to gather #help requests from meeting minutes and display them from most recent to least recent. The chances you’ll find a task that is actually needed now are high. As the requests age, they scroll further and further back into the backlog until they are no longer displayed (the idea being that, if enough time has passed, the help is likely no longer needed or has already been provided). The contact point for would-be helpers is easy – the person who ran the #help command in the meeting is listed as a contact for you to sync up with to get started.

The mockups are available in the ticket, but are shown below as well for purposes of illustration:

Main help widget, showing active help requests across various Fedora teams

Mockup showing UI panel where someone can volunteer to help someone with a request.

An issue that came up has to do with the mapping we talked about earlier. Many Fedora team meetings occur in #fedora-meeting-*; e.g., #fedora-meeting, #fedora-meeting-1, etc. Occasionally, Fedora meetings occur in a team channel (e.g., #fedora-design) that may not map up with the team’s ‘namespace’ in other applications (e.g., our mailing list is design-team. Our pagure.io repo is ‘/design’.) Based on how Fedora teams use IRC and how meetbot works, we cannot rely on the channel name to get the correct namespace / hub name for a team making a request during a meeting using the meetbot #help command.

Meetbot does also have a mechanism to set a topic for a meeting, and many teams use this to identify the team meeting – in fact, it’s required to start a meeting now – but depending on who is running the meeting, this freeform field can vary. (For instance – the design team has meetings marked fedora_design, fedora-design, designteam, design-team, design, etc. etc.) So the topic field in the fedmsg meetbot puts out may also not be reliable for pointing to a hub / team.

One idea we talked about in our meeting a couple of weeks ago, as well as in last week’s meeting, was having some kind of lookup table to map a team to all of its various namespaces in different applications. The problem with this is that, because meetbot issues the fedmsgs used to generate the halp widget’s list of requests as soon as the #help command is issued, it is meetbot itself that would need to look up the mapping so that it has the correct team name in its fedmsg. We couldn’t write some kind of script or something to reconcile things after the meeting concluded. Meetbot itself needs to be changed for this to work – for the #help requests put out on fedmsg by meetbot to have the correct team names associated with them.

Which Upstream is Less Decomposed?

Do you see dead upstreams?

Zombie artwork credit: Zombies Silhouette by GDJ on OpenClipArt.

We determined we needed to make a change to meetbot. meetbot is a plugin to an IRC bot called supybot. Fedora infrastructure doesn’t actually use supybot to run meetbot, though. (There haven’t been any commits to supybot for about 2 years.) Instead, we use a fork called limnoria that is Python 3-based and has various enhancements applied to it.

How about meetbot? Well, meetbot hasn’t been touched by its upstream since 2009 (7 years ago). I believe Fedora carries some local patches to it. In talking with Kevin Fenzi, we discovered there is a newer fork of meetbot maintained by the upstream OpenStack team. That fork hadn’t seen activity in 3 years, according to GitHub.

Aurélien contacted the upstream OpenStack folks and discovered that, pending a modification to implement file-based configs to enable deployment using tools like Ansible, they were looking to port their supybot plugins (including meetbot) to errbot and migrate to that. So we had a choice – we could implement what we needed on top of their newer meetbot as is and they would be willing to work with us, or we could join their team in migrating to errbot, participate in the meetbot porting process, and use errbot going forward. Errbot appears to have a very active upstream with many plugins available already.

How Far Down the Spiral Do We Go?

To unravel ourselves a bit from the spiral of recursion here… remember, we’re trying to implement a simple Help widget for Fedora Hubs. As we’ve discovered, the technology underlying the features we need to interact with to make this happen is a bit zombified. What to do?

We agreed that the overall mission of Fedora Hubs as a project is to make collaboration in Fedora more efficient and easy for everyone. In this situation specifically, we decided that migrating to errbot and upgrading a ported meetbot to allow for mapping team namespaces to meeting minutes would be the right way to go. It’s definitely not the easy way, but we think it’s the right way.

It’s our hope in general that as we work our way through implementing Hubs as a unified interface for collaboration in Fedora, we expose deficiencies present in the underlying apps and are able to identify and correct them as we go. This hopefully will result in a better experience for everyone using those apps, whether or not they are Hubs users.

Want to Help?

we need your help!

Does this sound interesting? Want to help us make it happen? Here’s what you can do:

  • Come say hi on the hubs-devel mailing list, introduce yourself, read up on our past meeting minutes.
  • Join us during our weekly meetings on Tuesdays at 15:00 UTC in #fedora-hubs on irc.freenode.net.
  • Reach out to Aurélien and coordinate with him if you’d like to help with the meetbot porting effort to errbot. You may want to check out those codebases as well.
  • Reach out to Sayan if you’d like to help with the implementation of waartaa to provide IRC support in Fedora Hubs!
  • Hit me up if you’ve got ideas or would like to help out with any of the UX involved!

Ideas, feedback, questions, etc. provided in a respectful manner are welcome in the comments.

CSS3 for Translation

Years ago I used a CMS to bring content to a web page. But with evolving CSS, markdown syntax and comfortable git hosting, publication of smaller sites can be handled without a CMS. My home page is translated, so I wanted to express page translations in a stateless language. The ingredients are simple. My requirements are:

  • stateless CSS, no javascript
  • integrable with markdown syntax (html tags are ok’ish)
  • default language shall remain visible, when no translation was found
  • hopefully searchable by robots (Those need to understand CSS.)

CSS:

/* hide translations initially */
.hide {
  display: none
}
/* show a browser detected translation */
:lang(de) { display: block; }
li:lang(de) { display: list-item; }
a:lang(de) { display: inline; }
em:lang(de) { display: inline; }
span:lang(de) { display: inline; }

/* hide default language, if a translation was found */
:lang(de) ~ [lang=en] {
 display: none;
}

The CSS uses the display property of the element matched by the :lang() selector. However, the selectors for the different display: types are somewhat long, not as short as I would have liked.

Markdown:

<span lang="de" class="hide"> Hallo _Welt_. </span>
<span lang="en"> Hello _World_. </span>

Even so, the plain markdown text does not look as straightforward as before, but it is acceptable IMO.

Hiding the default language uses the sibling element combinator E ~ F and selects an element containing the lang=”en” attribute. Matching elements are hidden (display: none;). Here that is the default language string “Hello _World_.” with the lang=”en” attribute. This approach works fine in Firefox (49), Chrome (54), Konqueror (4.18, khtml & WebKit) and Internet Explorer on WP8.1. Dillo (3.0.5) does not show the translation, only the English text, which is the correct fallback for an engine without :lang() support.

During my search I found approaches for content swapping with CSS, such as :lang()::before { content: xxx; }. But those were not very accessible. Comments and ideas welcome.

Lyon GNOME Bug day #1

Last Friday, both a GNOME bug day and a bank holiday, a few of us got together to squash some bugs, and discuss GNOME and GNOME technologies.

Guillaume, a newcomer to our group, tested the captive portal support for NetworkManager and GNOME in Gentoo, and added instructions on how to enable it to their wiki. He also tested a gateway-related configuration problem, the patch for which I merged after a code review. Near the end of the session, he also rebuilt WebKitGTK+ to test why Google Docs was not working for him anymore in Web. And nobody believed that he could build it that quickly. Looks like opinions based on past experiences are quite hard to change.

Mathieu worked on removing jhbuild's .desktop file as nobody seems to use it, and it was creating the Sundry category for him, in gnome-shell. He also spent time looking into the tracker blocker that is Mozilla's Focus, based on disconnectme's block lists. It's not as effective as uBlock when it comes to blocking adverts, but the memory and performance improvements, and the slow churn rate, could make it a good default blocker to have in Web.

Haïkel looked into using Emeus, potentially the new GTK+ 4.0 layout manager, to implement the series properties page for Videos.

Finally, I added Bolso to jhbuild, and struggled to get gnome-online-accounts/gnome-keyring to behave correctly in my installation, as the application just did not want to log in properly to the service. I also discussed Fedora's privacy policy (inappropriate for Fedora Workstation, as it doesn't cover the services used in the default installation), a potential design for Flatpak support of joypads and removable devices in general, as well as the future design of the Network panel.

November 14, 2016

João Almeida's darktable Presets


João Almeida's darktable Presets

A gorgeous set of film emulation for darktable

I realize that I’m a little late to this, but photographer João Almeida has created a wonderful set of film emulation presets for darktable that he uses in his own workflow for personal and commissioned work. Even more wonderful is that he has graciously released them for everyone to use.

These film emulations started as a personal side project for João, and he adds a disclaimer to them that he did not optimize them all for each brand or model of his cameras. His end goal was for these to be as simple as possible by using a few darktable modules. He describes it best on his blog post about them:

The end goal of these presets is to be as simple as possible by using few Darktable modules, it works solely by manipulating Lab Tone Curves for color manipulation, black & white films rely heavily on Channel Mixer. Since what I was aiming for was the color profiles of each film, other traits related with processing, lenses and others are unlikely to be implemented, this includes: grain, vignetting, light leaks, cross-processing, etc.

Some before/after samples from his blog post:

João Almeida Portra 400 sample João Portra 400
(Click to compare to original)
João Almeida Kodachrome 64 sample João Kodachrome 64
(Click to compare to original)
João Almeida Velvia 50 sample João Velvia 50
(Click to compare to original)

You can read more on João’s website and you can see many more images on Flickr with the #t3mujinpack tag. The full list of film emulations included with his pack:

  • AGFA APX 25, 100
  • Fuji Astia 100F
  • Fuji Neopan 1600, Acros 100
  • Fuji Pro 160C, 400H, 800Z
  • Fuji Provia 100F, 400F, 400X
  • Fuji Sensia 100
  • Fuji Superia 100, 200, 400, 800, 1600, HG 1600
  • Fuji Velvia 50, 100
  • Ilford Delta 100, 400, 3200
  • Ilford FP4 125
  • Ilford HP5 Plus 400
  • Ilford XP2
  • Kodak Ektachrome 100 GX, VS
  • Kodak Ektar 100
  • Kodak Elite Chrome 400
  • Kodak Kodachrome 25, 64, 200
  • Kodak Portra 160 NC, VC
  • Kodak Portra 400 NC, UC, VC
  • Kodak Portra 800
  • Kodak T-Max 3200
  • Kodak Tri-X 400

If you see João around the forums stop and say hi (and maybe a thank you). Even better, if you find these useful, consider buying him a beer (donation link is on his blog post)!

Mon 2016/Nov/14

  • Exposing Rust objects to C code

    When librsvg parses an SVG file, it will encounter elements that generate path-like objects: lines, rectangles, polylines, circles, and actual path definitions. Internally, librsvg translates all of these into path definitions. For example, librsvg will read an element from the SVG that defines a rectangle like

    <rect x="20" y="30" width="40" height="50" style="..."></rect> 

    and translate it into a path definition with the following commands:

    move_to (20, 30)
    line_to (60, 30)
    line_to (60, 80)
    line_to (20, 80)
    line_to (20, 30)
    close_path ()

    But where do those commands live? How are they fed into Cairo to actually draw a rectangle?

    Get your Cairo right here

    One of librsvg's public API entry points is rsvg_handle_render_cairo():

    gboolean rsvg_handle_render_cairo (RsvgHandle * handle, cairo_t * cr);

    Your program creates an appropriate Cairo surface (a window, an off-screen image, a PDF surface, whatever), obtains a cairo_t drawing context for the surface, and passes the cairo_t to librsvg using that rsvg_handle_render_cairo() function. It means, "take this parsed SVG (the handle), and render it to this cairo_t drawing context".

    SVG files may look like an XML-ization of a tree of graphical objects: here is a group which contains a blue rectangle and a green circle, and here is a closed Bézier curve with a black outline and a red fill. However, SVG is more complicated than that; it allows you to define objects once and recall them later many times, it allows you to use CSS cascading rules for applying styles to objects ("all the objects in this group are green unless they define another color on their own"), to reference other SVG files, etc. The magic of librsvg is that it resolves all of that into drawing commands for Cairo.

    Feeding a path into Cairo

    This is easy enough: Cairo provides an API for its drawing context with functions like

    void cairo_move_to (cairo_t *cr, double x, double y);
    
    void cairo_line_to (cairo_t *cr, double x, double y);
    
    void cairo_close_path (cairo_t *cr);
    
    /* Other commands omitted */

    Librsvg doesn't feed paths to Cairo as soon as it parses them from the XML; that is deferred until rendering time. In the meantime, librsvg has to keep an intermediate representation of path data.

    Librsvg uses an RsvgPathBuilder object to hold on to this path data for as long as needed. The API is simple enough:

    pub struct RsvgPathBuilder {
       ...
    }
    
    impl RsvgPathBuilder {
        pub fn new () -> RsvgPathBuilder { ... }
    
        pub fn move_to (&mut self, x: f64, y: f64) { ... }
    
        pub fn line_to (&mut self, x: f64, y: f64) { ... }
    
        pub fn curve_to (&mut self, x2: f64, y2: f64, x3: f64, y3: f64, x4: f64, y4: f64) { ... }
    
        pub fn close_path (&mut self) { ... }
    }

    This mimics the sub-API of cairo_t to build paths, except that instead of feeding them immediately into the Cairo drawing context, RsvgPathBuilder builds an array of path commands that it will later replay to a given cairo_t. Let's look at the methods of RsvgPathBuilder.

    "pub fn new () -> RsvgPathBuilder" - this doesn't take a self parameter; you could call it a static method in languages that support classes. It is just a constructor.

    "pub fn move_to (&mut self, x: f64, y: f64)" - This one is a normal method, as it takes a self parameter. It also takes (x, y) double-precision floating point values for the move_to command. Note the "&mut self": this means that you must pass a mutable reference to an RsvgPathBuilder, since the method will change the builder's contents by adding a move_to command. It is a method that changes the state of the object, so it must take a mutable object.

    The other methods for path commands are similar to move_to. None of them have return values; if they did, they would have a "-> ReturnType" after the argument list.
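
    For example, the rectangle from the beginning of this post could be described with calls like the following. This is just an illustrative sketch; rect_as_path is a hypothetical helper, not a function that exists in librsvg:

    fn rect_as_path (builder: &mut RsvgPathBuilder, x: f64, y: f64, w: f64, h: f64) {
        // Same commands as in the <rect> example above
        builder.move_to (x, y);
        builder.line_to (x + w, y);
        builder.line_to (x + w, y + h);
        builder.line_to (x, y + h);
        builder.line_to (x, y);
        builder.close_path ();
    }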

    But that RsvgPathBuilder is a Rust object! And it still needs to be called from the C code in librsvg that hasn't been ported over to Rust yet. How do we do that?

    Exporting an API from Rust to C

    C doesn't know about objects with methods, even though you can fake them pretty well with structs and pointers to functions. Rust doesn't try to export structs with methods in a fancy way; you have to do that by hand. This is no harder than writing a GObject implementation in C, fortunately.

    Let's look at the C header file for the RsvgPathBuilder object, which is entirely implemented in Rust. The C header file is rsvg-path-builder.h. Here is part of that file:

    typedef struct _RsvgPathBuilder RsvgPathBuilder;
    
    G_GNUC_INTERNAL
    void rsvg_path_builder_move_to (RsvgPathBuilder *builder,
                                    double x,
                                    double y);
    G_GNUC_INTERNAL
    void rsvg_path_builder_line_to (RsvgPathBuilder *builder,
                                    double x,
                                    double y);

    Nothing special here. RsvgPathBuilder is an opaque struct; we declare it like that just so we can take a pointer to it as in the rsvg_path_builder_move_to() and rsvg_path_builder_line_to() functions.

    How about the Rust side of things? This is where it gets more interesting. This is part of path-builder.rs:

    extern crate cairo;                                                         // 1
    
    pub struct RsvgPathBuilder {                                                // 2
        path_segments: Vec<cairo::PathSegment>,
    }
    
    impl RsvgPathBuilder {                                                      // 3
        pub fn move_to (&mut self, x: f64, y: f64) {                            // 4
            self.path_segments.push (cairo::PathSegment::MoveTo ((x, y)));      // 5
        }
    }
    
    #[no_mangle]                                                                    // 6
    pub extern fn rsvg_path_builder_move_to (raw_builder: *mut RsvgPathBuilder,     // 7
                                             x: f64,
                                             y: f64) {
        assert! (!raw_builder.is_null ());                                          // 8
    
        let builder: &mut RsvgPathBuilder = unsafe { &mut (*raw_builder) };         // 9
    
        builder.move_to (x, y);                                                     // 10
    }

    Let's look at the numbered lines:

    1. We use the cairo crate from the excellent gtk-rs, the Rust binding for GTK+ and Cairo.

    2. This is our Rust structure. Its fields are not important for this discussion; they are just what the struct uses to store Cairo path commands.

    3. Now we begin implementing methods for that structure. These are Rust-side methods, not visible from C. In 4 and 5 we see the implementation of ::move_to(); it just creates a new cairo::PathSegment and pushes it to the vector of segments.

    6. The "#[no_mangle]" line instructs the Rust compiler to put the following function name in the .a library just as it is, without any name mangling. The function name without name mangling looks just like rsvg_path_builder_move_to to the linker, as we expect. A name-mangled Rust function looks like _ZN14rsvg_internals12path_builder15RsvgPathBuilder8curve_to17h1b8f49042ff19daaE — you can explore these with "objdump -x rust/target/debug/librsvg_internals.a"

    7. "pub extern fn rsvg_path_builder_move_to (raw_builder: *mut RsvgPathBuilder". This is a public function with an exported symbol in the .a file, not an internal one, as it will be called from the C code. And the "raw_builder: *mut RsvgPathBuilder" is Rust-ese for "a pointer to an RsvgPathBuilder with mutable contents". If this were only an accessor function, we would use a "*const RsvgPathBuilder" argument type.

    8. "assert! (!raw_builder.is_null ());". You can read this as "g_assert (raw_builder != NULL);" if you come from GObject land.

    9. "let builder: &mut RsvgPathBuilder = unsafe { &mut (*raw_builder) }". This declares a builder variable, of type &mut RsvgPathBuilder, which is a reference to a mutable path builder. The variable gets intialized with the result of "&mut (*raw_builder)": first we de-reference the raw_builder pointer with the asterisk, and convert that to a mutable reference with the &mut. De-referencing pointers that come from who-knows-where is an unsafe operation in Rust, as the compiler cannot guarantee their validity, and so we must wrap that operation with an unsafe{} block. This is like telling the compiler, "I acknowledge that this is potentially unsafe". Already this is better than life in C, where *every* de-reference is potentially dangerous; in Rust, only those that "bring in" pointers from the outside are potentially dangerous.

    10. Now we have a Rust-side reference to an RsvgPathBuilder object, and we can call the builder.move_to() method as in regular Rust code.

    Those are methods. And the constructor/destructor?

    Excellent question! We defined an absolutely conventional method, but we haven't created a Rust object and sent it over to the C world yet. And we haven't taken a Rust object from the C world and destroyed it when we are done with it.

    Construction

    Here is the C prototype for the constructor, exactly as you would expect from a GObject library:

    G_GNUC_INTERNAL
    RsvgPathBuilder *rsvg_path_builder_new (void);

    And here is the corresponding implementation in Rust:

    #[no_mangle]
    pub unsafe extern fn rsvg_path_builder_new () -> *mut RsvgPathBuilder {    // 1
        let builder = RsvgPathBuilder::new ();                                 // 2
    
        let boxed_builder = Box::new (builder);                                // 3
    
        Box::into_raw (boxed_builder)                                          // 4
    }

    1. Again, this is a public function with an exported symbol. However, this whole function is marked as unsafe since it returns a pointer, a *mut RsvgPathBuilder. To Rust this declaration means, "this pointer will be out of your control", hence the unsafe. With that we acknowledge our responsibility in handling the memory to which the pointer refers.

    2. We instantiate an RsvgPathBuilder with normal Rust code...

    3. ... and ensure that that object is put in the heap by Boxing it. Boxing is Rust's primitive for putting data in the program's heap; it allows the object in question to outlive the scope where it got created, i.e. the duration of the rsvg_path_builder_new() function.

    4. Finally, we call Box::into_raw() to ask Rust to give us a pointer to the contents of the box, i.e. the actual RsvgPathBuilder struct that lives there. This statement doesn't end in a semicolon, so it is the return value for the function.

    You could read this as "builder = g_new (...); initialize (builder); return builder;". Allocate something in the heap and initialize it, and return a pointer to it. This is exactly what the Rust code is doing.

    Destruction

    This is the C prototype for the destructor. This is not a reference-counted GObject; it is just an internal thing in librsvg, which does not need reference counting.

    G_GNUC_INTERNAL
    void rsvg_path_builder_destroy (RsvgPathBuilder *builder);

    And this is the implementation in Rust:

    #[no_mangle]
    pub unsafe extern fn rsvg_path_builder_destroy (raw_builder: *mut RsvgPathBuilder) {    // 1
        assert! (!raw_builder.is_null ());                                                  // 2
    
        let _ = Box::from_raw (raw_builder);                                                // 3
    }

    1. Same as before; we declare the whole function as public, exported, and unsafe since it takes a pointer from who-knows-where.

    2. Same as in the implementation for move_to(), we assert that we got passed a non-null pointer.

    3. Let's take this bit by bit. "Box::from_raw (raw_builder)" is the counterpart to Box::into_raw() from above; it takes a pointer and wraps it with a Box, which Rust knows how to de-reference into the actual object it contains. "let _ =" is to have a variable binding in the current scope (the function we are implementing). We don't care about the variable's name, so we use _ as a default name. The variable is now bound to a reference to an RsvgPathBuilder. The function terminates, and since the _ variable goes out of scope, Rust frees the memory for the RsvgPathBuilder. You can read this idiom as "g_free (builder)".

    Recapitulating

    Make your object. Box it. Take a pointer to it with Box::into_raw(), and send it off into the wild west. Bring back a pointer to your object. Unbox it with Box::from_raw(). Let it go out of scope if you want the object to be freed. Acknowledge your responsibilities with unsafe and that's all!
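
    Put together, the whole pattern fits in a few lines. This is a minimal sketch with a stand-in Foo type and hypothetical foo_new()/foo_destroy() entry points, not code from librsvg:

    pub struct Foo {
        value: i32
    }

    #[no_mangle]
    pub unsafe extern fn foo_new () -> *mut Foo {
        // Allocate on the heap and hand the raw pointer over to the C world
        Box::into_raw (Box::new (Foo { value: 0 }))
    }

    #[no_mangle]
    pub unsafe extern fn foo_destroy (raw: *mut Foo) {
        assert! (!raw.is_null ());

        // Re-box the pointer; the Box goes out of scope at the end of the
        // function, which frees the Foo
        let _ = Box::from_raw (raw);
    }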

    Making the functions visible to C

    The code we just saw lives in path-builder.rs. By convention, the place where one actually exports the visible API from a Rust library is a file called lib.rs, and here is part of that file's contents in librsvg:

    pub use path_builder::{
        rsvg_path_builder_new,
        rsvg_path_builder_destroy,
        rsvg_path_builder_move_to,
        rsvg_path_builder_line_to,
        rsvg_path_builder_curve_to,
        rsvg_path_builder_close_path,
        rsvg_path_builder_arc,
        rsvg_path_builder_add_to_cairo_context
    };
    
    mod path_builder; 

    The mod path_builder indicates that lib.rs will use the path_builder sub-module. The pub use block exports the functions listed in it to the outside world. They will be visible as symbols in the .a file.

    The Cargo.toml (akin to a toplevel Makefile.am) for librsvg's little sub-library has this bit:

    [lib]
    name = "rsvg_internals"
    crate-type = ["staticlib"]

    This means that the sub-library will be called librsvg_internals.a, and it is a static library. I will link that into my master librsvg.so. If this were a stand-alone shared library entirely implemented in Rust, I would use the "cdylib" crate type instead.

    Linking into the main .so

    In librsvg/Makefile.am I have a very simplistic scheme for building the librsvg_internals.a library with Rust's tools, and linking the result into the main librsvg.so:

    RUST_LIB = rust/target/debug/librsvg_internals.a
    
    .PHONY: rust/target/debug/librsvg_internals.a
    rust/target/debug/librsvg_internals.a:
    	cd rust && \
    	cargo build --verbose
    
    librsvg_@RSVG_API_MAJOR_VERSION@_la_CPPFLAGS = ...
    
    librsvg_@RSVG_API_MAJOR_VERSION@_la_CFLAGS = ...
    
    librsvg_@RSVG_API_MAJOR_VERSION@_la_LDFLAGS = ...
    
    librsvg_@RSVG_API_MAJOR_VERSION@_la_LIBADD = \
    	$(LIBRSVG_LIBS) 	\
    	$(LIBM)			\
    	$(RUST_LIB)

    This uses a .PHONY target for librsvg_internals.a, so "cargo build" will always be called on it. Cargo already takes care of dependency tracking; there is no need for make/automake to do that.

    I put the filename of my library in a RUST_LIB variable, which I then reference from LIBADD. This gets librsvg_internals.a linked into the final librsvg.so.

    When you run "cargo build" just like that, it creates a debug build in a target/debug subdirectory. I haven't looked for a way to make it play together with Automake when one calls "cargo build --release": that one puts things in a different directory, called target/release. Rust's tooling is more integrated that way, while in the Autotools world I'm expected to pass any CFLAGS for compilation by hand, depending on whether I'm doing a debug build or a release build. Any ideas for how to do this cleanly are appreciated.

    I don't have any code in configure.ac to actually detect if Rust is present. I'm just assuming that it is for now; fixes are appreciated :)

    Using the Rust functions from C

    There is no difference from what we had before! This comes from rsvg-shapes.c:

    static RsvgPathBuilder *
    _rsvg_node_poly_create_builder (const char *value,
                                    gboolean close_path)
    {
        RsvgPathBuilder *builder;
    
        ...
    
        builder = rsvg_path_builder_new ();
    
        rsvg_path_builder_move_to (builder, pointlist[0], pointlist[1]);
    
        ...
    
        return builder;
    }

    Note that we are calling rsvg_path_builder_new() and rsvg_path_builder_move_to(), and returning a pointer to an RsvgPathBuilder structure as usual. However, all of those are implemented in the Rust code. The C code has no idea!

    This is the magic of Rust: it allows you to move your C code bit by bit into a safe language. You don't have to do a whole rewrite in a single step. I don't know any other languages that let you do that.

November 08, 2016

November 06, 2016

darktable 2.2.0rc0 released

we’re proud to announce the first release candidate for the upcoming 2.2 series of darktable, 2.2.0rc0!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.0rc0.

as always, please don’t use the autogenerated tarball provided by github, but only our tar.xz. the checksum is:

a084ef367b1a1b189ad11a6300f7e0cadb36354d11bf0368de7048c6a0732229 darktable-2.2.0~rc0.tar.xz

and the changelog as compared to 2.0.0 can be found below.

  • Well over 2 thousand commits since 2.0.0

The Big Ones:

Quite Interesting Changes:

  • Split the database into a library containing images and a general one with styles, presets and tags. That allows having access to those when for example running with a :memory: library
  • Support running on platforms other than x86 (64bit little-endian, currently ARM64 only) (https://www.darktable.org/2016/04/running-on-non-x86-platforms/)
  • darktable is now happy to use smaller stack sizes. That should allow using musl libc
  • Allow darktable-cli to work on directories
  • Allow to import/export tags from Lightroom keyword files
  • Allow using modifier keys to modify the step for sliders and curves. Defaults: Ctrl - x0.1; Shift - x10
  • Allow using the [keyboard] cursor keys to interact with sliders, comboboxes and curves; modifiers apply too
  • Support presets in “more modules” so you can quickly switch between your favorite sets of modules shown in the GUI
  • Add range operator and date compare to the collection module
  • Support the Exif date and time when importing photos from camera
  • Rudimentary CYGM and RGBE color filter array support
  • The preview pipe now runs the demosaic module too; its input is no longer pre-demosaiced, just downscaled without demosaicing
  • Nicer web gallery exporter – now touch friendly!
  • OpenCL implementation of VNG/VNG4 demosaicing methods
  • OpenCL implementation of Markesteijn demosaicing method for X-Trans sensors
  • Filter-out some useless EXIF tags when exporting, helps keep EXIF size under ~64Kb
  • OpenCL: properly discard CPU-based OpenCL devices. Fixes crashes on startup with some partially-working OpenCL implementations like pocl.
  • darktable-cli: do not even try to open display, we don’t need it.
  • Hotpixels module: make it actually work for X-Trans

Some More Changes, Probably Not Complete:

  • Drop darktable-viewer tool in favor of slideshow view
  • Remove gnome keyring password backend, use libsecret instead
  • When using libsecret to store passwords then put them into the correct collection
  • Hint via window manager when import/export is done
  • Quick tagging searches anywhere, not just at the start of tags
  • The sidecar Xmp schema for history entries is now more consistent and less error prone
  • Rawspeed: fixes for building with libjpeg (as opposed to libjpeg-turbo)
  • Give the choice of equidistant and proportional feathering when using elliptical masks
  • Add geolocation to watermark variables
  • Fix some crashes with missing configured ICC profiles
  • Support greyscale color profiles
  • OSX: add trash support (thanks to Michael Kefeder for initial patch)
  • Attach Xmp data to EXR files
  • Several fixes for HighDPI displays
  • Use Pango for text layout, thus supporting RTL languages
  • Many bugs got fixed and some memory leaks plugged
  • The usermanual was updated to reflect the changes in the 2.2 series

Changed Dependencies:

  • CMake 3.0 is now required.
  • In order to compile darktable you now need at least gcc-4.7+/clang-3.3+, but better use gcc-5.0+
  • Drop support for OS X 10.6
  • Bump required libexiv2 version up to 0.24
  • Bump GTK+ requirement to gtk-3.14. (because even Debian/stable has it)
  • Bump GLib requirement to glib-2.40.
  • Port to OpenJPEG2
  • SDL is no longer needed.

A special note to all the darktable Fedora users: Fedora-provided darktable packages are intentionally built with Lua disabled. Thus, Lua scripting will not work. This breaks e.g. darktable-gimp integration. Please bug Fedora. In the meantime you could fix that by self-compiling darktable (pass -DDONT_USE_INTERNAL_LUA=OFF to cmake in order to enable use of bundled Lua5.2.4).

Base Support

  • Canon EOS-1D X Mark II
  • Canon EOS 5D Mark IV
  • Canon EOS 80D
  • Canon EOS 1300D
  • Canon EOS Kiss X80
  • Canon EOS Rebel T6
  • Canon EOS M10
  • Canon PowerShot A720 IS (dng)
  • Canon PowerShot G7 X Mark II
  • Canon PowerShot G9 X
  • Canon PowerShot SD450 (dng)
  • Canon PowerShot SX130 IS (dng)
  • Canon PowerShot SX260 HS (dng)
  • Canon PowerShot SX510 HS (dng)
  • Fujifilm FinePix S100FS
  • Fujifilm X-Pro2
  • Fujifilm X-T2
  • Fujifilm X70
  • Fujifilm XQ2
  • GITUP GIT2 (chdk-a, chdk-b)
  • (most nikon cameras here are just fixes, and they were supported before already)
  • Nikon 1 AW1 (12bit-compressed)
  • Nikon 1 J1 (12bit-compressed)
  • Nikon 1 J2 (12bit-compressed)
  • Nikon 1 J3 (12bit-compressed)
  • Nikon 1 J4 (12bit-compressed)
  • Nikon 1 J5 (12bit-compressed, 12bit-uncompressed)
  • Nikon 1 S1 (12bit-compressed)
  • Nikon 1 S2 (12bit-compressed)
  • Nikon 1 V1 (12bit-compressed)
  • Nikon 1 V2 (12bit-compressed)
  • Nikon 1 V3 (12bit-compressed, 12bit-uncompressed)
  • Nikon Coolpix A (14bit-compressed)
  • Nikon Coolpix P330 (12bit-compressed)
  • Nikon Coolpix P340 (12bit-compressed, 12bit-uncompressed)
  • Nikon Coolpix P6000 (12bit-uncompressed)
  • Nikon Coolpix P7000 (12bit-uncompressed)
  • Nikon Coolpix P7100 (12bit-uncompressed)
  • Nikon Coolpix P7700 (12bit-compressed)
  • Nikon Coolpix P7800 (12bit-compressed)
  • Nikon D1 (12bit-uncompressed)
  • Nikon D100 (12bit-compressed, 12bit-uncompressed)
  • Nikon D1H (12bit-compressed, 12bit-uncompressed)
  • Nikon D1X (12bit-compressed, 12bit-uncompressed)
  • Nikon D200 (12bit-compressed, 12bit-uncompressed)
  • Nikon D2H (12bit-compressed, 12bit-uncompressed)
  • Nikon D2Hs (12bit-compressed, 12bit-uncompressed)
  • Nikon D2X (12bit-compressed, 12bit-uncompressed)
  • Nikon D3 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D300 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3000 (12bit-compressed)
  • Nikon D300S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3100 (12bit-compressed)
  • Nikon D3200 (12bit-compressed)
  • Nikon D3300 (12bit-compressed, 12bit-uncompressed)
  • Nikon D3400 (12bit-compressed)
  • Nikon D3S (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D3X (14bit-compressed, 14bit-uncompressed)
  • Nikon D4 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D40 (12bit-compressed, 12bit-uncompressed)
  • Nikon D40X (12bit-compressed, 12bit-uncompressed)
  • Nikon D4S (14bit-compressed)
  • Nikon D5 (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D50 (12bit-compressed)
  • Nikon D500 (14bit-compressed, 12bit-compressed)
  • Nikon D5000 (12bit-compressed, 12bit-uncompressed)
  • Nikon D5100 (14bit-compressed, 14bit-uncompressed)
  • Nikon D5200 (14bit-compressed)
  • Nikon D5300 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
  • Nikon D5500 (12bit-uncompressed, 14bit-compressed, 14bit-uncompressed)
  • Nikon D60 (12bit-compressed, 12bit-uncompressed)
  • Nikon D600 (14bit-compressed, 12bit-compressed)
  • Nikon D610 (14bit-compressed, 12bit-compressed)
  • Nikon D70 (12bit-compressed)
  • Nikon D700 (12bit-compressed, 12bit-uncompressed, 14bit-compressed)
  • Nikon D7000 (14bit-compressed, 12bit-compressed)
  • Nikon D70s (12bit-compressed)
  • Nikon D7100 (14bit-compressed, 12bit-compressed)
  • Nikon D80 (12bit-compressed, 12bit-uncompressed)
  • Nikon D800 (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D800E (14bit-compressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon D90 (12bit-compressed, 12bit-uncompressed)
  • Nikon Df (14bit-compressed, 14bit-uncompressed, 12bit-compressed, 12bit-uncompressed)
  • Nikon E5400 (12bit-uncompressed)
  • Nikon E5700 (12bit-uncompressed)
  • Olympus PEN-F
  • OnePlus One (dng)
  • Panasonic DMC-FZ150 (1:1, 16:9)
  • Panasonic DMC-FZ18 (16:9, 3:2)
  • Panasonic DMC-FZ300 (4:3)
  • Panasonic DMC-FZ50 (16:9, 3:2)
  • Panasonic DMC-G8 (4:3)
  • Panasonic DMC-G80 (4:3)
  • Panasonic DMC-GX80 (4:3)
  • Panasonic DMC-GX85 (4:3)
  • Panasonic DMC-LX3 (1:1)
  • Pentax K-1
  • Pentax K-70
  • Samsung GX20 (dng)
  • Sony DSC-F828
  • Sony DSC-RX10M3
  • Sony DSLR-A380
  • Sony ILCA-68
  • Sony ILCE-6300

White Balance Presets

  • Canon EOS 1200D
  • Canon EOS Kiss X70
  • Canon EOS Rebel T5
  • Canon EOS 1300D
  • Canon EOS Kiss X80
  • Canon EOS Rebel T6
  • Canon EOS 5D Mark IV
  • Canon EOS 5DS
  • Canon EOS 5DS R
  • Canon EOS 750D
  • Canon EOS Kiss X8i
  • Canon EOS Rebel T6i
  • Canon EOS 760D
  • Canon EOS 8000D
  • Canon EOS Rebel T6s
  • Canon EOS 80D
  • Canon EOS M10
  • Canon EOS-1D X Mark II
  • Canon PowerShot G7 X Mark II
  • Fujifilm X-Pro2
  • Fujifilm X-T10
  • Fujifilm X100T
  • Fujifilm X20
  • Fujifilm X70
  • Nikon 1 V3
  • Nikon D5500
  • Olympus PEN-F
  • Pentax K-70
  • Pentax K-S1
  • Pentax K-S2
  • Sony ILCA-68
  • Sony ILCE-6300

Noise Profiles

  • Canon EOS 5DS R
  • Canon EOS 80D
  • Canon PowerShot G15
  • Canon PowerShot S100
  • Canon PowerShot SX50 HS
  • Fujifilm X-T10
  • Fujifilm X-T2
  • Fujifilm X100T
  • Fujifilm X20
  • Fujifilm X70
  • Nikon 1 V3
  • Nikon D5500
  • Olympus E-PL6
  • Olympus PEN-F
  • Panasonic DMC-FZ1000
  • Panasonic DMC-GF7
  • Pentax K-S2
  • Ricoh GR
  • Sony DSC-RX10
  • Sony SLT-A37

New Translations

  • Hebrew
  • Slovenian

Updated Translations

  • Catalan
  • Czech
  • Danish
  • Dutch
  • French
  • German
  • Hungarian
  • Russian
  • Slovak
  • Spanish
  • Swedish

Stellarium 0.12.7 discussion

Thank you Alexander! This will keep a few old computers happy...

G.

Stellarium 0.12.7

Stellarium 0.12.7 has been released today!

Yes, the series 0.12 is LTS for owners of old computers (old with weak graphics cards). This release has ports of some features from the series 1.x/0.15:
- textures for deep-sky objects
- star catalogues
- fixes for MPC search tool in Solar System Editor plugin

November 04, 2016

Aligning Images with Hugin


Aligning Images with Hugin

Easily process your bracketed exposures

Hugin is an excellent tool for aligning and stitching images. In this article, we’ll focus on aligning a stack of images. Aligning a stack of images can be useful for achieving several results, such as:

  • bracketed exposures to make an HDR or fused exposure (using enfuse/enblend), or manually blending the images together in an image editor
  • photographs taken at different focal distances to extend the depth of field, which can be very useful when taking macros
  • photographs taken over a period of time to make a time-lapse movie

For the example images included with this tutorial, the focal length is 12mm and the focal length multiplier is 1. A big thank you to @isaac for providing these images.

You can download a zip file of all of the sample Beach Umbrellas images here:

Download Outdoor_Beach_Umbrella.zip (62MB)

Other sample images to try with this tutorial can be found at the end of the post.

These instructions were adapted from the original forum post by @Carmelo_DrRaw; many thanks to him as well.

We’re going to align these bracketed exposures so we can blend them:

Blend Examples
  1. Select Interface → Expert to set the interface to Expert mode. This will expose all of the options offered by Hugin.

  2. Select the Add images… button to load your bracketed images. Select your images from the file chooser dialog and click Open.

  3. Set the optimal setting for aligning images:

    • Feature Matching Settings: Align image stack
    • Optimize Geometric: Custom parameters
    • Optimize Photometric: Low dynamic range
  4. Select the Optimizer tab.

  5. In the Image Orientation section, select the following variables for each image:

    • Roll
    • X (TrX) [horizontal translation]
    • Y (TrY) [vertical translation]

    You can Ctrl + left mouse click to enable or disable the variables.

    roll x y Hugin

    Note that you do not need to select the parameters for the anchor image:

    Hugin anchor image
  6. Select Optimize now! and wait for the software to finish the calculations. Select Yes to apply the changes.

  7. Select the Stitcher tab.

  8. Select the Calculate Field of View button.

  9. Select the Calculate Optimal Size button.

  10. Select the Fit Crop to Images button.

  11. To have the maximum number of post-processing options, select the following image outputs:

    • Panorama Outputs: Exposure fused from any arrangement
      • Format: TIFF
      • Compression: LZW
    • Panorama Outputs: High dynamic range
      • Format: EXR
    • Remapped Images: No exposure correction, low dynamic range

      Hugin Image Export
  12. Select the Stitch! button and choose a place to save the files. Since Hugin generates quite a few temporary images, save the PTO file in its own folder.

Hugin will output the following images:

  • a tif file blended by enfuse/enblend
  • an HDR image in the EXR format
  • the individual images after remapping and without any exposure correction that you can import into the GIMP as layers and blend manually.

You can see the result of the image blended with enblend/enfuse:

Beach Umbrella Fused

With the output images, you can:

  • edit the enfuse/enblend tif file further in the GIMP or RawTherapee
  • tone map the EXR file in LuminanceHDR
  • manually blend the remapped tif files in the GIMP or PhotoFlow

Image files

  • Camera: Olympus E-M10 mark ii
  • Lens: Samyang 12mm F2.0

Indoor_Guitars

Download Indoor_Guitars.zip (75MB)

  • 5 brackets
  • ±0.3 EV increments
  • f5.6
  • focus at about 1m
  • center priority metering
  • exposed for guitars, bracketed for the sky, outdoor area, and indoor area
  • manual mode (shutter speed recorded in EXIF)
  • shot in burst mode, handheld

Outdoor_Beach_Umbrella

Download Outdoor_Beach_Umbrella.zip (62MB)

  • 3 brackets
  • ±1 EV increments
  • f11
  • focus at infinity
  • center priority metering
  • exposed for the water, bracketed for umbrella and sky
  • manual mode (shutter speed recorded in EXIF)
  • shot in burst mode, handheld

Outdoor_Sunset_Over_Ocean

Download Outdoor_Sunset_Over_Ocean.zip (60MB)

  • 3 brackets
  • ±1 EV increments
  • f11
  • focus at infinity
  • center priority metering
  • exposed for the darker clouds, bracketed for darker water and lighter sky areas and sun
  • manual mode (shutter speed recorded in EXIF)
  • shot in burst mode, handheld

Licensing Information

November 03, 2016

Thu 2016/Nov/03

  • Refactoring C to make Rustification easier

    In SVG, the sizes and positions of objects are not just numeric values or pixel coordinates. You can actually specify physical units ("this rectangle is 5 cm wide"), or units relative to the page ("this circle's X position is at 50% of the page's width, i.e. centered"). Librsvg's machinery for dealing with this is in two parts: parsing a length string from an SVG file into an RsvgLength structure, and normalizing those lengths to final units for rendering.

    How RsvgLength is represented

    The RsvgLength structure used to look like this:

    typedef struct {
        double length;
        char factor;
    } RsvgLength;

    The parsing code would then do things like

    RsvgLength
    _rsvg_css_parse_length (const char *str)
    {
        RsvgLength out;
    
        out.length = ...; /* parse a number with strtod() and friends */
    
        if (next_token_is ("pt")) { /* points */
            out.length /= 72;
            out.factor = 'i';
        } else if (next_token_is ("in")) { /* inches */
            out.factor = 'i';
        } else if (next_token_is ("em")) { /* current font's Em size */
            out.factor = 'm';
        } else if (next_token_is ("%")) { /* percent */
            out.factor = 'p';
        } else {
            out.factor = '\0';
        }

        return out;
    }

    That is, it uses a char for the length.factor field, and then uses actual characters to indicate each different type. This is pretty horrible, so I changed it to use an enum:

    typedef enum {
        LENGTH_UNIT_DEFAULT,
        LENGTH_UNIT_PERCENT,
        LENGTH_UNIT_FONT_EM,
        LENGTH_UNIT_FONT_EX,
        LENGTH_UNIT_INCH,
        LENGTH_UNIT_RELATIVE_LARGER,
        LENGTH_UNIT_RELATIVE_SMALLER
    } LengthUnit;
    
    typedef struct {
        double length;
        LengthUnit unit;
    } RsvgLength;

    We have a nice enum instead of chars, but also, the factor field is now renamed to unit. This ensures that code like

    if (length.factor == 'p')
        ...

    will no longer compile, and I can catch all the uses of "factor" easily. I replace them with unit as appropriate, and check that simply changing each char to the corresponding enum value is the right thing to do.

    When would it not be the right thing? I'm just replacing 'p' for LENGTH_UNIT_PERCENT, right? Well, it turns out that in a couple of hacky places in the rsvg-filters code, that code put an 'n' by hand in foo.factor to really mean, "this foo length value was not specified in the SVG data".

    That pattern seemed highly specific to the filters code, so instead of adding an extra LENGTH_UNIT_UNSPECIFIED, I added an extra field to the FilterPrimitive structures: when they used 'n' for primitive.foo.factor, instead they now have a primitive.foo_specified boolean flag, and the code checks for that instead of essentially monkey-patching the RsvgLength structure.

    Normalizing lengths for rendering

    At rendering time, these RsvgLength with their SVG-specific units need to be normalized to units that are relative to the current transformation matrix. There is a function used all over the code, called _rsvg_css_normalize_length(). This function gets called in an interesting way: one has to specify whether the length in question refers to a horizontal measure, or vertical, or both. For example, an RsvgNodeRect represents a rectangle shape, and it has x/y/w/h fields that are of type RsvgLength. When librsvg is rendering such an RsvgNodeRect, it does this:

    static void
    _rsvg_node_rect_draw (RsvgNodeRect *rect, RsvgDrawingCtx *ctx)
    {
        double x, y, w, h;
    
        x = _rsvg_css_normalize_length (&rect->x, ctx, 'h');
        y = _rsvg_css_normalize_length (&rect->y, ctx, 'v');
    
        w = fabs (_rsvg_css_normalize_length (&rect->w, ctx, 'h'));
        h = fabs (_rsvg_css_normalize_length (&rect->h, ctx, 'v'));
    
        ...
    }

    Again with the fucking chars. Those 'h' and 'v' parameters are because lengths in SVG need to be resolved relative to the width or the height (or both) of something. Sometimes that "something" is the size of the current object's parent group; sometimes it is the size of the whole page; sometimes it is the current font size. The _rsvg_css_normalize_length() function sees if it is dealing with a LENGTH_UNIT_PERCENT, for example, and will pick up page_size->width if the requested value is 'h'orizontal, or page_size->height if it is 'v'ertical. Of course I replaced all of those with an enum.

    This time I didn't find hacky code like the one that would stick an 'n' in the length.factor field. Instead, I found an actual bug; a horizontal unit was using 'w' for "width", instead of 'h' for "horizontal". If these had been enums since the beginning, this bug would probably not be there.

    While I appreciate the terseness of 'h' instead of LENGTH_DIR_HORIZONTAL, maybe we can later refactor groups of coordinates into commonly-used patterns. For example, instead of

    patternx = _rsvg_css_normalize_length (&rsvg_pattern->x, ctx, LENGTH_DIR_HORIZONTAL);
    patterny = _rsvg_css_normalize_length (&rsvg_pattern->y, ctx, LENGTH_DIR_VERTICAL);
    patternw = _rsvg_css_normalize_length (&rsvg_pattern->width, ctx, LENGTH_DIR_HORIZONTAL);
    patternh = _rsvg_css_normalize_length (&rsvg_pattern->height, ctx, LENGTH_DIR_VERTICAL);

    perhaps we can have

    normalize_lengths_for_x_y_w_h (ctx,
                                   &rsvg_pattern->x,
                                   &rsvg_pattern->y,
                                   &rsvg_pattern->width,
                                   &rsvg_pattern->height);

    since those x/y/width/height groups get used all over the place.

    And in Rust?

    This is all so that when that code gets ported to Rust, it will be easier. Librsvg is old code, and it has a bunch of C-isms that either don't translate well to Rust, or are kind of horrible by themselves and could be turned into more robust C — to make the corresponding rustification obvious.
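
    For instance, the refactored C structure above maps almost directly onto Rust. This is only a sketch of what the ported types could look like, not the final librsvg code:

    pub enum LengthUnit {
        Default,
        Percent,
        FontEm,
        FontEx,
        Inch,
        RelativeLarger,
        RelativeSmaller
    }

    pub struct RsvgLength {
        pub length: f64,
        pub unit: LengthUnit
    }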

Searching in GNOME Software

I’ve spent a few days profiling GNOME Software on ARM, mostly out of curiosity but also to help our friends at Endless. I’ve merged a few patches that make the existing --profile code more useful for profiling startup speed. Already there have been some big gains, over 200ms of startup time and 12Mb of RSS, but there’s plenty more that we want to fix to make GNOME Software run really nicely on resource constrained devices.

One of the biggest delays is constructing the search token cache at startup. This is where we look at all the fields of the .desktop files, the AppData files and the AppStream files and split them in a UTF8-sane way into search tokens, adding them into a big hash table after stemming them. We do it with 4 threads by default as it’s trivially parallelizable. With the search cache, when we search we just ask all the applications in the store “do you have this search term” and if so it gets added to the search results and ordered according to how good the match is. This takes 225ms on my super-fast Intel laptop (and much longer on ARM), and this happens automatically the very first time you search for anything in GNOME Software.
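
The idea behind the token cache is roughly the following. This is just an illustrative sketch in Rust (GNOME Software itself is C/GLib, and SearchCache is a made-up type); stemming, UTF-8 splitting and match scoring are omitted:

use std::collections::HashSet;

struct SearchCache {
    tokens: HashSet<String>,
}

impl SearchCache {
    fn new () -> SearchCache {
        SearchCache { tokens: HashSet::new () }
    }

    // Split a field (name, summary, keywords, ...) into tokens and add them
    fn add_field (&mut self, text: &str) {
        for word in text.split_whitespace () {
            self.tokens.insert (word.to_lowercase ());
        }
    }

    // "Do you have this search term?"
    fn matches (&self, term: &str) -> bool {
        self.tokens.contains (&term.to_lowercase ())
    }
}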

At the moment we add (for each locale, including fallbacks) the package name, the app ID, the app name, the app's single-line description, the app keywords and the application's long description. The latter is the multi-paragraph long description that’s typically prose. We spend 90% of the time loading the token cache just splitting and adding the words in the description. As the description is prose, we have to ignore quite a few words, e.g. “and”, “the”, “is” and “can” are some of the most frequent, useless words. Given the nature of the text itself (long, non-technical prose), it doesn’t actually add many useful keywords to the search cache, and the ones that it does add are treated with such low priority that other, more important matches are ordered before them.

My proposal: continue to consume everything else for the search cache, and drop using the description. This means we start way quicker, use less memory, but it does require upstream actually adds some [localized] Keywords=foo;bar;baz in either the desktop file or <keywords> in the AppData file. At the moment most do, especially after I sent ~160 emails to the maintainers that didn’t have any defined keywords in the Fedora 25 Alpha, so I think it’s fairly safe at this point. Comments?

November 02, 2016

Casa Natureza

USA, 2016. In design. A modernist house in the great Brazilian tradition of the modernist house ideal, which...

The Royal Photographic Society Journal


The Royal Photographic Society Journal

Who let us in here?

The Journal of the Photographic Society is the journal for one of the oldest photographic societies in the world: the Royal Photographic Society. First published in 1853, the RPS Journal is the oldest photographic periodical in the world (just edging out the British Journal of Photography by about a year).

So you can imagine my doubt when confronted with an email about using some material from pixls.us for their latest issue…


If the name sounds familiar to anyone it may be from a recent post by Joe McNally who is featured prominently in the September 2016 issue. He was also just inducted as a fellow into the society!

RPS Journal 2016-09 Cover

It turns out my initial doubts were completely unfounded, and they really wanted to run a page based off one of our tutorials. The editors liked the Open Source Portrait tutorial. In particular, the section on using Wavelet Decompose to touch up the skin tones:

RPS Journal 2016-11 PD Yay Mairi!

How cool is that? I actually searched the archive and the only other mention I can find of GIMP (or any other F/OSS) is from a “Step By Step” article written by Peter Gawthrop (Vol. 149, February 2009). I think it’s pretty awesome that we can promote a little more exposure for Free Software alternatives. Especially in more mainstream publications and to a broader audience!

November 01, 2016

Tue 2016/Nov/01

  • Bézier curves, markers, and SVG's concept of directionality

    SVG reference image with markers

    In the first post in this series I introduced SVG markers, which let you put symbols along the nodes of a path. You can use them to draw arrows (arrowhead as an end marker on a line), points in a chart, and other visual effects.

    In that post and in the second one, I started porting some of the code in librsvg that renders SVG markers from C to Rust. So far I've focused on the code and how it looks in Rust vs. C, and on some initial refactorings to make it feel more Rusty. I have casually mentioned Bézier segments and their tangents, and you may have an idea that SVG paths are composed of Bézier curves and straight lines, but I haven't explained what this code is really about. Why not simply walk over all the nodes in the path, and slap a marker at each one?

    Aragorn does not simply walk a degenerate path

    (Sorry. Couldn't resist.)

    SVG paths

    If you open an illustration program like Inkscape, you can draw paths based on Bézier curves.

    Path of Bézier segments, nodes, and control points

    Each segment is a cubic Bézier curve and can be considered independently. Let's focus on the middle segment there.

    Single Bézier segment with control points

    At each endpoint, the tangent direction of the curve is determined by the corresponding control point. For example, at endpoint 1 the curve goes out in the direction of control point 2, and at endpoint 4 the curve comes in from the direction of control point 3. The further away the control points are from the endpoints, the larger "pull" they will have on the curve.

    Tangents at the endpoints

    Let's consider the tangent direction of the curve at the endpoints. What cases do we have, especially when some of the control points are in the same place as the endpoints?

    Directions at the endpoints of Bézier segments

    When the endpoints and the control points are all in different places (upper-left case), the tangents are easy to compute. We just subtract the vectors P2-P1 and P4-P3, respectively.

    When just one of the control points coincides with one of the endpoints (second and third cases, upper row), the "missing" tangent just goes to the other control point.

    In the middle row, we have the cases where both endpoints are coincident. If the control points are both in different places, we just have a curve that loops back. If just one of the control points coincides with the endpoints, the "curve" turns into a line that loops back, and its direction is towards the stray control point.

    Finally, if both endpoints and both control points are in the same place, the curve is just a degenerate point, and it has no tangent directions.
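
    In code, the outgoing direction at the first endpoint can be computed as in this little sketch; outgoing_direction is a hypothetical helper, not librsvg's actual implementation. It uses the first control point that does not coincide with the endpoint, and reports a degenerate segment otherwise:

    fn outgoing_direction (p1: (f64, f64), p2: (f64, f64),
                           p3: (f64, f64), p4: (f64, f64)) -> Option<(f64, f64)> {
        // Use the first control point that is not on top of the endpoint
        for p in &[p2, p3, p4] {
            let (dx, dy) = (p.0 - p1.0, p.1 - p1.1);
            if dx != 0.0 || dy != 0.0 {
                return Some ((dx, dy));
            }
        }

        None // all four points coincide: a degenerate segment with no direction
    }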

    Here we only care about the direction of the curve at the endpoints; we don't care about the magnitude of the tangent vectors. As a side note, Bézier curves have the nice property that they fit completely inside the convex hull of their control points: if you draw a non-crossing quadrilateral using the control points, then the curve fits completely inside that quadrilateral.

    Convex hulls of Bézier segments

    How SVG represents paths

    SVG uses a representation for paths that is similar to that of PDF and its precursor, the PostScript language for printers. There is a pen with a current point. The pen can move in a line or in a curve to another point while drawing, or it can lift up and move to another point without drawing.

    To create a path, you specify commands. These are the four basic commands:

    • move_to (x, y) - Change the pen's current point without drawing, and begin a new subpath.
    • line_to (x, y) - Draw a straight line from the current point to another point.
    • curve_to (x2, y2, x3, y3, x4, y4) - Draw a Bézier curve from the current point to (x4, y4), with the control points (x2, y2) and (x3, y3).
    • close_path - Draw a line from the current point back to the beginning of the current subpath (i.e. the position of the last move_to command).

    For example, this sequence of commands draws a closed square path:

    move_to (0, 0)
    line_to (10, 0)
    line_to (10, 10)
    line_to (0, 10)
    close_path

    If we had omitted the close_path, we would have an open C shape.

    SVG paths provide secondary commands that are built upon those basic ones: commands to draw horizontal or vertical lines without specifying both coordinates, commands to draw quadratic curves instead of cubic ones, and commands to draw elliptical or circular arcs. All of these can be built from, or approximated from, straight lines or cubic Bézier curves.
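
    As a small example of that, a quadratic Bézier curve with points q1, q2, q3 can be represented exactly as a cubic one by raising its degree. This is a generic sketch of the standard degree-elevation formula, not librsvg's code:

    // The cubic's inner control points sit 2/3 of the way from each endpoint
    // towards the quadratic's single control point.
    fn quadratic_to_cubic (q1: (f64, f64), q2: (f64, f64), q3: (f64, f64))
                           -> ((f64, f64), (f64, f64), (f64, f64), (f64, f64)) {
        let c2 = (q1.0 + 2.0 / 3.0 * (q2.0 - q1.0), q1.1 + 2.0 / 3.0 * (q2.1 - q1.1));
        let c3 = (q3.0 + 2.0 / 3.0 * (q2.0 - q3.0), q3.1 + 2.0 / 3.0 * (q2.1 - q3.1));
        (q1, c2, c3, q3)
    }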

    Let's say you have a path with two disconnected sections: move_to (0, 0), line_to (10, 0), line_to (10, 10), move_to (20, 20), line_to (30, 20).

    Bézier path with two open subpaths

    These two sections are called subpaths. A subpath begins with a move_to command. If there were a close_path command somewhere, it would draw a line from the current point back to where the current subpath started, i.e. to the location of the last move_to command.

    Markers at nodes

    Repeating ourselves a bit: for each path, SVG lets you define markers. A marker is a symbol that can be automatically placed at each node along a path. For example, here is a path composed of line_to segments, and which has an arrow-shaped marker at each node:

    Bézier path with markers

    Here, the arrow-shaped marker is defined to be orientable. Its anchor point is at the V shaped concavity of the arrow. SVG specifies the angle at which orientable markers should be placed: given a node, the angle of its marker is the average of the incoming and outgoing angles of the path segments that meet at that node. For example, at node 5 above, the incoming line comes in at 0° (Eastwards) and the outgoing line goes out at 90° (Southwards) — so the arrow marker at 5 is rotated so it points at 45° (South-East).

    In the following picture we see the angle of each marker as the bisection of the incoming and outgoing angles of the respective nodes:

    Bézier path with markers and directions

    The nodes at the beginning and end of subpaths only have one segment that meets that node. So, the marker uses that segment's angle. For example, at node 6 the only incoming segment goes Southward, so the marker points South.
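
    A sketch of that rule in code might look as follows; marker_angle is a hypothetical function, not librsvg's actual one. It bisects by averaging unit vectors so that the wrap-around at 0°/360° is handled:

    fn marker_angle (incoming: Option<f64>, outgoing: Option<f64>) -> f64 {
        match (incoming, outgoing) {
            // Interior node: bisect the two directions (angles in radians)
            (Some (i), Some (o)) => {
                let (x, y) = (i.cos () + o.cos (), i.sin () + o.sin ());
                y.atan2 (x)
            },

            // First or last node of a subpath: only one segment meets it
            (Some (i), None) => i,
            (None, Some (o)) => o,

            // Lone point: no directionality at all
            (None, None) => 0.0
        }
    }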

    Converting paths into Segments

    The path above is simple to define. The path definition is

    move_to (1)
    line_to (2)
    line_to (3)
    line_to (4)
    line_to (5)
    line_to (6)

    (Imagine that instead of those numbers, which are just for illustration purposes, we include actual x/y coordinates.)

    When librsvg turns that path into Segments, they more or less look like

    line from 1, outgoing angle East,       to 2, incoming angle East
    line from 2, outgoing angle South-East, to 3, incoming angle South-East
    line from 3, outgoing angle North-East, to 4, incoming angle North-East
    line from 4, outgoing angle East,       to 5, incoming angle East
    line from 5, outgoing angle South,      to 6, incoming angle South

    Obviously, straight line segments (i.e. from a line_to) have the same angles at the start and the end of each segment. In contrast, curve_to segments can have different tangent angles at each end. For example, if we had a single curved segment like this:

    move_to (1)
    curve_to (2, 3, 4)

    Bézier curve with directions

    Then the corresponding single Segment would look like this:

    curve from 1, outgoing angle North, to 4, incoming angle South-East

    Now you know what librsvg's function path_to_segments() does! It turns a sequence of move_to / line_to / curve_to commands into a sequence of segments, each one with angles at the start/end nodes of the segment.

    Paths with zero-length segments

    Let's go back to our path made up of line segments, the one that looks like this:

    Bézier path with markers

    However, imagine that for some reason the path contains duplicated, contiguous nodes. If we specified the path as

    move_to (1)
    line_to (2)
    line_to (3)
    line_to (3)
    line_to (3)
    line_to (3)
    line_to (4)
    line_to (5)
    line_to (6)

    Then our rendered path would look the same, with duplicated nodes at 3:

    Bézier path with duplicated nodes

    But now when librsvg turns that into Segments, they would look like

      line from 1, outgoing angle East,       to 2, incoming angle East
      line from 2, outgoing angle South-East, to 3, incoming angle South-East
      line from 3, to 3, no angles since this is a zero-length segment
    * line from 3, to 3, no angles since this is a zero-length segment
      line from 3, outgoing angle North-East, to 4, incoming angle North-East
      line from 4, outgoing angle East,       to 5, incoming angle East
      line from 5, outgoing angle South,      to 6, incoming angle South

    When librsvg has to draw the markers for this path, it has to compute the marker's angle at each node. However, in the starting node for the segment marked with a (*) above, there is no angle! In this case, the SVG spec says that you have to walk the path backwards until you find a segment which has an angle, and then forwards until you find another segment with an angle, and then take their average angles and use them for the (*) node. Visually this makes sense: you don't see where there are contiguous duplicated nodes, but you certainly see lines coming out of that vertex. The algorithm finds those lines and takes their average angles for the marker.

    Now you know where our exotic names find_incoming_directionality_backwards() and find_outgoing_directionality_forwards() come from!
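
    Here is a sketch of the backwards search over a simplified segment type; SegmentAngles is made up for illustration, and the real function in librsvg operates on the actual Segment array, so take this only as an outline of the idea:

    struct SegmentAngles {
        incoming: Option<f64>,
        outgoing: Option<f64>
    }

    fn find_incoming_directionality_backwards (segments: &[SegmentAngles],
                                               start: usize) -> Option<f64> {
        // Walk from `start` towards the beginning of the path and return the
        // first incoming angle that is actually defined
        let mut i = start;

        loop {
            if let Some (angle) = segments[i].incoming {
                return Some (angle);
            }

            if i == 0 {
                return None; // nothing before this node has a direction
            }

            i -= 1;
        }
    }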

    Next up: refactoring C to make Rustification easier.

October 31, 2016

Flatpak cross-compilation support: Epilogue

You might remember my attempts at getting an easy to use cross-compilation for ARM applications on my x86-64 desktop machine.

With Fedora 25 approaching, I'm happy to say that the necessary changes to integrate the feature have now rolled into Fedora 25.

For example, to compile the GNU Hello Flatpak for ARM, you would run:

$ flatpak install gnome org.freedesktop.Platform/arm org.freedesktop.Sdk/arm
Installing: org.freedesktop.Platform/arm/1.4 from gnome
[...]
$ sudo dnf install -y qemu-user-static
[...]
$ TARGET=arm ./build.sh

For other applications, add the --arch=arm argument to the flatpak-builder command-line.

This example also works for 64-bit ARM with the architecture name aarch64.

October 28, 2016

Fri 2016/Oct/28

  • Porting a few C functions to Rust

    Last time I showed you my beginnings of porting parts of Librsvg to Rust. In this post I'll do an annotated porting of a few functions.

    Disclaimers: I'm learning Rust as I go. I don't know all the borrowing/lending rules; "Rust means never having to close a socket" is a very enlightening article, although it doesn't tell the whole story. I don't know Rust idioms that would make my code prettier. I am trying to refactor things to be prettier after the initial pass of C-to-Rust. If you know an idiom that would be useful, please mail me!

    So, let's continue with the code to render SVG markers, as before. I'll start with this function:

    /* In C */
    
    static gboolean
    points_equal (double x1, double y1, double x2, double y2)
    {
        return DOUBLE_EQUALS (x1, x2) && DOUBLE_EQUALS (y1, y2);
    }

    I know that Rust supports tuples, and pretty structs, and everything. But so far, the refactoring I've done hasn't led me to really want to use them for this particular part of the library. Maybe later! Anyway, this translates easily to Rust; I already had a function called double_equals() from the last time. The result is as follows:

    /* In Rust */
    
    fn points_equal (x1: f64, y1: f64, x2: f64, y2: f64) -> bool {
        double_equals (x1, x2) && double_equals (y1, y2)
    }

    Pro-tip: text editor macros work very well for shuffling around the "double x1" into "x1: f64" :)

    Remove the return and the semicolon at the end of the line so that the function returns the value of the && expression. I could leave the return in there, but not having it is more Rusty, perhaps. (Rust also has a return keyword, which I think they keep around to allow early exits from functions.)

    This function doesn't get used yet, so the existing tests don't catch it. The first time I ran the Rust compiler on it, it complained of a type mismatch: I had put f64 instead of bool for the return type, which is of course wrong. Oops. Fix it, test again that it builds, done.
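    As for the tuples I mentioned above, a later refactoring could end up looking something like this. This is purely a hypothetical sketch, not code I have actually written:

    /* Hypothetical sketch: the same check, but with points passed as tuples */
    fn points_equal ((x1, y1): (f64, f64), (x2, y2): (f64, f64)) -> bool {
        double_equals (x1, x2) && double_equals (y1, y2)
    }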

    Okay, next!

    But first, a note about how the original Segment struct in C evolved after refactoring in Rust.

    Original in C:

    typedef struct {
        /* If is_degenerate is true,
         * only (p1x, p1y) are valid.
         * If false, all are valid.
         */
        gboolean is_degenerate;
        double p1x, p1y;
        double p2x, p2y;
        double p3x, p3y;
        double p4x, p4y;
    } Segment;

    Straight port to Rust:

    struct Segment {
        /* If is_degenerate is true,
         * only (p1x, p1y) are valid.
         * If false, all are valid.
         */
        is_degenerate: bool,
        p1x: f64, p1y: f64,
        p2x: f64, p2y: f64,
        p3x: f64, p3y: f64,
        p4x: f64, p4y: f64
    }

    After refactoring:

    pub enum Segment {
        Degenerate { // A single lone point
            x: f64,
            y: f64
        },
    
        LineOrCurve {
            x1: f64, y1: f64,
            x2: f64, y2: f64,
            x3: f64, y3: f64,
            x4: f64, y4: f64
        },
    }

    In the C version, and in the original Rust version, I had to be careful to only access the x1/y1 fields if is_degenerate==true. Rust has a very convenient "enum" type, which can work pretty much as a normal C enum, or as a tagged union, as shown here. Rust will not let you access fields that don't correspond to the current tag value of the enum. (I'm not sure if "tag value" is the right way to call it — in any case, if a segment is Segment::Degenerate, the compiler only lets you access the x/y fields; if it is Segment::LineOrCurve, it only lets you access x1/y1/x2/y2/etc.) We'll see the match statement below, which is how enum access is done.

    Next!

    Original in C:

    /* A segment is zero length if it is degenerate, or if all four control points
     * coincide (the first and last control points may coincide, but the others may
     * define a loop - thus nonzero length)
     */
    static gboolean
    is_zero_length_segment (Segment *segment)
    {
        double p1x, p1y;
        double p2x, p2y;
        double p3x, p3y;
        double p4x, p4y;
    
        if (segment->is_degenerate)
            return TRUE;
    
        p1x = segment->p1x;
        p1y = segment->p1y;
    
        p2x = segment->p2x;
        p2y = segment->p2y;
    
        p3x = segment->p3x;
        p3y = segment->p3y;
    
        p4x = segment->p4x;
        p4y = segment->p4y;
    
        return (points_equal (p1x, p1y, p2x, p2y)
                && points_equal (p1x, p1y, p3x, p3y)
                && points_equal (p1x, p1y, p4x, p4y));
    }

    Straight port to Rust:

    /* A segment is zero length if it is degenerate, or if all four control points
     * coincide (the first and last control points may coincide, but the others may
     * define a loop - thus nonzero length)
     */
    fn is_zero_length_segment (segment: Segment) -> bool {
        match segment {
            Segment::Degenerate { .. } => { true },
    
            Segment::LineOrCurve { x1, y1, x2, y2, x3, y3, x4, y4 } => {
                (points_equal (x1, y1, x2, y2)
                 && points_equal (x1, y1, x3, y3)
                 && points_equal (x1, y1, x4, y4))
            }
        }
    }

    To avoid a lot of "segment->this, segment->that, segment->somethingelse", the C version copies the fields from the struct into temporary variables and calls points_equal() with them. The Rust version doesn't need to do this, since we have a very convenient match statement.

    Rust really wants you to handle all the cases that your enum may be in. You cannot do something like "if segment == Segment::Degenerate", because you may forget an "else if" for some case. Instead, the match statement is much more powerful. It is really a pattern-matching engine, and for enums it lets you consider each case separately. The fields inside each case get unpacked like in "Segment::LineOrCurve { x1, y1, ... }" so you can use them easily, and only within that case. In the Degenerate case, I don't use the x/y fields, so I write "Segment::Degenerate { .. }" to avoid having unused variables.

    I'm sure I'll need to change something in the prototype of this function. The plain "segment: Segment" argument in Rust means that the is_zero_length_segment() function will take ownership of the segment. I'll be passing it from an array, but I don't know what shape that code will take yet, so I'll leave it like this for now and change it later.

    This function could use a little test, couldn't it? Just to guard from messing up the coordinate names later if I decide to refactor it with tuples for points, or something. Fortunately, the tests are really easy to set up in Rust:

        #[test]
        fn degenerate_segment_is_zero_length () {
            assert! (super::is_zero_length_segment (degenerate (1.0, 2.0)));
        }
    
        #[test]
        fn line_segment_is_nonzero_length () {
            assert! (!super::is_zero_length_segment (line (1.0, 2.0, 3.0, 4.0)));
        }
    
        #[test]
        fn line_segment_with_coincident_ends_is_zero_length () {
            assert! (super::is_zero_length_segment (line (1.0, 2.0, 1.0, 2.0)));
        }
    
        #[test]
        fn curves_with_loops_and_coincident_ends_are_nonzero_length () {
            assert! (!super::is_zero_length_segment (curve (1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 1.0, 2.0)));
            assert! (!super::is_zero_length_segment (curve (1.0, 2.0, 1.0, 2.0, 3.0, 4.0, 1.0, 2.0)));
            assert! (!super::is_zero_length_segment (curve (1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 1.0, 2.0)));
        }
    
        #[test]
        fn curve_with_coincident_control_points_is_zero_length () {
            assert! (super::is_zero_length_segment (curve (1.0, 2.0, 1.0, 2.0, 1.0, 2.0, 1.0, 2.0)));
        }

    The degenerate(), line(), and curve() utility functions are just to create the appropriate Segment::Degenerate { x, y } without so much typing, and to make the tests more legible.
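    They can be as simple as this minimal sketch (the exact helpers may differ slightly; the point is only that they build the right Segment variants for the tests):

    fn degenerate (x: f64, y: f64) -> Segment {
        Segment::Degenerate { x: x, y: y }
    }

    fn line (x1: f64, y1: f64, x2: f64, y2: f64) -> Segment {
        /* For a plain line, the control points coincide with the endpoints,
         * matching what the path-to-segments code produces for a lineto.
         */
        Segment::LineOrCurve { x1: x1, y1: y1,
                               x2: x2, y2: y2,
                               x3: x1, y3: y1,
                               x4: x2, y4: y2 }
    }

    fn curve (x1: f64, y1: f64, x2: f64, y2: f64, x3: f64, y3: f64, x4: f64, y4: f64) -> Segment {
        Segment::LineOrCurve { x1: x1, y1: y1,
                               x2: x2, y2: y2,
                               x3: x3, y3: y3,
                               x4: x4, y4: y4 }
    }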

    After running cargo test, all the tests pass. Yay! And we didn't have to fuck around with relinking a version specifically for testing, or messing with making static functions available to tests, like we would have had to do in C. Double yay!

    Next!

    Original in C:

    static gboolean
    find_incoming_directionality_backwards (Segment *segments, int num_segments, int start_index, double *vx, double *vy)
    {
        int j;
        gboolean found;
    
        /* "go backwards ... within the current subpath until ... segment which has directionality at its end point" */
    
        found = FALSE;
    
        for (j = start_index; j >= 0; j--) {                                                                 /* 1 */
            if (segments[j].is_degenerate)
                break; /* reached the beginning of the subpath as we ran into a standalone point */
            else {                                                                                           /* 2 */
                if (is_zero_length_segment (&segments[j]))                                                   /* 3 */
                    continue;
                else {
                    found = TRUE;
                    break;
                }
            }
        }
    
        if (found) {                                                                                         /* 4 */
            g_assert (j >= 0);
            *vx = segments[j].p4x - segments[j].p3x;
            *vy = segments[j].p4y - segments[j].p3y;
            return TRUE;
        } else {
            *vx = 0.0;
            *vy = 0.0;
            return FALSE;
        }
    }

    Straight port to Rust:

    fn find_incoming_directionality_backwards (segments: Vec<Segment>, start_index: usize) -> (bool, f64, f64)
    {
        let mut found: bool;
        let mut vx: f64;
        let mut vy: f64;
    
        /* "go backwards ... within the current subpath until ... segment which has directionality at its end point" */
    
        found = false;
        vx = 0.0;
        vy = 0.0;
    
        for j in (0 .. start_index + 1).rev () {                                                            /* 1 */
            match segments[j] {
                Segment::Degenerate { .. } => {
                    break; /* reached the beginning of the subpath as we ran into a standalone point */
                },
    
                Segment::LineOrCurve { x3, y3, x4, y4, .. } => {                                            /* 2 */
                    if is_zero_length_segment (&segments[j]) {                                              /* 3 */
                        continue;
                    } else {
                        vx = x4 - x3;
                        vy = y4 - y3;
                        found = true;
                        break;
                    }
                }
            }
        }
    
        if found {                                                                                           /* 4 */
            (true, vx, vy)
        } else {
            (false, 0.0, 0.0)
        }
    }

    In reality this function returns three values: whether a directionality was found, and if so, the vx/vy components of the direction vector. In C the prototype is like "bool myfunc (..., out vx, out vy)": return the boolean conventionally, and get a reference to the place where we should store the other return values. In Rust, it is simple to just return a 3-tuple.

    (Keen-eyed rustaceans will detect a code smell in the bool-plus-extra-crap return value, and tell me that I could use an Option instead. We'll see what the code wants to look like during the final refactoring!)

    With this code, I need temporary variables vx/vy to store the result. I'll refactor it to return immediately without needing temporaries or a found variable.

    1. We are looking backwards in the array of segments, starting at a specific element, until we find one that satisfies a certain condition. Looping backwards in C in the way done here has the peril that your loop variable needs to be signed, even though array indexes are unsigned: j will go from start_index down to -1, but the loop only runs while j >= 0.

    Rust provides a somewhat strange idiom for backwards numeric ranges. A normal range looks like "0 .. n" and that means the half-open range [0, n). So if we want to count from start_index down to 0, inclusive, we need to rev()erse the half-open range [0, start_index + 1), and that whole thing is "(0 .. start_index + 1).rev ()".

    2. Handling the degenerate case is trivial. Handling the other case is a bit more involved in Rust. We compute the vx/vy values here, instead of after the loop has exited, as at that time the j loop counter will be out of scope. This ugliness will go away during refactoring.

    However, note the match pattern "Segment::LineOrCurve { x3, y3, x4, y4, .. }". This means, "I am only interested in the x3/y3/x4/y4 fields of the enum"; the .. indicates to ignore the others.

    3. Note the ampersand in "is_zero_length_segment (&segments[j])". When I first wrote this, I didn't include the & sign, and Rust complained that it couldn't pass segments[j] to the function because the function would take ownership of that value, while in fact the value is owned by the array. I need to declare the function as taking a reference to a segment ("a pointer"), and I need to call the function by actually passing a reference to the segment, with & to take the "address" of the segment like in C. And if you look at the C version, it also says "&segments[j]"! So, the function now looks like this:

    fn is_zero_length_segment (segment: &Segment) -> bool {
        match *segment {
            ...

    Which means, the function takes a reference to a Segment, and when we want to use it, we de-reference it as *segment.

    While my C-oriented brain interprets this as references and dereferencing pointers, Rust wants me to think in the higher-level terms. A function will take ownership of an argument if it is declared like fn foo(x: Bar), and the caller will lose ownership of what it passed in x. If I want the caller to keep owning the value, I can "lend" it to the function by passing a reference to it, not the actual value. And I can make the function "borrow" the value without taking ownership, because references are not owned; they are just pointers to values.

    It turns out that the three chapters of the Rust book that deal with this are very clear and understandable, and I was irrationally scared of reading them. Go through them in order: Ownership, References and borrowing, Lifetimes. I haven't used the lifetime syntax yet, but it lets you solve the problem of dangling pointers inside live structs. (There is a tiny owning-vs-borrowing toy example right after these numbered notes.)

    4. At the end of the function, we build our 3-tuple result and return it.
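    Since the owning-versus-borrowing distinction from note 3 can feel abstract at first, here is a tiny toy example, unrelated to librsvg, that shows the difference:

    struct Thing {
        value: f64
    }

    fn takes_ownership (t: Thing) -> f64 {
        t.value  /* the caller gives up its Thing; it gets dropped when we return */
    }

    fn borrows (t: &Thing) -> f64 {
        t.value  /* the caller only lends us a reference and keeps ownership */
    }

    fn main () {
        let a = Thing { value: 1.0 };
        let b = Thing { value: 2.0 };

        println! ("{}", borrows (&a));        /* fine; a is still usable below */
        println! ("{}", a.value);

        println! ("{}", takes_ownership (b)); /* b is moved; using b afterwards would not compile */
    }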

    And what if we remove the ugliness from a straight C-to-Rust port? It starts looking like this:

    fn find_incoming_directionality_backwards (segments: Vec<Segment>, start_index: usize) -> (bool, f64, f64)
    {
        /* "go backwards ... within the current subpath until ... segment which has directionality at its end point" */
    
        for j in (0 .. start_index + 1).rev () {
            match segments[j] {
                Segment::Degenerate { .. } => {
                    return (false, 0.0, 0.0); /* reached the beginning of the subpath as we ran into a standalone point */
                },
    
                Segment::LineOrCurve { x3, y3, x4, y4, .. } => {
                    if is_zero_length_segment (&segments[j]) {
                        continue;
                    } else {
                        return (true, x4 - x3, y4 - y3);
                    }
                }
            }
        }
    
        (false, 0.0, 0.0)
    }

    We removed the auxiliary variables by returning early from within the loop. I could remove the continue by negating the result of is_zero_length_segment() and returning the sought value in that case, but in my brain it is easier to read, "is this a zero length segment, i.e. that segment has no directionality? If yes, continue to the previous one, otherwise return the segment's outgoing tangent vector".

    But what if is_zero_length_segment() is the wrong concept? My calling function is called find_incoming_directionality_backwards(): it looks for segments in the array until it finds one with directionality. It happens to know that a zero-length segment has no directionality, but it doesn't really care about the length of segments. What if we called the helper function get_segment_directionality() and it returned false when the segment has none, and a vector otherwise?

    Rust provides the Option pattern just for this. And I'm itching to show you some diagrams of Bézier segments, their convex hulls, and what the goddamn tangents and directionalities actually mean graphically.
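    A minimal sketch of that idea, returning an Option instead of the bool-plus-values tuple (this is not the final code, just the shape it could take):

    /* Sketch only: return the tangent vector at the segment's end point, as
     * used by find_incoming_directionality_backwards(), or None if the
     * segment has no directionality.
     */
    fn get_segment_directionality (segment: &Segment) -> Option<(f64, f64)> {
        match *segment {
            Segment::Degenerate { .. } => None,

            Segment::LineOrCurve { x1, y1, x2, y2, x3, y3, x4, y4 } => {
                if points_equal (x1, y1, x2, y2)
                    && points_equal (x1, y1, x3, y3)
                    && points_equal (x1, y1, x4, y4) {
                    None
                } else {
                    Some ((x4 - x3, y4 - y3))
                }
            }
        }
    }

    The caller would then just match on the Option, or use if let, and return early once it gets Some(vector).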

    But I have to evaluate Outreachy proposals, and if I keep reading Rust docs and refactoring merrily, I'll never get that done.

    Sorry to leave you in a cliffhanger! More to come soon!

Arnold Newman Portraits


Arnold Newman Portraits

The beginnings of "Environmental Portraits"

Anyone who has spent any time around me would realize that I'm particularly fond of portraits. From the wonderful works of Martin Schoeller to the sublime Dan Winters, I am simply fascinated by a well executed portrait. So I thought it would be fun to take a look at some selections from the “father” of environmental portraits - Arnold Newman.

Arnold Newman, Self Portrait, Baltimore MD, 1939 Arnold Newman, Self Portrait, Baltimore MD, 1939

Newman wanted to become a painter, but after only two years he had to drop out of college and take a job shooting portraits in a photo studio in Philadelphia. This experience apparently taught him what he did not want to do with photography…

Luckily it may have started defining what he did want to do with his photography. Namely, his approach to capturing his subjects alongside (or within) the context of the things that made them notable in some way. This would become known as “Environmental Portraiture”. He described it best in an interview for American Photo in 2000:

I didn’t just want to make a photograph with some things in the background. The surroundings had to add to the composition and the understanding of the person. No matter who the subject was, it had to be an interesting photograph. Just to simply do a portrait of a famous person doesn’t mean a thing. 1

Though he felt that the term might be unnecessarily restrictive (and that it possibly overshadowed his other pursuits, including abstractions and photojournalism), there's no denying the impact of the results. Possibly his most famous portrait, of composer Igor Stravinsky, illustrates this wonderfully. The overall tones are almost monotone (flat - pun intended, and likely intentional on behalf of Newman) and are dominated by the stark duality of the white wall with the black piano.

Igor Stravinsky by Arnold Newman Igor Stravinsky, New York, NY, 1946 by Arnold Newman

Newman realized that the open lid of the piano “…is like the shape of a musical flat symbol—strong, linear, and beautiful, just like Stravinsky’s work.” 1 The geometric construction of the image instantly captures the eye and the aggressive crop makes the final composition even more interesting. In this case the crop was a fundamental part of the original composition as shot, but it was not uncommon for him to find new life in images with different crops.

In a similar theme his portraits of both Salvador Dalí and John F. Kennedy show a willingness to allow the crop to bring in different defining characteristics of his subjects. In the case of Dalí it allows an abstraction to hang there mimicking the pose of the artist himself. Kennedy is nearly the only organic form, striking a relaxed pose, while dwarfed by the imposing architecture and hard lines surrounding him.

Salvador Dali, New York, NY, 1951 Salvador Dali, New York, NY, 1951 by Arnold Newman
John F. Kennedy, Washington D.C., 1953 John F. Kennedy, Washington D.C., 1953 by Arnold Newman

He manages to bring the same deft handling of placing his subjects in the context of their work with other photographers as well. His portrait of Ansel Adams shows the photographer just outside his studio with the surrounding wilderness not only visible around the frame but reflected in the glass of the doors behind him (and in the photographer's glasses). Perhaps an indication that the nature of Adams' work was to capture natural scenes through glass?

Ansel Adams, 1975 by Arnold Newman Ansel Adams, 1975 by Arnold Newman

For anyone familiar with the pioneer of another form of photography, Newman's portrait of (the usually camera-shy) Henri Cartier-Bresson will instantly evoke a sense of the artist's candid street images. In it, Bresson appears to take the place of one of his subjects, caught briefly on the street in a fleeting moment. The portrait has an almost spontaneous feeling to it, (again) mirroring the style of the work of its subject.

Henri Cartier-Bresson, New York, NY, 1947 Henri Cartier-Bresson, New York, NY, 1947 by Arnold Newman

Eight years after his portrait of the surrealist painter Dalí, Newman shot another famous (abstraction) artist, Pablo Picasso. This particular portrait is much more intimate and more classically composed, framing the subject as a headshot with little of the surrounding environment, unlike before. I can't help but think that the similar placement of the hand in both images is intentional: a nod to the unconventional views both artists brought to the world.

Pablo Picasso, Vallauris, France, 1954 Pablo Picasso, Vallauris, France, 1954 by Arnold Newman

The eloquent Gregory Heisler had a wonderful discussion about Newman for Atlanta Celebrates Photography at the High Museum in 2008.

Arnold Newman produced an amazing body of work that warrants some time and consideration for anyone interested in portraiture. These few examples simply do not do his collection of portraits justice. If you have a few moments to peruse some amazing images, head over to his website and have a look (I'm particularly fond of his extremely design-oriented portrait of the Chinese-American architect I.M. Pei):

I.M. Pei, New York, NY, 1967 I.M. Pei, New York, NY, 1967 by Arnold Newman

Of historical interest is a look at Newman’s contact sheet for the Stravinsky image showing various compositions and approaches to his subject with the piano. (I would have easily chosen the last image in the first row as my pick.) I have seen the second image in the second row cropped as indicated, which was also a very strong choice. I adore being able to investigate contact sheets from shoots like this - it helps me to humanize these amazing photographers while simultaneously allowing me an opportunity to learn a little about their thought process and how I might incorporate it into my own photography.

Igor Stravinsky contact sheet

To close, a quote from his interview with American Photo magazine back in 2000 that will likely remain relevant to photographers for a long time:

But a lot of photographers think that if they buy a better camera they’ll be able to take better photographs. A better camera won’t do a thing for you if you don’t have anything in your head or in your heart. 1

1 Harris, Mark. “Arnold Newman: The Stories Behind Some of the Most Famous Portraits of the 20th Century.” American Photo, March/April 2000, pp. 36-38

October 26, 2016

Introducing Hundred Dollar Drawings!

Notice: This has been popular! New orders temporarily suspended while I work on backlog. I’ll offer Hundred Dollar Drawings again soon.

example of $100 drawing

Example of a $100 drawing

$100: Tell Nina what to draw* and she’ll draw it. It could be as vague as a word (“quadruped,” “equinox”) or more specific (“a cat driving a car,” “a sun and moon shaking hands”) or even more specific (“a tabby cat driving a convertible sportscar over a cardboard box,” “a sun and moon shaking hands over planet Earth, sky behind them half night and half day”). Nina will email you a photo of the finished drawing, and post it on her blog and social media.

*Specify drawing in Paypal checkout

 

+ $25: We’ll ship you the original art. Sizes will vary but it will be on 8.5 x 11″ or smaller paper.

 

+ $100: I will also make a “Making-of” video of the drawing, such as the above.

 

Example of cleaned-up, reproduction-ready PNG file

Example of cleaned-up, reproduction-ready PNG file

+ $100: Drawing cleaned-up and reproduction-ready for ANY USE YOU WANT!

 

FAQ

Q: What if I don’t like my drawing?
A: Too bad, sorry.

Q: Can you submit a sketch and let me comment for revisions?
A: No. If you want revisions, commission another $100 drawing, and a third, fourth, etc. You can get 10 $100 drawings for less than my usual professional rate.

Q: Can I use the drawing as a commercial logo for my business?
A: Yes.

Q: Can I use the drawing for advertising or other commercial purposes?
A: Yes, anything you want.

Q: Isn’t that crazy cheap for commercial art?
A: Yes. But some of these drawings are also non-commercial. It’s all less stress for me, and I don’t care what happens to the image after I draw it. (Actually I do care – the more it’s used, the better.)

Q: What about copyright?
A: Like most of my work this is Free Culture. There’s effectively no copyright to license or buy. You can do whatever you want with the art you commission, but it’s non-exclusive. I will be posting it on my blog and social media.

Q: What if I want exclusive rights?
A: Then you’ll have to pay more than $100 – same as most professional commercial art of this caliber. Shoot me an email to discuss.

Q: What if Nina finds my drawing instructions abhorrent?
A: I will refund your money and not do the drawing. Or I’ll keep the money and willfully misinterpret your request. That might be more interesting.

Q: Can you do a caricature if I send you a photo?
A: Not very well, but I’ll try. I am not a caricaturist so likenesses not guaranteed to be recognizable or remotely able to fulfill hopes and dreams.


Dual-GPU integration in GNOME

Thanks to the work of Hans de Goede and many others, dual-GPU (aka NVidia Optimus or AMD Hybrid Graphics) support works better than ever in Fedora 25.

On my side, I picked up some work I originally did for Fedora 24, but ended up being blocked by hardware support. This brings better integration into GNOME.

The Details panel in Settings now shows which video cards you have in your (most likely) laptop.

dual-GPU Graphics

The second feature is what Blender users and players of 3D video games have been waiting for: a contextual menu item to launch the application on the more powerful GPU in your machine.

Mooo Powaa!

This demonstration uses a slightly modified GtkGLArea example, which shows which of the GPUs is used to render the application in the title bar.

on the integrated GPU

on the discrete GPU

Behind the curtain

Behind those 2 features, we have a simple D-Bus service, which runs automatically on boot, and stays running to offer a single property (HasDualGpu) that system components can use to detect what UI to present. This requires the "switcheroo" driver to work on the machine in question.

Because of the way applications are launched on the discrete GPU, we cannot currently support D-Bus activated applications, but GPU-heavy D-Bus-integrated applications are few and far between right now.

Future plans

There's plenty more to do in this area, to polish the integration. We might want applications to tell us whether they'd prefer being run on the integrated or discrete GPU, as live switching between renderers is still something that's out of the question on Linux.

Wayland dual-GPU support, as well as support for the proprietary NVidia drivers, will also be worked on, probably by my colleagues though, as the graphics stack really isn't my field.

And if the hardware becomes more widely available, we'll most certainly want to support hardware with hotpluggable graphics support (whether gaming laptop "power-ups" or workstation docks).

Availability

All the patches necessary to make this work are now available in GNOME git (targeted at GNOME 3.24), and backports are integrated in Fedora 25, due to be released shortly.

October 25, 2016

Tue 2016/Oct/25

  • Librsvg gets Rusty

    I've been wanting to learn Rust for some time. It has frustrated me for a number of years that it is quite possible to write GNOME applications in high-level languages, but for the libraries that everything else uses ("the GNOME platform"), we are pretty much stuck with C. Vala is a very nice effort, but to me it never seemed to catch much momentum outside of GNOME.

    After reading this presentation called "Rust out your C", I got excited. It *is* possible to port C code to Rust, small bits at a time! You rewrite some functions in Rust, make them linkable to the C code, and keep calling them from C as usual. The contortions you need to do to make C types accessible from Rust are no worse than for any other language.

    I'm going to use librsvg as a testbed for this.

    Librsvg is an old library. It started as an experiment to write a SAX-based parser for SVG ("don't load the whole DOM into memory; instead, stream in the XML and parse it as we go"), and a renderer with the old libart (what we used in GNOME for 2D vector rendering before Cairo came along). Later it got ported to Cairo, and that's the version that we use now.

    Outside of GNOME, librsvg gets used at Wikimedia to render the SVGs all over Wikipedia. We have gotten excellent bug reports from them!

    Librsvg has a bunch of little parsers for the mini-languages inside SVG's XML attributes. For example, within a vector path definition, "M10,50 h20 V10 Z" means, "move to the coordinate (10, 50), draw a horizontal line 20 pixels to the right, then a vertical line to absolute coordinate 10, then close the path with another line". There are state machines, like the one that transforms that path definition into three line segments instead of the PostScript-like instructions that Cairo understands. There are some pixel-crunching functions, like Gaussian blurs and convolutions for SVG filters.

    It should be quite possible to port those parts of librsvg to Rust, and to preserve the C API for general consumption.

    Every once in a while someone discovers a bug in librsvg that makes it all the way to a CVE security advisory, and it's all due to using C. We've gotten double free()s, wrong casts, and out-of-bounds memory accesses. Recently someone did fuzz-testing with some really pathological SVGs, and found interesting explosions in the library. That's the kind of 1970s bullshit that Rust prevents.

    I also hope that this will make it easier to actually write unit tests for librsvg. Currently we have some pretty nifty black-box tests for the whole library, which essentially take in complete SVG files, render them, and compare the results to a reference image. These are great for smoke testing and guarding against regressions. However, all the fine-grained machinery in librsvg has zero tests. It is always a pain in the ass to make static C functions testable "from the outside", or to make mock objects to provide them with the kind of environment they expect.

    So, on to Rustification!

    I've started with a bit of the code from librsvg that is fresh in my head: the state machine that renders SVG markers.

    SVG markers

    This image with markers comes from the official SVG test suite:

    SVG reference image        with markers

    SVG markers let you put symbols along the nodes of a path. You can use them to draw arrows (arrowhead as an end marker on a line), points in a chart, and other visual effects.

    In the example image above, this is what is happening. The SVG defines four marker types:

    • A purple square that always stays upright.
    • A green circle.
    • A blue triangle that always stays upright.
    • A blue triangle whose orientation depends on the node where it sits.

    The top row, with the purple squares, is a path (the black line) that says, "put the purple-square marker on all my nodes".

    The middle row is a similar path, but it says, "put the purple-square marker on my first node, the green-circle marker on my middle nodes, and the blue-upright-triangle marker on my end node".

    The bottom row has the blue-orientable-triangle marker on all the nodes. The triangle is defined to point to the right (look at the bottommost triangles!). It gets rotated 45 degrees at the middle node, and 90 degrees so it points up at the top-left node.

    This was all fine and dandy, until one day we got a bug about incorrect rendering when there are funny paths. What makes a path funny?

    SVG image with funny        arrows

    For the code that renders markers, a path stops being the "easy" case when it is not obvious how to compute the orientation of its nodes. A node's orientation, when it is well-behaved, is just the average angle of the node's incoming and outgoing lines (or curves). But if a path has contiguous coincident vertices, or stray points that don't have incoming/outgoing lines (imagine a sequence of moveto commands), or curveto commands with Bézier control points that are coincident with the nodes... well, in those cases, librsvg has to follow the spec to the letter, for it says how to handle those things.

    In short, one has to walk the segments away from the node in question, until one finds a segment whose "directionality" can be computed: a segment that is an actual line or curve, not a coincident vertex nor a stray point.

    Librsvg's algorithm has two parts to it. The first part takes the linear sequence of PostScript-like commands (moveto, lineto, curveto, closepath) and turns them into a sequence of segments. Each segment has two endpoints and two tangent directions at those endpoints; if the segment is a line, the tangents point in the same direction as the line. Or, the segment can be degenerate and it is just a single point.

    The second part of the algorithm takes that list of segments for each node, and it does the walking-back-and-forth as described in the SVG spec. Basically, it finds the first non-degenerate segment on each side of a node, and uses the tangents of those segments to find the average orientation of the node.

    The path-to-segments code

    In the C code I had this:

    typedef struct {
        gboolean is_degenerate; /* If true, only (p1x, p1y) are valid.  If false, all are valid */
        double p1x, p1y;
        double p2x, p2y;
        double p3x, p3y;
        double p4x, p4y;
    } Segment;

    P1 and P4 are the endpoints of each Segment; P2 and P3 are, like in a Bézier curve, the control points from which the tangents can be computed.

    This translates readily to Rust:

    struct Segment {
        is_degenerate: bool, /* If true, only (p1x, p1y) are valid.  If false, all are valid */
        p1x: f64, p1y: f64,
        p2x: f64, p2y: f64,
        p3x: f64, p3y: f64,
        p4x: f64, p4y: f64
    }

    Then a little utility function:

    /* In C */
    	    
    #define EPSILON 1e-10
    #define DOUBLE_EQUALS(a, b) (fabs ((a) - (b)) < EPSILON)
    
    
    /* In Rust */
    	    
    const EPSILON: f64 = 1e-10;
    
    fn double_equals (a: f64, b: f64) -> bool {
        (a - b).abs () < EPSILON
    }

    And now, the actual code that transforms a cairo_path_t (a list of moveto/lineto/curveto commands) into a list of segments. I'll interleave C and Rust code with commentary.

    /* In C */
    
    typedef enum {
        SEGMENT_START,
        SEGMENT_END,
    } SegmentState;
    
    static void
    path_to_segments (const cairo_path_t *path,
                      Segment **out_segments,
                      int *num_segments)
    {
    
    
    /* In Rust */
    
    enum SegmentState {
        Start,
        End
    }
    
    fn path_to_segments (path: cairo::Path) -> Vec<Segment> {

    The enum is pretty much the same; Rust prefers CamelCase for enums instead of CAPITALIZED_SNAKE_CASE. The function prototype is much nicer in Rust. The cairo::Path is courtesy of gtk-rs, the budding Rust bindings for GTK+ and Cairo and all that goodness.

    The C version allocates the return value as an array of Segment structs, and returns it in the out_segments argument (... and the length of the array in num_segments). The Rust version returns a mentally easier vector of Segment structs.

    Now, the variable declarations at the beginning of the function:

    /* In C */
    
    {
        int i;
        double last_x, last_y;
        double cur_x, cur_y;
        double subpath_start_x, subpath_start_y;
        int max_segments;
        int segment_num;
        Segment *segments;
        SegmentState state;
    
    
    /* In Rust */
    
    {
        let mut last_x: f64;
        let mut last_y: f64;
        let mut cur_x: f64;
        let mut cur_y: f64;
        let mut subpath_start_x: f64;
        let mut subpath_start_y: f64;
        let mut has_first_segment : bool;
        let mut segment_num : usize;
        let mut segments: Vec<Segment>;
        let mut state: SegmentState;

    In addition to having different type names (double becomes f64), Rust wants you to say when a variable will be mutable, i.e. when it is allowed to change value after its initialization.

    Also, note that in C there's an "i" variable, which is used as a counter. There isn't a similar variable in the Rust version; there, we will use an iterator. And in the Rust version we have a new "has_first_segment" variable; read on to see its purpose.

        /* In C */
    
        max_segments = path->num_data; /* We'll generate maximum this many segments */
        segments = g_new (Segment, max_segments);
        *out_segments = segments;
    
        last_x = last_y = cur_x = cur_y = subpath_start_x = subpath_start_y = 0.0;
    
        segment_num = -1;
        state = SEGMENT_END;
    
    
        /* In Rust */
    	      
        cur_x = 0.0;
        cur_y = 0.0;
        subpath_start_x = 0.0;
        subpath_start_y = 0.0;
    
        has_first_segment = false;
        segment_num = 0;
        segments = Vec::new ();
        state = SegmentState::End;

    No problems here, just initializations. Note that in C we pre-allocate the segments array with a certain size. This is not the actual minimum size that the array will need; it is just an upper bound that comes from the way Cairo represents paths internally (it is not possible to compute the minimum size of the array without walking it first, so we use a good-enough value here that doesn't require walking). In the Rust version, we just create an empty vector and let it grow as needed.

    Note also that the C version initializes segment_num to -1, while the Rust version sets has_first_segment to false and segment_num to 0. Read on!
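    (As an aside: if the pre-allocation from the C version ever turned out to matter, Rust's Vec can also reserve capacity up front. This is just a generic illustration, not something this code needs:)

    /* Generic illustration: reserve room up front, analogous to C's max_segments */
    let upper_bound = 64;
    let mut segments: Vec<Segment> = Vec::with_capacity (upper_bound);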

        /* In C */
    
        for (i = 0; i < path->num_data; i += path->data[i].header.length) {
            last_x = cur_x;
            last_y = cur_y;
    
    
        /* In Rust */
    
        for cairo_segment in path.iter () {
            last_x = cur_x;
            last_y = cur_y;

    We start iterating over the path's elements. Cairo, which is written in C, has a peculiar way of representing paths. path->num_data is the length of the path->data array. That array has elements in path->data[] that can be either commands, or point coordinates. Each command then specifies how many elements you need to "eat" to take in all its coordinates. Thus the "i" counter gets incremented on each iteration by path->data[i].header.length; this is the "how many to eat" magic value.

    The Rust version is more civilized. Get a path.iter() which feeds you Cairo path segments, and boom, you are done. That civilization is courtesy of the gtk-rs bindings. Onwards!

        /* In C */
    
            switch (path->data[i].header.type) {
            case CAIRO_PATH_MOVE_TO:
                segment_num++;
                g_assert (segment_num < max_segments);
    
    
    
        /* In Rust */
    
            match cairo_segment {
                cairo::PathSegment::MoveTo ((x, y)) => {
                    if has_first_segment {
                        segment_num += 1;
                    } else {
                        has_first_segment = true;
                    }

    The C version switch()es on the type of the path segment. It increments segment_num, our counter-of-segments, and checks that it doesn't overflow the space we allocated for the results array.

    The Rust version match()es on the cairo_segment, which is a Rust enum (think of it as a tagged union of structs). The first match case conveniently destructures the (x, y) coordinates; we will use them below.

    If you recall from the above, the C version initialized segment_num to -1. This code for MOVE_TO is the first case in the code that we will hit, and that "segment_num++" causes the value to become 0, which is exactly the index in the results array where we want to place the first segment. Rust *really* wants you to use a usize value to index arrays ("unsigned size"). I could have used a signed size value starting at -1 and then incremented it to zero, but then I would have to cast it to unsigned — which is slightly ugly. So I introduce a boolean variable, has_first_segment, and use that instead. I think I could refactor this to have another state in SegmentState and remove the boolean variable.
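    (That refactoring might be as small as adding a third state; a quick sketch of the idea, which I have not actually done yet:)

    enum SegmentState {
        Initial,  /* no segment pushed yet; would replace has_first_segment == false */
        Start,
        End
    }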

            /* In C */
    
                g_assert (i + 1 < path->num_data);
                cur_x = path->data[i + 1].point.x;
                cur_y = path->data[i + 1].point.y;
    
                subpath_start_x = cur_x;
                subpath_start_y = cur_y;
    
    
             /* In Rust */
    
                    cur_x = x;
                    cur_y = y;
    
                    subpath_start_x = cur_x;
                    subpath_start_y = cur_y;

    In the C version, I assign (cur_x, cur_y) from the path->data[], but first ensure that the index doesn't overflow. In the Rust version, the (x, y) values come from the destructuring described above.

            /* In C */
    
                segments[segment_num].is_degenerate = TRUE;
    
                segments[segment_num].p1x = cur_x;
                segments[segment_num].p1y = cur_y;
    
                state = SEGMENT_START;
    
                break;
    
    
             /* In Rust */
    
                    let seg = Segment {
                        is_degenerate: true,
                        p1x: cur_x,
                        p1y: cur_y,
                        p2x: 0.0, p2y: 0.0, p3x: 0.0, p3y: 0.0, p4x: 0.0, p4y: 0.0 // these are set in the next iteration
                    };
    
                    segments.push (seg);
    
                    state = SegmentState::Start;
                },

    This is where my lack of Rust idiomatic skills really starts to show. In C I put (cur_x, cur_y) in the (p1x, p1y) fields of the current segment, and since it is_degenerate, I'll know that the other p2/p3/p4 fields are not valid — and like any C programmer who wears sandals instead of steel-toed boots, I leave their memory uninitialized. Rust doesn't want me to have uninitialized values EVER, so I must fill a Segment structure and then push() it into our segments vector.

    So, the C version really wants to have a segment_num counter where I can keep track of which index I'm filling. Why is there a similar counter in the Rust version? We will see why in the next case.

            /* In C */
    
            case CAIRO_PATH_LINE_TO:
                g_assert (i + 1 < path->num_data);
                cur_x = path->data[i + 1].point.x;
                cur_y = path->data[i + 1].point.y;
    
                if (state == SEGMENT_START) {
                    segments[segment_num].is_degenerate = FALSE;
                    state = SEGMENT_END;
                } else /* SEGMENT_END */ {
                    segment_num++;
                    g_assert (segment_num < max_segments);
    
                    segments[segment_num].is_degenerate = FALSE;
    
                    segments[segment_num].p1x = last_x;
                    segments[segment_num].p1y = last_y;
                }
    
                segments[segment_num].p2x = cur_x;
                segments[segment_num].p2y = cur_y;
    
                segments[segment_num].p3x = last_x;
                segments[segment_num].p3y = last_y;
    
                segments[segment_num].p4x = cur_x;
                segments[segment_num].p4y = cur_y;
    
                break;
    
    
             /* In Rust */
    
                cairo::PathSegment::LineTo ((x, y)) => {
                    cur_x = x;
                    cur_y = y;
    
                    match state {
                        SegmentState::Start => {
                            segments[segment_num].is_degenerate = false;
                            state = SegmentState::End;
                        },
    
                        SegmentState::End => {
                            segment_num += 1;
    
                            let seg = Segment {
                                is_degenerate: false,
                                p1x: last_x,
                                p1y: last_y,
                                p2x: 0.0, p2y: 0.0, p3x: 0.0, p3y: 0.0, p4x: 0.0, p4y: 0.0  // these are set below
                            };
    
                            segments.push (seg);
                        }
                    }
    
                    segments[segment_num].p2x = cur_x;
                    segments[segment_num].p2y = cur_y;
    
                    segments[segment_num].p3x = last_x;
                    segments[segment_num].p3y = last_y;
    
                    segments[segment_num].p4x = cur_x;
                    segments[segment_num].p4y = cur_y;
                },

    Whoa! But let's take it apart bit by bit.

    First we set cur_x and cur_y from the path data, as usual.

    Then we roll the state machine. Remember we got a LINE_TO. If we are in the state START ("just have a single point, possibly a degenerate one"), then we turn the old segment into a non-degenerate, complete line segment. If we are in the state END ("we were already drawing non-degenerate lines"), we create a new segment and fill it in. I'll probably change the names of those states to make it more obvious what they mean.

    In C we had a preallocated array for "segments", so the idiom to create a new segment is simply "segment_num++". In Rust we grow the segments array as we go, hence the "segments.push (seg)".

    I will probably refactor this code. I don't like it that it looks like

        case move_to:
            start possibly-degenerate segment
    
        case line_to:
            are we in a possibly-degenerate segment?
                yes: make it non-degenerate and remain in that segment...
    
                no: create a new segment, switch to it, and fill its first fields...
    
    	... for both cases, fill in the last fields of the segment

    That is, the "yes" case fills in fields from the segment we were handling in the *previous* iteration, while the "no" case fills in fields from a *new* segment that we created in the present iteration. That asymmetry bothers me. Maybe we should build up the next-segment's fields in auxiliary variables, and only put them in a complete Segment structure once we really know that we are done with that segment? I don't know; we'll see what is more legible in the end.

    The other two cases, for CURVE_TO and CLOSE_PATH, are analogous, except that CURVE_TO handles a bunch more coordinates for the control points, and CLOSE_PATH goes back to the coordinates from the last point that was a MOVE_TO.

    And those tests you were talking about?

    Well, I haven't written them yet! This is my very first Rust code, after reading a pile of getting-started documents.

    Already in the case for CLOSE_PATH I think I've found a bug. It doesn't really create a segment for multi-line paths when the path is being closed. The reftests didn't catch this because none of the reference images with SVG markers uses a CLOSE_PATH command! The unit tests for this path_to_segments() machinery should be able to find this easily, and closer to the root cause of the bug.

    What's next?

    Learning how to link and call that Rust code from the C library for librsvg. Then I'll be able to remove the corresponding C code.

    Feeling safer already?

darktable 2.0.7 released

we're proud to announce the seventh bugfix release for the 2.0 series of darktable, 2.0.7!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.0.7.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

a9226157404538183549079e3b8707c910fedbb669bd018106bdf584b88a1dab  darktable-2.0.7.tar.xz
0b341f3f753ae0715799e422f84d8de8854d8b9956dc9ce5da6d5405586d1392  darktable-2.0.7.dmg

and the changelog as compared to 2.0.6 can be found below.

New Features

  • Filter out some EXIF tags when exporting. Helps keep metadata size below the max limit of ~64KB
  • Support the new Canon EOS 80D {m,s}RAW format
  • Always show rendering intent selector in lighttable view
  • Clear elevation when clearing geo data in map view
  • Temperature module, invert module: add SSE vectorization for X-Trans
  • Temperature module: add keyboard shortcuts for presets

Bugfixes

  • Rawspeed: fixes for building with libjpeg (as opposed to libjpeg-turbo)
  • OpenCL: always use blocking memory transfer between host and device
  • OpenCL: remove bogus static keyword in extended.cl
  • Fix crash with missing configured display profile
  • Histogram: always show aperture with one digit after dot
  • Show if OpenEXR is supported in --version
  • Rawspeed: use a non-deprecated way of getting OSX version
  • Don't show bogus message about local copy when trying to delete physically deleted image

Base Support (newly added or small fixes)

  • Canon EOS 100D
  • Canon EOS 300D
  • Canon EOS 6D
  • Canon EOS 700D
  • Canon EOS 80D (sRaw1, sRaw2)
  • Canon PowerShot A720 IS (dng)
  • Fujifilm FinePix S100FS
  • Nikon D3400 (12bit-compressed)
  • Panasonic DMC-FZ300 (4:3)
  • Panasonic DMC-G8 (4:3)
  • Panasonic DMC-G80 (4:3)
  • Panasonic DMC-GX80 (4:3)
  • Panasonic DMC-GX85 (4:3)
  • Pentax K-70

Base Support (fixes, was broken in 2.0.6, apologies for inconvenience)

  • Nikon 1 AW1
  • Nikon 1 J1 (12bit-compressed)
  • Nikon 1 J2 (12bit-compressed)
  • Nikon 1 J3
  • Nikon 1 J4
  • Nikon 1 S1 (12bit-compressed)
  • Nikon 1 S2
  • Nikon 1 V1 (12bit-compressed)
  • Nikon 1 V2
  • Nikon Coolpix A (14bit-compressed)
  • Nikon Coolpix P330 (12bit-compressed)
  • Nikon Coolpix P6000
  • Nikon Coolpix P7000
  • Nikon Coolpix P7100
  • Nikon Coolpix P7700 (12bit-compressed)
  • Nikon Coolpix P7800 (12bit-compressed)
  • Nikon D1
  • Nikon D3 (12bit-compressed, 12bit-uncompressed)
  • Nikon D3000 (12bit-compressed)
  • Nikon D3100
  • Nikon D3200 (12bit-compressed)
  • Nikon D3S (12bit-compressed, 12bit-uncompressed)
  • Nikon D4 (12bit-compressed, 12bit-uncompressed)
  • Nikon D5 (12bit-compressed, 12bit-uncompressed)
  • Nikon D50
  • Nikon D5100
  • Nikon D5200
  • Nikon D600 (12bit-compressed)
  • Nikon D610 (12bit-compressed)
  • Nikon D70
  • Nikon D7000
  • Nikon D70s
  • Nikon D7100 (12bit-compressed)
  • Nikon E5400
  • Nikon E5700 (12bit-uncompressed)

We were unable to bring back these 4 cameras, because we have no samples.
If anyone reading this owns such a camera, please do consider providing samples.

  • Nikon E8400
  • Nikon E8800
  • Nikon D3X (12-bit)
  • Nikon Df (12-bit)

White Balance Presets

  • Pentax K-70

Noise Profiles

  • Sony DSC-RX10

Translations Updates

  • Catalan
  • German

October 23, 2016

Los Alamos Artists Studio Tour

[JunkDNA Art at the LA Studio Tour] The Los Alamos Artists Studio Tour was last weekend. It was a fun and somewhat successful day.

I was borrowing space in the studio of the fabulous scratchboard artist Heather Ward, because we didn't have enough White Rock artists signed up for the tour.

Traffic was sporadic: we'd have long periods when nobody came by (I was glad I'd brought my laptop, and managed to get some useful development done on track management in pytopo), punctuated by bursts where three or four groups would show up all at once.

It was fun talking to the people who came by. They all had questions about both my metalwork and Heather's scratchboard, and we had a lot of good conversations. Not many of them were actually buying -- I heard the same thing afterward from most of the other artists on the tour, so it wasn't just us. But I still sold enough that I more than made back the cost of the tour. (I hadn't realized, prior to this, that artists have to pay to be in shows and tours like this, so there's a lot of incentive to sell enough at least to break even.) Of course, I'm nowhere near covering the cost of materials and equipment. Maybe some day ...

[JunkDNA Art at the LA Studio Tour]

I figured snacks are always appreciated, so I set out my pelican snack bowl -- one of my first art pieces -- with brownies and cookies in it, next to the business cards.

It was funny how wrong I was in predicting what people would like. I thought everyone would want the roadrunners and dragonflies; in practice, scorpions were much more popular, along with a sea serpent that had been sitting on my garage shelf for a month while I tried to figure out how to finish it. (I do like how it eventually came out, though.)

And then after selling both my scorpions on Saturday, I rushed to make two more on Saturday night and Sunday morning, and of course no one on Sunday had the slightest interest in scorpions. Dave, who used to have a foot in the art world, tells me this is typical, and that artists should never make what they think the market will like; just go on making what you like yourself, and hope it works out.

Which, fortunately, is mostly what I do at this stage, since I'm mostly puttering around for fun and learning.

Anyway, it was a good learning experience, though I was a little stressed getting ready for it and I'm glad it's over. Next up: a big spider for the front yard, before Halloween.

October 20, 2016

CVE-2016-5195

My prior post showed my research from earlier in the year at the 2016 Linux Security Summit on kernel security flaw lifetimes. Now that CVE-2016-5195 is public, here are updated graphs and statistics. Due to their rarity, the Critical bug average has now jumped from 3.3 years to 5.2 years. There aren’t many, but, as I mentioned, they still exist, whether you know about them or not. CVE-2016-5195 was sitting on everyone’s machine when I gave my LSS talk, and there are still other flaws on all our Linux machines right now. (And, I should note, this problem is not unique to Linux.) Dealing with knowing that there are always going to be bugs present requires proactive kernel self-protection (to minimize the effects of possible flaws) and vendors dedicated to updating their devices regularly and quickly (to keep the exposure window minimized once a flaw is widely known).

So, here are the graphs updated for the 668 CVEs known today:

  • Critical: 3 @ 5.2 years average
  • High: 44 @ 6.2 years average
  • Medium: 404 @ 5.3 years average
  • Low: 216 @ 5.5 years average

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

The Electoral College

Episode 4 in a series “Things that are the way they are because of constraints that no longer apply” (or: why we don’t change processes we have invested in that don’t make sense any more)

A US presidential election year is a wondrous thing. There are few places around the world where the campaign for head of state begins in earnest 18 months before the winner will take office. We are now in the home straight, with the final Presidential debate behind us, and election day coming up in 3 weeks, on the Tuesday after the first Monday in November (this year, that’s November 8th). And as with every election cycle, much time will be spent explaining the electoral college. This great American institution is at the heart of how America elects its President. Every 4 years, there are calls to reform it, to move to a different system, and yet it persists. What is it, where did it come from, and why does it cause so much controversy?

In the US, people do not vote for the President directly in November. Instead, they vote for electors – people who represent the state in voting for the President. A state gets a number of electoral votes equal to its number of senators (2) plus its number of US representatives (which varies based on population). Sparsely populated states like Alaska and Montana get 3 electoral votes, while California gets 55. In total, there are 538 electors (435 matching the representatives, 100 matching the senators, and 3 for the District of Columbia), and a majority of 270 electoral votes is needed to secure the presidency. What happens if the candidates fail to get a majority of the electors is outside the scope of this blog post, and in these days of a two-party system, it is very unlikely (although not impossible).

State parties nominate elector lists before the election, and on election day, voters vote for the elector slate corresponding to their preferred candidate. Electoral votes can be awarded differently from state to state. In Nebraska, for example, there are 2 statewide electors for the winner of the statewide vote, and one elector for each congressional district, while in most states, the elector lists are chosen on a winner-take-all basis. After the election, the votes are counted in the local county, and sent to the state's secretary of state for certification.

Once the election results are certified (which can take up to a month), the electors meet in their state in mid December to record their votes for president and vice president. Most states (but not all!) have laws restricting who electors are allowed to vote for, making this mostly a ceremonial position. The votes are then sent to the US senate and the national archivist for tabulation, and cross-referenced before being sent to a joint session of Congress in early January. Congress counts the electoral votes and declares the winner of the presidency. Two weeks later, the new President takes office (those 2 weeks are to allow for the process where no-one gets a majority in the electoral college).

Because it is possible to pile up popular votes by winning heavily in some states while losing narrowly in others, a candidate can win the presidency without winning the popular vote (as George W. Bush did in 2000). In modern elections, the electoral college can result in a huge difference of attention between “safe” states, and “swing” states – the vast majority of campaigning is done in only a dozen or so states, while states like Texas and Massachusetts do not get as much attention.

Why did the founding fathers of the US come up with such a convoluted system? Why not have people vote for the President directly, and have the counts of the states tabulated directly, without the pomp and ceremony of the electoral college vote?

First, think back to 1787, when the US constitution was written. The founders of the state had an interesting set of principles and constraints they wanted to uphold:

  • Big states should not be able to dominate small states
  • Similarly, small states should not be able to dominate big states
  • No political parties existed (and the founding fathers hoped it would stay that way)
  • Added 2016-10-21: Different states wanted to give a vote to different groups of people (and states with slavery wanted slaves to count in the population)
  • In the interests of having presidents who represented all of the states, candidates should have support outside their own state – in an era where running a national campaign was impractical
  • There was a logistical issue of finding out what happened on election day and determining the winner

To satisfy these constraints, a system was chosen which ensured that small states had a proportionally bigger say (by giving an electoral vote for each Senator), but more populous states still had a bigger say overall (by getting an electoral vote for each congressman). In the first elections, electors voted for 2 candidates, of which only one could be from their state, meaning that winning candidates had support from outside their state. The President was the person who got the most electoral votes, and the vice president was the candidate who came second – even if (as was the case with John Adams and Thomas Jefferson) they were not in the same party. It also created the possibility (as happened with Thomas Jefferson and Aaron Burr) that a vice presidential candidate could get the same number of electoral votes as the presidential candidate, resulting in Congress deciding who would be president. The modern electoral college was created with the 12th amendment to the US constitution in 1803.

Another criticism of direct voting is that populist demagogues could be elected by the people, but electors (being of the political classes) could be expected to be better informed, and make better decisions, about who to vote for. Alexander Hamilton wrote in The Federalist #68 that: “It was equally desirable, that the immediate election should be made by men most capable of analyzing the qualities adapted to the station, and acting under circumstances favorable to deliberation, and to a judicious combination of all the reasons and inducements which were proper to govern their choice. A small number of persons, selected by their fellow-citizens from the general mass, will be most likely to possess the information and discernment requisite to such complicated investigations.” These days, most states have laws which require their electors to vote in accordance with the will of the electorate, so that original goal is now mostly obsolete.

A big part of the reason for having over two months between the election and the president taking office (prior to 1934, it was 4 months) is the sheer size of the early United States. The administrative unit for counting, the county, was defined so that every citizen could get to the county courthouse and home in a day’s ride – and after an appropriate amount of time to count the ballots, the results were sent to the state capital for certification, which could take up to 4 days in some states like Kentucky or New York. And then the electors needed to be notified, and attend the official elector count in the state capital. And then the results needed to be sent to Washington, which could take up to 2 weeks, and Congress (which was also having elections) needed to meet to ratify the results. All of these things took time, amplified by the fact that travel happened on horseback.

So at least in part, the electoral college system is based on how long, logistically, it took to bring the results to Washington and have Congress ratify them. The inauguration used to be on March 4th, because that was how long it took for the process to run its course. It was not until 1934 and the 20th amendment to the constitution that the date was moved to January.

Incidentally, two other legally mandated constraints on election day are also based on circumstances that no longer apply. Elections happen on a Tuesday, because of the need not to interfere with two key events: the sabbath (Sunday) and market day (Wednesday). And the elections were held in November primarily so as not to interfere with the harvest. These dates and the reasoning behind them, set in stone in 1845, persist today.

October 19, 2016

FOSDEM SDN & NFV DevRoom Call for Content

We are pleased to announce the Call for Participation in the FOSDEM 2017 Software Defined Networking and Network Functions Virtualization DevRoom!

Important dates:

  • (Extended!) Nov 28: Deadline for submissions
  • Dec 1: Speakers notified of acceptance
  • Dec 5: Schedule published

This year the DevRoom topics will cover two distinct fields:

  • Software Defined Networking (SDN), covering virtual switching, open source SDN controllers, virtual routing
  • Network Functions Virtualization (NFV), covering open source network functions, NFV management and orchestration tools, and topics related to the creation of an open source NFV platform

We are now inviting proposals for talks about Free/Libre/Open Source Software on the topics of SDN and NFV. This is an exciting and growing field, and FOSDEM gives an opportunity to reach a unique audience of very knowledgeable and highly technical free and open source software activists.

This year, the DevRoom will focus on low-level networking and high performance packet processing, network automation of containers and private cloud, and the management of telco applications to maintain very high availability and performance independent of whatever the world can throw at their infrastructure (datacenter outages, fires, broken servers, you name it).

A representative list of the projects and topics we would like to see on the schedule are:

  • Low-level networking and switching: IOvisor, eBPF, XDP, fd.io, Open vSwitch, OpenDataplane, …
  • SDN controllers and overlay networking: OpenStack Neutron, Canal, OpenDaylight, ONOS, Plumgrid, OVN, OpenContrail, Midonet, …
  • NFV Management and Orchestration: Open-O, ManageIQ, Juju, OpenBaton, Tacker, OSM, network management, PNDA.io, …
  • NFV related features: Service Function Chaining, fault management, dataplane acceleration, security, …

Talks should be aimed at a technical audience, but should not assume that attendees are already familiar with your project or how it solves a general problem. Talk proposals can be very specific solutions to a problem, or can be higher level project overviews for lesser known projects.

Please include the following information when submitting a proposal:

  • Your name
  • The title of your talk (please be descriptive, as titles will be listed alongside around 250 others from other projects)
  • Short abstract of one or two paragraphs
  • Short bio (with photo)

The deadline for submissions is November 28th, 2016 (extended from the original November 16th deadline). FOSDEM will be held on the weekend of February 4-5, 2017 and the SDN/NFV DevRoom will take place on Saturday, February 4, 2017 (Updated 2016-10-20: an earlier version incorrectly said the DevRoom was on Sunday). Please use the following website to submit your proposals: https://penta.fosdem.org/submission/FOSDEM17 (you do not need to create a new Pentabarf account if you already have one from past years).

You can also join the devroom’s mailing list, which is the official communication channel for the DevRoom: network-devroom@lists.fosdem.org (subscription page: https://lists.fosdem.org/listinfo/network-devroom)

– The Networking DevRoom 2016 Organization Team

Security bug lifetime

In several of my recent presentations, I’ve discussed the lifetime of security flaws in the Linux kernel. Jon Corbet did an analysis in 2010, and found that security bugs appeared to have roughly a 5 year lifetime. As in, the flaw gets introduced in a Linux release, and then goes unnoticed by upstream developers until another release 5 years later, on average. I updated this research for 2011 through 2016, and used the Ubuntu Security Team’s CVE Tracker to assist in the process. The Ubuntu kernel team already does the hard work of trying to identify when flaws were introduced in the kernel, so I didn’t have to re-do this for the 557 kernel CVEs since 2011.

As the README details, the raw CVE data is spread across the active/, retired/, and ignored/ directories. By scanning through the CVE files to find any that contain the line “Patches_linux:”, I can extract the details on when a flaw was introduced and when it was fixed. For example CVE-2016-0728 shows:

Patches_linux:
 break-fix: 3a50597de8635cd05133bd12c95681c82fe7b878 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2

This means that CVE-2016-0728 is believed to have been introduced by commit 3a50597de8635cd05133bd12c95681c82fe7b878 and fixed by commit 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2. If there are multiple lines, then there may be multiple SHAs identified as contributing to the flaw or the fix. And a “-” is just short-hand for the start of Linux git history.
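
Just to make the bookkeeping concrete, here is a rough sketch of how that scan could be automated (the directory names and the “break-fix:” format come from the tracker as described above; the repository path and helper names are only illustrative, not the actual script behind the graphs):

import glob
import re
import subprocess

LINUX_REPO = "/path/to/linux"   # assumption: a local kernel git checkout

def release_of(sha):
    """Map a commit SHA to the first kernel release tag that contains it."""
    if sha == "-":
        return None   # "-" is short-hand for the start of Linux git history
    name = subprocess.check_output(
        ["git", "-C", LINUX_REPO, "describe", "--contains", sha],
        universal_newlines=True).strip()
    return name.split("~")[0].split("^")[0]   # e.g. "v4.4-rc1~120^2" -> "v4.4-rc1"

lifetimes = {}
for path in glob.glob("active/CVE-*") + glob.glob("retired/CVE-*"):
    with open(path) as f:
        text = f.read()
    if "Patches_linux:" not in text:
        continue
    for introduced, fixed in re.findall(r"break-fix:\s+(\S+)\s+(\S+)", text):
        lifetimes.setdefault(path, []).append((release_of(introduced), release_of(fixed)))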

Then for each SHA, I queried git to find its corresponding release, and made a mapping of release version to release date, wrote out the raw data, and rendered graphs. Each vertical line shows a given CVE from when it was introduced to when it was fixed. Red is “Critical”, orange is “High”, blue is “Medium”, and black is “Low”:

CVE lifetimes 2011-2016

And here it is zoomed in to just Critical and High:

Critical and High CVE lifetimes 2011-2016

The line in the middle is the date from which I started the CVE search (2011). The vertical axis is actually linear time, but it’s labeled with kernel releases (which are pretty regular). The numerical summary is:

  • Critical: 2 @ 3.3 years
  • High: 34 @ 6.4 years
  • Medium: 334 @ 5.2 years
  • Low: 186 @ 5.0 years

This comes out to roughly 5 years lifetime again, so not much has changed from Jon’s 2010 analysis.
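
As a quick sanity check on that claim, the count-weighted mean of the four averages above does land right around five years:

# (count, average lifetime in years) per severity, from the summary above
severities = {"Critical": (2, 3.3), "High": (34, 6.4),
              "Medium": (334, 5.2), "Low": (186, 5.0)}
total = sum(count for count, _ in severities.values())
weighted = sum(count * years for count, years in severities.values()) / total
print(round(weighted, 2))   # roughly 5.2 years across all 556 CVEs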

While we’re getting better at fixing bugs, we’re also adding more bugs. And for many devices that have been built on a given kernel version, there haven’t been frequent (or some times any) security updates, so the bug lifetime for those devices is even longer. To really create a safe kernel, we need to get proactive about self-protection technologies. The systems using a Linux kernel are right now running with security flaws. Those flaws are just not known to the developers yet, but they’re likely known to attackers, as there have been prior boasts/gray-market advertisements for at least CVE-2010-3081 and CVE-2013-2888.

(Edit: see my updated graphs that include CVE-2016-5195.)

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

October 18, 2016

Microwave Time Remainder Temporal Disorientation, a definition

Microwave Time Remainder Temporal Disorientation - definition: The disorientation experienced when the remaining cook time on a microwave display appears to be a feasible but inaccurate time of day.

Example:

1:15 PM: Suzie puts her leftover pork chops in the office microwave, enters 5:00, and hits Start. After 1 minute and 17 seconds, she hears sizzling, opens the microwave door and takes her meal.

1:25 PM: John walks by the microwave, sees 3:43 on the display and thinks: “What!? My life is slipping away from me!”

October 16, 2016

FreeCAD BIM development news

Here goes a little report from the FreeCAD front (http://www.freecadweb.org), showing a couple of things I've been working on in the last weeks. Site: As a follow-up of this post (http://yorik.uncreated.net/guestblog.php?2016=269), several new features have been added to the Arch Site object (http://www.freecadweb.org/wiki/index.php?title=Arch_Site). The most important is that the Site is now a Part object, which means it has a...

October 13, 2016

October 12, 2016

Highlight Bloom and Photoillustration Look


Highlight Bloom and Photoillustration Look

Replicating a 'Lucisart'/Dave Hill type illustrative look

Over in the forums community member Sebastien Guyader (@sguyader) posted a neat workflow for emulating a photo-illustrative look popularized by photographers like Dave Hill where the resulting images often seem to have a sort of hyper-real feeling to them. Some of this feeling comes from a local-contrast boost and slight ‘blooming’ of the lighter tones in the image (though arguably most of the look is due to lighting and compositing of multiple elements).

To illustrate, here are a few representative samples of Dave Hill’s work that reflects this feeling:

Dave Hill Cliff Dave Hill Finishline Lotion Dave Hill Track Dave Hill Nick Saban A collection of example images. ©Dave Hill

A video of Dave presenting on how he brought together the idea and images for the series the first image above is from:

This effect is also popularized in Photoshop® filters such as LucisArt in an effort to attain what some would (erroneously) call an “HDR” effect. Really what they likely mean is a not-so-subtle tone-mapping. In particular, the exaggerated local contrast is often what garners people’s attention.

We had previously posted about a method for exaggerating fine local contrasts and details using the “Freaky Details” method described by Calvin Hollywood. This workflow provides a similar idea but different results that many might find more appealing (it’s not as gritty as the Freaky Details approach).

Sebastien produced some great looking preview images to give folks a feeling for what the process would produce:

BMW IFA-F9 Fashion Woman Images from pixabay (CC0, public domain): Motorcycle, car, woman.

Replicating a “Dave Hill”/“LucisArt” effect

Sebastien’s approach relies only on having the always useful G’MIC plugin for GIMP. The general workflow is to do a high-pass frequency separation, and to apply some effects like local contrast enhancement and some smoothing on the residual low-pass layer. Then recombine the high+low pass layers to get the final result.

  1. Open the image.
  2. Duplicate the base layer.
    Rename it to “Lowpass”.
  3. With the top layer (“Lowpass”) active, open G’MIC.
  4. Use the Photocomix smoothing filter:

    Testing → Photocomix → Photocomix smoothing

    Set the Amplitude to 10. Apply.
    This is to taste, but a good starting place might be around 1% of the image dimensions (so for a 2000px wide image, try using an Amplitude of 20).
  5. Change the “Lowpass” layer blend mode to Grain extract.
  6. Right-Click on the layer and choose New from visible.
    Rename this layer from "Visible" to something more memorable like "Highpass" and set its layer mode to Grain merge.
    Turn off this layer visibility for now.
  7. Activate the “Lowpass” layer and set its layer blend mode back to Normal.
    The rest of the filters are applied to this “Lowpass” layer.
  8. Open G’MIC again.
    Apply the Simple local contrast filter:

    Details → Simple local contrast

    Using:
    • Edge Sensitivity to 25
    • Iterations to 1
    • Paint effect to 50
    • Post-gamma to 1.20
  9. Open G’MIC again.
    Now apply the Graphic novel filter:

    Artistic → Graphic novel

    Using:
    • check the Skip this step checkbox for Apply Local Normalization
    • Pencil size to 1
    • Pencil amplitude to 100-200
    • Pencil smoother sharpness/edge protection/smoothness
      to 0
    • Boost merging options Mixer to Soft light
    • Painter’s touch sharpness to 1.26
    • Painter’s edge protection flow to 0.37
    • Painter’s smoothness to 1.05
  10. Finally, make the “Highpass” layer visible again to bring back the fine details.

Trying It Out!

Let’s walk through the process. Sebastien got his sample images from the website https://pixabay.com, so I thought I would follow suit and find something suitable from there also. After some searching I found this neat image from Jerzy Gorecki licensed Create Commons 0/Public Domain.

Model The base image (link).
From pixabay, (CC0 - Public Domain): Jerzy Gorecki.

Frequency Separation

The first steps (1—7) are to create a High/Low pass frequency separation of the image. If you have a different method for obtaining the separation then feel free to use it. Sebastien uses the Photocomix smoothing filter to create his low-pass layer (other options might be Gaussian blur, bi-lateral smoothing, or even wavelets).

The basic steps to do this are to duplicate the base layer, blur it, then set the layer blend mode to Grain extract and create a new layer from visible. The new layer will be the Highpass (high-frequency) details and should have its layer blend mode set to Grain merge. The original blurred layer is the Lowpass (low-frequency) information and should have its layer blend mode set back to Normal.
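
If it helps to see the arithmetic behind those blend modes, here is a small NumPy sketch of the round trip for a single-channel float image (a Gaussian blur stands in for the Photocomix smoothing; the function names and the 0–1 value range are illustrative assumptions, not part of the GIMP workflow itself):

import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_separation(img, sigma=20.0):
    """Split a float image (values in 0..1) into low- and high-frequency parts."""
    lowpass = gaussian_filter(img, sigma=sigma)   # the blurred "lowpass" layer
    highpass = img - lowpass + 0.5                # Grain extract: base - blur + 0.5
    return lowpass, highpass

def recombine(lowpass, highpass):
    """Grain merge: lowpass + highpass - 0.5 restores the original image."""
    return np.clip(lowpass + highpass - 0.5, 0.0, 1.0)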

So, following Sebastien’s steps, duplicate the base layer and rename the layer to “lowpass”. Then open G’MIC and apply:

Testing → Photocomix → Photocomix smoothing

with an amplitude of around 20. Change this to suit your own taste, but about 1% of the image width is a decent starting point. You’ll now have the base layer and the “lowpass” layer above it that has been smoothed:

Photocomix Smoothing “lowpass” layer after Photocomix smoothing with Amplitude set to 20.

Setting the “lowpass” layer blend mode to Grain extract will reveal the high-frequency details:

Grain Extract HP The high-frequency details visible after setting the blurred “lowpass” layer blend mode to Grain extract.

Now create a new layer from what is currently visible. Either right-click the “lowpass” layer and choose “New from visible” or from the menus:

Layer → New from Visible

Rename this new layer from “Visible” to “highpass” and set its layer blend mode to Grain merge. Select the “lowpass” layer and set its layer blend mode back to Normal.

Layers

The visible result should be back to what your starting image looked like. The rest of the steps for this tutorial will operate on the “lowpass” layer. You can leave the “highpass” layer visible during the rest of the steps to see what your results will look like.

Modifying the Low-Frequency Layer

These next steps will modify the underlying low-frequency image information to smooth it out and give it a bit of a contrast boost. First the “Simple local contrast” filter will separate tones and do some preliminary smoothing, while the “Graphic novel” filter will provide a nice boost to light tones along with further smoothing.

Simple Local Contrast

On the “lowpass” layer, open G’MIC and find the “Simple local contrast” filter:

Details → Simple local contrast

Change the following settings:

  • Edge Sensitivity to 25
  • Iterations to 1
  • Paint effect to 50
  • Post-gamma to 1.20

This will smooth out overall tones while simultaneously providing a nice local contrast boost. This is the step that causes small lighting details to “pop”:

Simple Local Contrast After applying the “Simple local contrast” filter.
(Click to compare to the original image)

The contrast increase provides a nice visual punch to the image. The addition of the “Graphic novel” filter will push the overall image much closer to a feeling of a photo-illustration.

Graphic Novel

Still on the “lowpass” layer, re-open G’MIC and open the “Graphic Novel” filter:

Artistic → Graphic novel

Change the following settings:

  • check the Skip this step checkbox for Apply Local Normalization
  • Pencil size to 1
  • Pencil amplitude to 100-200
  • Pencil smoother sharpness/edge protection/smoothness
    to 0
  • Boost merging options Mixer to Soft light
  • Painter’s touch sharpness to 1.26
  • Painter’s edge protection flow to 0.37
  • Painter’s smoothness to 1.05

The intent with this filter is to further smooth the overall tones, simplify details, and to give a nice boost to the light tones of the image:

Graphic Novel After applying the “Graphic novel” filter.
(Click to compare to the local contrast result)

The effect at 100% opacity can be a little strong. If so, simply adjust the opacity of the “lowpass” layer to taste. In some cases it would probably be desirable to mask areas you don’t want the effect applied to.

I’ve included the GIMP .xcf.bz2 file of this image while I was working on it for this article. You can download the file here (34.9MB). I did each step on a new layer so if you want to see the results of each effect step-by-step, simply turn that layer on/off:

Sample layers Example XCF layers

Finally, a great big Thank You! to Sebastien Guyader (@sguyader) for sharing this with everyone in the community!

A G’MIC Command

Of course, this wouldn’t be complete if someone didn’t come along with the direct G’MIC commands to get a similar result! And we can thank Iain Fergusson (@Iain) for coming up with the commands:

--gimp_anisotropic_smoothing[0] 10,0.16,0.63,0.6,2.35,0.8,30,2,0,1,1,0,1

-sub[0] [1]

-simplelocalcontrast_p[1] 25,1,50,1,1,1.2,1,1,1,1,1,1
-gimp_graphic_novelfxl[1] 1,2,6,5,20,0,1,100,0,1,0,0.78,1.92,0,0,2,1,1,1,1.26,0.37,1.05
-add
-c 0,255

October 11, 2016

New Mexico LWV Voter Guides are here!

[Vote button] I'm happy to say that our state League of Women Voters Voter Guides are out for the 2016 election.

My grandmother was active in the League of Women Voters most of her life (at least after I was old enough to be aware of such things). I didn't appreciate it at the time -- and I also didn't appreciate that she had been born in a time when women couldn't legally vote, and the 19th amendment, giving women the vote, was ratified just a year before she reached voting age. No wonder she considered the League so important!

The LWV continues to work to extend voting to people of all genders, races, and economic groups -- especially important in these days when the Voting Rights Act is under attack and so many groups are being disenfranchised. But the League is important for another reason: local LWV chapters across the country produce detailed, non-partisan voter guides for each major election, which are distributed free of charge to voters. In many areas -- including here in New Mexico -- there's no equivalent of the "Legislative Analyst" who writes the lengthy analyses that appear on California ballots weighing the pros, cons and financial impact of each measure. In the election two years ago, not that long after Dave and I moved here, finding information on the candidates and ballot measures wasn't easy, and the LWV Voter Guide was by far the best source I saw. It's the main reason I joined the League, though I also appreciate the public candidate forums and other programs they put on.

LWV chapters are scrupulous about collecting information from candidates in a fair, non-partisan way. Candidates' statements are presented exactly as they're received, and all candidates are given the same specifications and deadlines. A few candidates ignored us this year and didn't send statements despite repeated emails and phone calls, but we did what we could.

New Mexico's state-wide voter guide -- the one I was primarily involved in preparing -- is at New Mexico Voter Guide 2016. It has links to guides from three of the four local LWV chapters: Los Alamos, Santa Fe, and Central New Mexico (Albuquerque and surrounding areas). The fourth chapter, Las Cruces, is still working on their guide and they expect it soon.

I was surprised to see that our candidate information doesn't include links to websites or social media. Apparently that's not part of the question sheet they send out, and I got blank looks when I suggested we should make sure to include that next time. The LWV does a lot of important work but they're a little backward in some respects. That's definitely on my checklist for next time, but for now, if you want a candidate's website, there's always Google.

I also helped a little on Los Alamos's voter guide, making suggestions on how to present it on the website (I maintain the state League website but not the Los Alamos site), and participated in the committee that wrote the analysis and pro and con arguments for our contentious charter amendment proposal to eliminate the elective office of sheriff. We learned a lot about the history of the sheriff's office here in Los Alamos, and about state laws and insurance rules regarding sheriffs, and I hope the important parts of what we learned are reflected in both sides of the argument.

The Voter Guides also have a link to a Youtube recording of the first Los Alamos LWV candidate forum, featuring NM House candidates, DA, Probate judge and, most important, the debate over the sheriff proposition. The second candidate forum, featuring US House of Representatives, County Council and County Clerk candidates, will be this Thursday, October 13 at 7 (refreshments at 6:30). It will also be recorded thanks to a contribution from the AAUW.

So -- busy, busy with election-related projects. But I think the work is mostly done (except for the one remaining forum), the guides are out, and now it's time to go through and read the guides. And then the most important part of all: vote!

October 10, 2016

Visualizing the raw (sensor) highlight clipping

Have you ever over-exposed your images? Have you ever noticed that your images look flat and dull after you apply negative exposure compensation? Even though the over/underexposed warning says there is no overexposure? Have you ever wondered what is going on? Read on.

the Problem

First, why would you want to know which pixels are overexposed, clipped?

Consider this image:
rawoverexposed-0

 … Why is the sky so white? Why is the image so flat and dull?

Let's enable the overexposure indicator...
rawoverexposed-0.5
Nope, it does not indicate any part of the image to be overexposed.

Now, let's see what happens if we disable the highlight reconstruction module.
rawoverexposed-1
Eww, the sky is pink!

An experienced person knows that this means the image was overexposed when it was taken, and that it looks so dull and flat because negative exposure compensation was applied via the exposure module.

Many of you have sometimes unintentionally overexposed your images. As you know, it is hard to figure out exactly which part of the image is overexposed, clipped.

But. What if it is actually very easy to figure out?

I'll show you the end result, what darktable's new, raw-based overexposure indicator says about that image, and then we will discuss details:
rawoverexposed-2

digital image processing, mathematical background

While modern sensors capture an astonishing dynamic range, they still can capture only so much.
A sensor consists of millions of pixel sensors, each containing a photodetector and an active amplifier. Each of these pixels could be thought of as a bucket: there is some upper limit of photons it can capture.
Which means, there is some point, above which the sensor can not distinguish how much light it received.

Now, the pixel captured some photons, and the pixel now has some charge that can be measured. But it is an analog value. For better or worse, all modern cameras and software operate in the digital world. Thus, the next step is the conversion of that charge into a digital signal via an ADC (analog-to-digital converter).

Most sane cameras that can save raw files store those pixel values as an array of unsigned integers.
What can we tell about those values?

  • Sensor readout results in some noise (black noise + readout noise), meaning that even with the shortest exposure, the pixels will not have zero value.
    That is the black level.
    For Canon it is often between \(\mathbf{2000}\) and \(\mathbf{2050}\).
  • Due to the non-magical nature of photosensitive pixels and ADC, there is some upper limit on the value each pixel can have. That limit may be different for each pixel, be it due to the different CFA color, or just manufacturing tolerances. Most modern Canon cameras produce 14-bit raw images, which means each pixel may have a value between \(\mathbf{0}\) and \(\mathbf{{2^{14}}-1}\) (i.e. \(\mathbf{16383}\)).
    So the lowest maximal value that still can be represented by all the pixels is called the white level.
    For Canon it is often between \(\mathbf{13000}\) and \(\mathbf{16000}\).
  • Both of these parameters also often depend on ISO.

why is the white level so low? (you can skip this)

Disclaimer: this is just my understanding of the subject. my understanding may be wrong.

You may ask why the white level is less than the maximal value that can be stored in the raw file (that is, e.g. for 14-bit raw images, less than \(\mathbf{{2^{14}}-1}\) (i.e. \(\mathbf{16383}\)))?
I have intentionally skipped over one more component of the sensor - an active amplifier.
It is the second most important component of the sensor (after the photodetectors themselves).

The saturation point of the photodetector is much lower than the saturation point of the ADC. Also, due to the non-magical nature of ADC, it has a very specific voltage nominal range \(\mathbf{V_{RefLow}..V_{RefHi}}\), outside of which it can not work properly.
E.g. photodetector may output an analog signal with an amplitude of (guess, general ballpark, not precise values) \(\mathbf{1..10}\) \(\mathbf{mV}\), while the ADC expects input analog signal to have an amplitude of \(\mathbf{1..10}\) \(\mathbf{V}\).
So if we pass the charge from the photodetector directly to the ADC, at best we will get a very faint digital signal, with a much smaller magnitude than what the ADC can produce, and thus with a very bad (low) SNR.
Also see: Signal conditioning.

Thus, when quantizing the non-amplified analog signal, we lose data which can not be recovered later.
Which means the analog signal must be amplified, to match the photodetector's output voltage range to the ADC's expected input voltage range. That is done by an amplifier. There may be more than one amplifier, and more than one amplification step.

Okay, what if we amplify the analog signal from the photodetector by three orders of magnitude (a factor of \(\mathbf{1000}\))? I.e. we had \(\mathbf{5}\) \(\mathbf{mV}\), but now have \(\mathbf{5}\) \(\mathbf{V}\). At first all seems in order, the signal is within the expected range.
But we need to take into account one important detail: output voltage of a photodetector depends on the amount of light it received, and the exposure time.
So for low light and a short exposure it will output the minimal voltage (in our example, \(\mathbf{1}\) \(\mathbf{mV}\)), and if we amplify that, we get \(\mathbf{1}\) \(\mathbf{V}\), which is the \(\mathbf{V_{RefLow}}\) of ADC.
Similarly, for bright light and a long exposure it will output the maximal voltage (in our example, \(\mathbf{10}\) \(\mathbf{mV}\)), and if we amplify that, we get \(\mathbf{10}\) \(\mathbf{V}\), which is, again, the \(\mathbf{V_{RefHi}}\) of ADC.

So there are obvious cases where with constant amplification factor we get bad signal range. Thus, we need multiple amplifiers, each of which with different gain, and we need to be able to toggle them separately, to control the amplification in finer steps.

As you may have guessed by now, the signal amplification is the factor that results in the white level being at the e.g. \(\mathbf{16000}\), or some other value. Basically, this amplification is how the ISO level is implemented in hardware.

TL;DR, so why?

Because of the analog gain that was applied to the data to bring it into the nominal range without blowing out (clipping, pushing above \(\mathbf{16383}\)) the usable highlights. The gain is applied in finite, discrete steps, so it may be impossible to apply a finer gain that would put the white level closer to \(\mathbf{16383}\).

This is a very brief summary, for a detailed write-up i can direct you to the Magic Lantern's CMOS/ADTG/Digic register investigation on ISO.

the first steps of processing a raw file

All right, we got a sensor readout – an array of unsigned integers – how do we get from that to an image, that can be displayed?

  1. Convert the values from integer (most often 16-bit unsigned) to float (not strictly required, but best for precision reasons; we use 32-bit float)
  2. Subtract black level
  3. Normalize the pixels so that the white level is \(\mathbf{1.0}\)
    Simplest way to do that is to divide each value by \(\mathbf{({white level} - {black level})}\)
  4. These 3 steps are done by the raw black/white point module.

  5. Next, the white balance is applied. It is as simple as multiplying each separate CFA color by a specific coefficient. This so-called white balance vector can be acquired from several places:
    1. Camera may store it in the image's metadata.
      (That is what preset = camera does)
    2. If the color matrix for a given sensor is known, an approximate white balance (that is, which will only take the sensor into account, but will not adjust for illuminant) can be computed from that matrix.
      (That is what preset = camera neutral does)
    3. Taking a simple arithmetic mean (average) of each of the color channels may give a good-enough inverted white-balance multiplier.
      IMPORTANT: the computed white balance will be good only if, on average, that image is gray.
      That is, it will correct white balance so that the average color becomes gray, so if average color is not neutrally gray (e.g. red), the image will look wrong.

      (That is what preset = spot white balance does)
    4. etc (user input, camera wb preset, ...)

    As you remember, in the previous step, we have scaled the data so that the white level is \(\mathbf{1.0}\), for every color channel.
    White balance coefficients scale each channel separately. For example, an example white balance vector may be \({\begin{pmatrix} 2.0 , 0.9 , 1.5 \end{pmatrix}}^{T}\). That is, Red channel will be scaled by \(\mathbf{2.0}\), Green channel will be scaled by \(\mathbf{0.9}\), and Blue channel will be scaled by \(\mathbf{1.5}\).
    In practice, however, the white balance vector is most often normalized so that the Green channel multiplier is \(\mathbf{1.0}\).

  6. That step is done by the white balance module.

  7. And last, highlight handling.
    As we know from definition, all the data values which are bigger than the white level are unusable, clipped. Without / before white balance correction, it is clear that all the values which are bigger than \(\mathbf{1.0}\) are the clipped values, and they are useless without some advanced processing.

    Now, what did the white balance correction do to the white levels? Correct, now, the white levels will be: \(\mathbf{2.0}\) for Red channel, \(\mathbf{0.9}\) for Green channel, and \(\mathbf{1.5}\) for Blue channel.

    As we all know, the white color is \({\begin{pmatrix} 1.0 , 1.0 , 1.0 \end{pmatrix}}^{T}\). But the maximal values (the per-channel white levels) are \({\begin{pmatrix} 2.0 , 0.9 , 1.5 \end{pmatrix}}^{T}\), so our "white" will not be white, but, as experienced users may guess, purple-ish. What do we do?

    Since for white color, all the components have exactly the same value – \(\mathbf{1.0}\) – we just need to make sure that the maximal values are the same value. We can not scale each of the channels separately, because that would change white balance. We simply need to pick the minimal white level – \(\mathbf{0.9}\) in our case – and clip all the data to that level. I.e. all the data which had a value of less than or equal to that threshold will retain the same value; and all the pixels with a value greater than the threshold will have the value of the threshold – \(\mathbf{0.9}\).

    Alternatively, one could try to recover these highlights, see highlight reconstruction module; and
    Color Reconstruction
    (though this last one only guesses color based on surroundings, does not actually reconstruct the channels, and is a bit too late in the pipe).

    If you don't do highlight handling, you get what you have seen in the third image in this article - ugly, unnaturally looking, discolored, highlights.

Note: you might know that there are more steps required (namely: demosaicing, base curve, input color profile, output color profile; there may be others.), but for the purpose of detection and visualization of highlight clipping, they are unimportant, so i will not talk about them here.

From that list, it should now be clear that all the pixels which have a value greater than the minimal per-channel white level right before the highlight reconstruction module, are the clipped pixels.
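
A small NumPy sketch of that per-channel test, operating directly on the raw values (the variable names and the CFA-index array are illustrative; the real indicator also has to undo the geometric corrections, as described next):

import numpy as np

def raw_clipping_mask(raw, cfa, black, white, wb):
    """Flag raw pixels that will be clipped after white balance.

    raw   -- 2D array of raw sensor values (unsigned integers)
    cfa   -- 2D array of the same shape: each pixel's CFA color (0=R, 1=G, 2=B)
    black -- black level, white -- white level
    wb    -- per-channel white balance multipliers, e.g. (2.0, 0.9, 1.5)
    """
    wb = np.asarray(wb, dtype=float)
    # After normalization and white balance, a channel clips once it exceeds min(wb).
    # Back-transform that threshold into raw values, per CFA color:
    thresholds = black + (white - black) * wb.min() / wb
    return raw > thresholds[cfa]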

the Solution

But a technical problem arises: we need to visualize the clipped pixels on top of the fully processed image, while we only know whether the pixel is clipped or not in the input buffer of highlight reconstruction module.
And we can not visualize clipping in the highlight reconstruction module itself, because the data is still mosaiced, and other modules will be applied after that anyway.

The problem was solved by back-transforming the given white balance coefficients and the white level, and then comparing the values of the original raw buffer produced by the camera against that threshold; and by back-transforming output pixel coordinates through all the geometric distortions to figure out which pixel in the original input buffer needs to be checked.

This seems to be the most flexible solution so far:

  • We can visualize overexposure on top of final, fully-processed image. That means, no module messes with the visualization
  • We do sample the original input buffer. That means we can actually know whether a given pixel is clipped or not

Obviously, this new raw-based overexposure indicator depends on the specific sensor pattern.
The good news is, it just works for both Bayer and X-Trans sensors!

modes of operation

rawoverexposed-ui

The raw-based overexposure indicator has 3 different modes of operation:

  1. mark with CFA color

    • If the clipped pixel was Red, a Red pixel will be displayed.
    • If the clipped pixel was Green, a Green pixel will be displayed.
    • If the clipped pixel was Blue, a Blue pixel will be displayed.

    Sample output, X-Trans image.
    There are some Blue, Green and Red pixels clipped (counting to the centre)
    rawoverexposed-xtrans-mode-cfa

  2. mark with solid color

    • If the raw pixel was clipped, it will be displayed in a given color (one of: red, green, blue, black)

    Same area, with color scheme = black.
    The more black dots the area contains, the more clipped pixels there are in that area.
    rawoverexposed-xtrans-mode-solid-black

  3. false color

    • If the clipped pixel was Red, the Red channel for current pixel will be set to \(\mathbf{0.0}\)
    • If the clipped pixel was Green, the Green channel for current pixel will be set to \(\mathbf{0.0}\)
    • If the clipped pixel was Blue, the Blue channel for current pixel will be set to \(\mathbf{0.0}\)

    Same area.
    rawoverexposed-xtrans-mode-falsecolor

understanding raw overexposure visualization

So, let's go back to the fourth image in this article:
rawoverexposed-2
This is mode = mark with CFA color.

What does it tell us?

  • Most of the sky is indeed clipped.
  • In the top-right portion of the image, only the Blue channel is clipped.
  • In the top-left portion of the image, Blue and Red channels are clipped.
  • No Green channel clipping.

Now you know that, you:

  1. Will know better than to over-expose so much next time :) (hint to myself, mostly)
  2. Could try to recover from clipping a bit

    1. either by not applying negative exposure compensation in exposure module
    2. or using highlight reconstruction module with mode = reconstruct in LCh
    3. or using highlight reconstruction module with mode = reconstruct color, though it is known to produce artefacts
    4. or using color reconstruction module

an important note about sensor clipping vs. color clipping

By default, the module visualizes the color clipping, NOT the sensor clipping.
The colors may be clipped, while the sensor is still not clipping.
Example:
rawoverexposed-2

Let's enable indicator...
rawoverexposed-3
The visualization says that Red and Blue channels are clipped.

But now let's disable the white balance module, while keeping indicator active:
rawoverexposed-4

Interesting, isn't it? So actually there is no sensor-level clipping, but the image is still overexposed, because after the white balance is applied, the channels do clip.

While there, i wanted to show highlight reconstruction module, mode = reconstruct in LCh.
If you ever used it, you know that it used to produce pretty useless results.
But not anymore:
highlight-reconstruction-reconstruct-in-lch
If you compare this with the first version of this image in this section, you can see that the highlights, although clipped, are actually somewhat reconstructed, so the image is not so flat and dull; there is some gradient to it.

Too boring? :)

With a sufficiently exposed image (or just set the black levels to \(\mathbf{0}\) and the white level to \(\mathbf{1}\) in the raw black/white point module, and clipping threshold = \(\mathbf{0.0}\), mode = mark with CFA color in the raw overexposure indicator), and a lucky combination of image size, output size and zoom level, you get a familiar-looking pattern :)
rawoverexposed-bayer-pattern
That is basically an artefact of the downscaling for display. Though, with enough feedback, this might actually get properly implemented as a feature...

Now, what if we enable the lens correction module? :)
rawoverexposed-bayer-pattern-and-lens-correction
So we could even create glitch-art with this thing!
Technically, that is some kind of visualization of lens distortion.

October 08, 2016

Bullet 2.85 released : pybullet and Virtual Reality support for HTC Vive and Oculus Rift

We have been making a lot of progress in higher quality physics simulation for robotics, games and visual effects. To make our physics simulation easier to use, especially for roboticists and machine learning experts, we created Python bindings; see examples/pybullet. In addition, we added Virtual Reality support for the HTC Vive and Oculus Rift using the OpenVR SDK. See the attached YouTube movie. Updated documentation will be added soon, as well as possible show-stopper bug-fixes, so the actual release tag may bump up to 2.85.x. Download the release from GitHub here.
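
For a quick taste of the new Python bindings, a minimal session might look something like the sketch below (the URDF file names are assumptions taken from the Bullet example data; check the examples/pybullet directory for the real ones):

import pybullet as p

# Connect to the physics server: p.GUI opens a window, p.DIRECT runs headless.
p.connect(p.DIRECT)
p.setGravity(0, 0, -10)

# Load a ground plane and a robot (file names assumed from the Bullet example data).
plane = p.loadURDF("plane.urdf")
robot = p.loadURDF("r2d2.urdf")

# Step the simulation for two seconds at the default 240 Hz timestep.
for _ in range(480):
    p.stepSimulation()

print(p.getBasePositionAndOrientation(robot))
p.disconnect()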


October 05, 2016

Play notes, chords and arbitrary waveforms from Python

Reading Stephen Wolfram's latest discussion of teaching computational thinking (which, though I mostly agree with it, is more an extended ad for Wolfram Programming Lab than a discussion of what computational thinking is and why we should teach it) I found myself musing over ideas for future computer classes for Los Alamos Makers. Students, and especially kids, like to see something other than words on a screen. Graphics and games are good, or robotics when possible ... but another fun project a novice programmer can appreciate is music.

I found myself curious what you could do with Python, since I hadn't played much with Python sound generation libraries. I did discover a while ago that Python is rather bad at playing audio files, though I did eventually manage to write a music player script that works quite well. What about generating tones and chords?

A web search revealed that this is another thing Python is bad at. I found lots of people asking about chord generation, and a handful of half-baked ideas that relied on long-obsolete packages or external programs. But none of it actually worked, at least without requiring Windows or relying on larger packages like fluidsynth (which looked worth exploring some day when I have more time).

Play an arbitrary waveform with Pygame and NumPy

But I did find one example based on a long-obsolete Python package called Numeric which, when rewritten to use NumPy, actually played a sound. You can take a NumPy array and play it using a pygame.sndarray object this way:

import pygame, pygame.sndarray

# The mixer must be initialized to match the samples generated below:
# 44.1 kHz sample rate, signed 16-bit samples, one channel (mono).
pygame.mixer.init(frequency=44100, size=-16, channels=1)

def play_for(sample_wave, ms):
    """Play the given NumPy array, as a sound, for ms milliseconds."""
    sound = pygame.sndarray.make_sound(sample_wave)
    sound.play(-1)
    pygame.time.delay(ms)
    sound.stop()

Then you just need to calculate the waveform you want to play. NumPy can generate sine waves on its own, while scipy.signal can generate square and sawtooth waves. Like this:

import numpy
import scipy.signal

sample_rate = 44100

def sine_wave(hz, peak, n_samples=sample_rate):
    """Compute N samples of a sine wave with given frequency and peak amplitude.
       Defaults to one second.
    """
    length = sample_rate / float(hz)
    omega = numpy.pi * 2 / length
    xvalues = numpy.arange(int(length)) * omega
    onecycle = peak * numpy.sin(xvalues)
    return numpy.resize(onecycle, (n_samples,)).astype(numpy.int16)

def square_wave(hz, peak, duty_cycle=.5, n_samples=sample_rate):
    """Compute N samples of a square wave with given frequency and peak amplitude.
       Defaults to one second.
    """
    # 500 * 440/hz points spanning five cycles gives a period of 44000/hz samples,
    # which comes out very close to hz cycles per second at a 44100 Hz sample rate.
    t = numpy.linspace(0, 1, int(500 * 440 / hz), endpoint=False)
    wave = scipy.signal.square(2 * numpy.pi * 5 * t, duty=duty_cycle)
    wave = numpy.resize(wave, (n_samples,))
    return (peak / 2 * wave).astype(numpy.int16)

# Play A (440Hz) for 1 second as a sine wave:
play_for(sine_wave(440, 4096), 1000)

# Play A-440 for 1 second as a square wave:
play_for(square_wave(440, 4096), 1000)
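
As an aside (not from the original recipe), if you want to inspect a generated waveform rather than just hear it, you can dump it to a WAV file with scipy:

import scipy.io.wavfile

# Write one second of A-440 as a 16-bit mono WAV file.
scipy.io.wavfile.write("a440.wav", sample_rate, sine_wave(440, 4096))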

Playing chords

That's all very well, but it's still a single tone, not a chord.

To generate a chord of two notes, you can add the waveforms for the two notes. For instance, 440Hz is concert A, and the A one octave above it is double the frequency, or 880 Hz. If you wanted to play a chord consisting of those two As, you could do it like this:

play_for(sum([sine_wave(440, 4096), sine_wave(880, 4096)]), 1000)

Simple octaves aren't very interesting to listen to. What you want is chords like major and minor triads and so forth. If you google for chord ratios Google helpfully gives you a few of them right off, then links to a page with a table of ratios for some common chords.

For instance, the major triad ratios are listed as 4:5:6. What does that mean? It means that for a C-E-G triad (the first C chord you learn in piano), the E's frequency is 5/4 of the C's frequency, and the G is 6/4 of the C.

You can pass that list, [4, 5, 6], to a function that will calculate the right ratios to produce the set of waveforms you need to add to get your chord:

def make_chord(hz, ratios):
    """Make a chord based on a list of frequency ratios."""
    sampling = 4096    # peak amplitude passed to sine_wave()
    chord = sine_wave(hz, sampling)
    for r in ratios[1:]:
        chord = sum([chord, sine_wave(hz * r / ratios[0], sampling)])
    return chord

def major_triad(hz):
    return make_chord(hz, [4, 5, 6])

play_for(major_triad(440), 1000)

Even better, you can pass in the waveform you want to use when you're adding instruments together:

def make_chord(hz, ratios, waveform=None):
    """Make a chord based on a list of frequency ratios
       using a given waveform (defaults to a sine wave).
    """
    sampling = 4096
    if not waveform:
        waveform = sine_wave
    chord = waveform(hz, sampling)
    for r in ratios[1:]:
        chord = sum([chord, waveform(hz * r / ratios[0], sampling)])
    return chord

def major_triad(hz, waveform=None):
    return make_chord(hz, [4, 5, 6], waveform)

play_for(major_triad(440, square_wave), 1000)
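
For instance, a minor triad follows a 10:12:15 ratio, so a hypothetical helper in the same style could look like this:

def minor_triad(hz, waveform=None):
    """Minor triad: the just-intonation ratio is 10:12:15."""
    return make_chord(hz, [10, 12, 15], waveform)

# Play an A-minor chord for one second, as sine waves and then as square waves.
play_for(minor_triad(440), 1000)
play_for(minor_triad(440, square_wave), 1000)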

There are still some problems. For instance, sawtooth_wave() works fine individually or for pairs of notes, but triads of sawtooths don't play correctly. I'm guessing something about the sampling rate is making their overtones cancel out part of the sawtooth wave. Triangle waves (in scipy.signal, that's a sawtooth wave with rising ramp width of 0.5) don't seem to work right even for single tones. I'm sure these are solvable, perhaps by fiddling with the sampling rate. I'll probably need to add graphics so I can look at the waveform for debugging purposes.

In any case, it was a fun morning hack. Most chords work pretty well, and it's nice to know how to play any waveform I can generate.

The full script is here: play_chord.py on GitHub.

security things in Linux v4.8

Previously: v4.7. Here are a bunch of security things I’m excited about in Linux v4.8:

SLUB freelist ASLR

Thomas Garnier continued his freelist randomization work by adding SLUB support.

x86_64 KASLR text base offset physical/virtual decoupling

On x86_64, to implement the KASLR text base offset, the physical memory location of the kernel was randomized, which resulted in the virtual address being offset as well. Due to how the kernel’s “-2GB” addressing works (gcc‘s “-mcmodel=kernel“), it wasn’t possible to randomize the physical location beyond the 2GB limit, leaving any additional physical memory unused as a randomization target. In order to decouple the physical and virtual location of the kernel (to make physical address exposures less valuable to attackers), the physical location of the kernel needed to be randomized separately from the virtual location. This required a lot of work for handling very large addresses spanning terabytes of address space. Yinghai Lu, Baoquan He, and I landed a series of patches that ultimately did this (and in the process fixed some other bugs too). This expands the physical offset entropy to roughly $physical_memory_size_of_system / 2MB bits.

x86_64 KASLR memory base offset

Thomas Garnier rolled out KASLR to the kernel’s various statically located memory ranges, randomizing their locations with CONFIG_RANDOMIZE_MEMORY. One of the more notable things randomized is the physical memory mapping, which is a known target for attacks. Also randomized is the vmalloc area, which means that targets vmalloced during boot (which tend to always end up in the same location on a given system) are now harder to locate. (The vmemmap region randomization accidentally missed the v4.8 window and will appear in v4.9.)

x86_64 KASLR with hibernation

Rafael Wysocki (with Thomas Garnier, Borislav Petkov, Yinghai Lu, Logan Gunthorpe, and myself) worked on a number of fixes to hibernation code that, even without KASLR, were coincidentally exposed by the earlier W^X fix. With that original problem fixed, then memory KASLR exposed more problems. I’m very grateful everyone was able to help out fixing these, especially Rafael and Thomas. It’s a hard place to debug. The bottom line, now, is that hibernation and KASLR are no longer mutually exclusive.

gcc plugin infrastructure

Emese Revfy ported the PaX/Grsecurity gcc plugin infrastructure to upstream. If you want to perform compiler-based magic on kernel builds, now it’s much easier with CONFIG_GCC_PLUGINS! The plugins live in scripts/gcc-plugins/. Current plugins are a short example called “Cyclic Complexity” which just emits the complexity of functions as they’re compiled, and “Sanitizer Coverage” which provides the same functionality as gcc’s recent “-fsanitize-coverage=trace-pc” but back through gcc 4.5. Another notable detail about this work is that it was the first Linux kernel security work funded by Linux Foundation’s Core Infrastructure Initiative. I’m looking forward to more plugins!

If you’re on Debian or Ubuntu, the required gcc plugin headers are available via the gcc-$N-plugin-dev package (and similarly for all cross-compiler packages).

hardened usercopy

Along with work from Rik van Riel, Laura Abbott, Casey Schaufler, and many other folks doing testing on the KSPP mailing list, I ported part of PAX_USERCOPY (the basic runtime bounds checking) to upstream as CONFIG_HARDENED_USERCOPY. One of the interface boundaries between the kernel and user-space are the copy_to_user()/copy_from_user() family of functions. Frequently, the size of a copy is known at compile-time (“built-in constant”), so there’s not much benefit in checking those sizes (hardened usercopy avoids these cases). In the case of dynamic sizes, hardened usercopy checks for 3 areas of memory: slab allocations, stack allocations, and kernel text. Direct kernel text copying is simply disallowed. Stack copying is allowed as long as it is entirely contained by the current stack memory range (and on x86, only if it does not include the saved stack frame and instruction pointers). For slab allocations (e.g. those allocated through kmem_cache_alloc() and the kmalloc()-family of functions), the copy size is compared against the size of the object being copied. For example, if copy_from_user() is writing to a structure that was allocated as size 64, but the copy gets tricked into trying to write 65 bytes, hardened usercopy will catch it and kill the process.

For testing hardened usercopy, lkdtm gained several new tests: USERCOPY_HEAP_SIZE_TO, USERCOPY_HEAP_SIZE_FROM, USERCOPY_STACK_FRAME_TO,
USERCOPY_STACK_FRAME_FROM, USERCOPY_STACK_BEYOND, and USERCOPY_KERNEL. Additionally, USERCOPY_HEAP_FLAG_TO and USERCOPY_HEAP_FLAG_FROM were added to test what will be coming next for hardened usercopy: flagging slab memory as “safe for copy to/from user-space”, effectively whitelisting certain slab caches, as done by PAX_USERCOPY. This further reduces the scope of what’s allowed to be copied to/from, since most kernel memory is not intended to ever be exposed to user-space. Adding this logic will require some reorganization of usercopy code to add some new APIs, as PAX_USERCOPY’s approach to handling special-cases is to add bounce-copies (copy from slab to stack, then copy to userspace) as needed, which is unlikely to be acceptable upstream.

seccomp reordered after ptrace

By its original design, seccomp filtering happened before ptrace so that seccomp-based ptracers (i.e. SECCOMP_RET_TRACE) could explicitly bypass seccomp filtering and force a desired syscall. Nothing actually used this feature, and as it turns out, it’s not compatible with process launchers that install seccomp filters (e.g. systemd, lxc) since as long as the ptrace and fork syscalls are allowed (and fork is needed for any sensible container environment), a process could spawn a tracer to help bypass a filter by injecting syscalls. After Andy Lutomirski convinced me that ordering ptrace first does not change the attack surface of a running process (unless all syscalls are blacklisted, the entire ptrace attack surface will always be exposed), I rearranged things. Now there is no (expected) way to bypass seccomp filters, and containers with seccomp filters can allow ptrace again.

That’s it for v4.8! The merge window is open for v4.9…

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

October 04, 2016

Working with GIS, terrains and #FreeCAD

Or, how to build a precise 3D terrain from any place of the world. Again not much visually significant FreeCAD development to show this week, so here is another interesting subject, that I started looking at in an earlier post. We architects should really begin to learn about GIS. GIS stands for Geographic information system and begins to...

October 03, 2016

security things in Linux v4.7

Previously: v4.6. Onward to security things I found interesting in Linux v4.7:

KASLR text base offset for MIPS

Matt Redfearn added text base address KASLR to MIPS, similar to what’s available on x86 and arm64. As done with x86, MIPS attempts to gather entropy from various build-time, run-time, and CPU locations in an effort to find reasonable sources during early-boot. MIPS doesn’t yet have anything as strong as x86′s RDRAND (though most have an instruction counter like x86′s RDTSC), but it does have the benefit of being able to use Device Tree (i.e. the “/chosen/kaslr-seed” property) like arm64 does. By my understanding, even without Device Tree, MIPS KASLR entropy should be as strong as pre-RDRAND x86 entropy, which is more than sufficient for what is, similar to x86, not a huge KASLR range anyway: default 8 bits (a span of 16MB with 64KB alignment), though CONFIG_RANDOMIZE_BASE_MAX_OFFSET can be tuned to the device’s memory, giving a maximum of 11 bits on 32-bit, and 15 bits on EVA or 64-bit.

SLAB freelist ASLR

Thomas Garnier added CONFIG_SLAB_FREELIST_RANDOM to make slab allocation layouts less deterministic with a per-boot randomized freelist order. This raises the bar for successful kernel slab attacks. Attackers will need to either find additional bugs to help leak slab layout information or will need to perform more complex grooming during an attack. Thomas wrote a post describing the feature in more detail here: Randomizing the Linux kernel heap freelists. (SLAB is done in v4.7, and SLUB in v4.8.)

eBPF JIT constant blinding

Daniel Borkmann implemented constant blinding in the eBPF JIT subsystem. With strong kernel memory protections (CONFIG_DEBUG_RODATA) in place, and with the segregation of user-space memory execution from the kernel (i.e. SMEP, PXN, CONFIG_CPU_SW_DOMAIN_PAN), having a place where user-space can inject content into an executable area of kernel memory becomes very high-value to an attacker. The eBPF JIT was exactly such a thing: the use of BPF constants could result in the JIT producing instruction flows that could include attacker-controlled instructions (e.g. by directing execution into the middle of an instruction with a constant that would be interpreted as a native instruction). The eBPF JIT already uses a number of other defensive tricks (e.g. random starting position), but this added randomized blinding to any BPF constants, which makes building a malicious execution path in the eBPF JIT memory much more difficult (and helps block attempts at JIT spraying to bypass other protections).

Elena Reshetova updated a 2012 proof-of-concept attack to succeed against modern kernels to help provide a working example of what needed fixing in the JIT. This serves as a thorough regression test for the protection.

The cBPF JITs that exist in ARM, MIPS, PowerPC, and Sparc still need to be updated to eBPF, but when they do, they’ll gain all these protections immediately.

Bottom line is that if you enable the (disabled-by-default) bpf_jit_enable sysctl, be sure to set the bpf_jit_harden sysctl to 2 (to perform blinding even for root).
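
In other words, something along these lines (both sysctls live under net.core):

# opt in to the eBPF JIT, and blind constants unconditionally (even for root)
sysctl -w net.core.bpf_jit_enable=1
sysctl -w net.core.bpf_jit_harden=2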

fix brk ASLR weakness on arm64 compat

There have been a few ASLR fixes recently (e.g. ET_DYN, x86 32-bit unlimited stack), and while reviewing some suggested fixes to arm64 brk ASLR code from Jon Medhurst, I noticed that arm64’s brk ASLR entropy was slightly too low (less than 1 bit) for 64-bit and noticeably lower (by 2 bits) for 32-bit compat processes when compared to native 32-bit arm. I simplified the code by using literals for the entropy. Maybe we can add a sysctl some day to control brk ASLR entropy like was done for mmap ASLR entropy.

LoadPin LSM

LSM stacking is well-defined since v4.2, so I finally upstreamed a “small” LSM that implements a protection I wrote for Chrome OS several years back. On systems with a static root of trust that extends to the filesystem level (e.g. Chrome OS’s coreboot+depthcharge boot firmware chaining to dm-verity, or a system booting from read-only media), it’s redundant to sign kernel modules (you’ve already got the modules on read-only media: they can’t change). The kernel just needs to know they’re all coming from the correct location. (And this solves loading known-good firmware too, since there is no convention for signed firmware in the kernel yet.) LoadPin requires that all modules, firmware, etc come from the same mount (and assumes that the first loaded file defines which mount is “correct”, hence load “pinning”).

That’s it for v4.7. Prepare yourself for v4.8 next!

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

October 01, 2016

Zsh magic: remove all raw photos that don't have a corresponding JPEG

Lately, when shooting photos with my DSLR, I've been shooting in raw mode, but saving a JPEG copy as well. When I triage and label my photos (with pho and metapho), I use only the JPEG files, since they load faster and there's no need to index both. But that means that sometimes I delete a .jpg file while the huge .cr2 raw file is still on my disk.

I wanted some way of removing these orphaned raw files: in other words, for every .cr2 file that doesn't have a corresponding .jpg file, delete the .cr2.

That's an easy enough shell function to write: loop over *.cr2, change the .cr2 extension to .jpg, check whether that file exists, and if it doesn't, delete the .cr2.
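
Something like this little sketch would do it (plain Bourne shell; the function name is made up, and rm -i is there for safety):

# delete any .cr2 file that no longer has a matching .jpg next to it
rmorphanraws() {
    for raw in *.cr2; do
        [ -e "$raw" ] || continue        # no .cr2 files at all
        jpg="${raw%.cr2}.jpg"
        [ -e "$jpg" ] || rm -i "$raw"    # confirm each orphan before removing
    done
}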

But as I started to write the shell function, it occurred to me: this is just the sort of magic trick zsh tends to have built in.

So I hopped on over to #zsh and asked, and in just a few minutes, I had an answer:

rm *.cr2(e:'[[ ! -e ${REPLY%.cr2}.jpg ]]':)

Yikes! And it works! But how does it work? It's cheating to rely on people in IRC channels without trying to understand the answer so I can solve the next similar problem on my own.

Most of the answer is in the zshexpn man page, but it still took some reading and jumping around to put the pieces together.

First, we take all files matching the initial wildcard, *.cr2. We're going to apply to them the filename generation code expression in parentheses after the wildcard. (I think you need EXTENDED_GLOB set to use that sort of parenthetical expression.)

The variable $REPLY is set to the filename the wildcard expression matched; so it will be set to each .cr2 filename, e.g. img001.cr2.

The expression ${REPLY%.cr2} removes the .cr2 extension. Then we tack on a .jpg: ${REPLY%.cr2}.jpg. So now we have img001.jpg.

[[ ! -e ${REPLY%.cr2}.jpg ]] checks for the existence of that jpg filename, just like in a shell script.

So that explains the quoted shell expression. The final, and hardest part, is how to use that quoted expression. That's in section 14.8.7 Glob Qualifiers. (estring) executes string as shell code, and the filename will be included in the list if and only if the code returns a zero status.

The colons -- after the e and before the closing parenthesis -- are just separator characters. Whatever character immediately follows the e will be taken as the separator, and anything from there to the next instance of that separator (the second colon, in this case) is taken as the string to execute. Colons seem to be the character to use by convention, but you could use anything. This is also the part of the expression responsible for setting $REPLY to the filename being tested.

So why the quotes inside the colons? They're because some of the substitutions being done would be evaluated too early without them: "Note that expansions must be quoted in the string to prevent them from being expanded before globbing is done. string is then executed as shell code."

Whew! Complicated, but awfully handy. I know I'll have lots of other uses for that.
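
For example, the same trick should work for any other raw extension (say, .nef files from a Nikon). Listing the matches first with print makes a handy dry run, and the N qualifier keeps the glob from complaining when nothing matches:

print -rl -- *.nef(Ne:'[[ ! -e ${REPLY%.nef}.jpg ]]':)
rm *.nef(e:'[[ ! -e ${REPLY%.nef}.jpg ]]':)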

One additional note: section 14.8.5, Approximate Matching, in that manual page caught my eye. zsh can do fuzzy matches! I can't think offhand what I need that for ... but I'm sure an idea will come to me.

security things in Linux v4.6

Previously: v4.5. The v4.6 Linux kernel release included a bunch of stuff, with much more of it under the KSPP umbrella.

seccomp support for parisc

Helge Deller added seccomp support for parisc, which included plumbing support for PTRACE_GETREGSET to get the self-tests working.

x86 32-bit mmap ASLR vs unlimited stack fixed

Hector Marco-Gisbert removed a long-standing limitation to mmap ASLR on 32-bit x86, where setting an unlimited stack (e.g. “ulimit -s unlimited”) would turn off mmap ASLR (which provided a way to bypass ASLR when executing setuid processes). Given that ASLR entropy can now be controlled directly (see the v4.5 post), and that the cases where this created an actual problem are very rare, a system that sees collisions between unlimited stack and mmap ASLR can just adjust the 32-bit ASLR entropy instead.

x86 execute-only memory

Dave Hansen added Protection Key support for future x86 CPUs and, as part of this, implemented support for “execute only” memory in user-space. On pkeys-supporting CPUs, using mmap(..., PROT_EXEC) (i.e. without PROT_READ) will mean that the memory can be executed but cannot be read (or written). This provides some mitigation against automated ROP gadget finding where an executable is read out of memory to find places that can be used to build a malicious execution path. Using this will require changing some linker behavior (to avoid putting data in executable areas), but seems to otherwise Just Work. I’m looking forward to either QEMU emulation support or access to one of these fancy CPUs.

CONFIG_DEBUG_RODATA enabled by default on arm and arm64, and mandatory on x86

Ard Biesheuvel (arm64) and I (arm) made the poorly-named CONFIG_DEBUG_RODATA enabled by default. This feature controls whether the kernel enforces proper memory protections on its own memory regions (code memory is executable and read-only, read-only data is actually read-only and non-executable, and writable data is non-executable). This protection is a fundamental security primitive for kernel self-protection, so making it on-by-default is required to start any kind of attack surface reduction within the kernel.

On x86 CONFIG_DEBUG_RODATA was already enabled by default, but, at Ingo Molnar’s suggestion, I made it mandatory: CONFIG_DEBUG_RODATA cannot be turned off on x86. I expect we’ll get there with arm and arm64 too, but the protection is still somewhat new on these architectures, so it’s reasonable to continue to leave an “out” for developers that find themselves tripping over it.

arm64 KASLR text base offset

Ard Biesheuvel reworked a ton of arm64 infrastructure to support kernel relocation and, building on that, Kernel Address Space Layout Randomization of the kernel text base offset (and module base offset). As with x86 text base KASLR, this is a probabilistic defense that raises the bar for kernel attacks where finding the KASLR offset must be added to the chain of exploits used for a successful attack. One big difference from x86 is that the entropy for the KASLR must come either from Device Tree (in the “/chosen/kaslr-seed” property) or from UEFI (via EFI_RNG_PROTOCOL), so if you’re building arm64 devices, make sure you have a strong source of early-boot entropy that you can expose through your boot-firmware or boot-loader.

zero-poison after free

Laura Abbott reworked a bunch of the kernel memory management debugging code to add zeroing of freed memory, similar to PaX/Grsecurity’s PAX_MEMORY_SANITIZE feature. This feature means that memory is cleared at free, wiping any sensitive data so it doesn’t have an opportunity to leak in various ways (e.g. accidentally uninitialized structures or padding), and that certain types of use-after-free flaws cannot be exploited since the memory has been wiped. To take things even a step further, the poisoning can be verified at allocation time to make sure that nothing wrote to it between free and allocation (called “sanity checking”), which can catch another small subset of flaws.

To understand the pieces of this, it’s worth describing that the kernel’s higher level allocator, the “page allocator” (e.g. __get_free_pages()) is used by the finer-grained “slab allocator” (e.g. kmem_cache_alloc(), kmalloc()). Poisoning is handled separately in both allocators. The zero-poisoning happens at the page allocator level. Since the slab allocators tend to do their own allocation/freeing, their poisoning happens separately (since on slab free nothing has been freed up to the page allocator).

Only limited performance tuning has been done, so the penalty is rather high at the moment, at about 9% when doing a kernel build workload. Future work will include some exclusion of frequently-freed caches (similar to PAX_MEMORY_SANITIZE), and making the options entirely CONFIG controlled (right now both CONFIGs are needed to build in the code, and a kernel command line is needed to activate it). Performing the sanity checking (mentioned above) adds another roughly 3% penalty. In the general case (and once the performance of the poisoning is improved), the security value of the sanity checking isn’t worth the performance trade-off.

Tests for the features can be found in lkdtm as READ_AFTER_FREE and READ_BUDDY_AFTER_FREE. If you’re feeling especially paranoid and have enabled sanity-checking, WRITE_AFTER_FREE and WRITE_BUDDY_AFTER_FREE can test these as well.

To perform zero-poisoning of page allocations and (currently non-zero) poisoning of slab allocations, build with:

CONFIG_DEBUG_PAGEALLOC=n
CONFIG_PAGE_POISONING=y
CONFIG_PAGE_POISONING_NO_SANITY=y
CONFIG_PAGE_POISONING_ZERO=y
CONFIG_SLUB_DEBUG=y

and enable the page allocator poisoning and slab allocator poisoning at boot with this on the kernel command line:

page_poison=on slub_debug=P

To add sanity-checking, change PAGE_POISONING_NO_SANITY=n, and add “F” to slub_debug as “slub_debug=PF”.
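
Putting that together, the sanity-checking variant builds with:

CONFIG_DEBUG_PAGEALLOC=n
CONFIG_PAGE_POISONING=y
CONFIG_PAGE_POISONING_NO_SANITY=n
CONFIG_PAGE_POISONING_ZERO=y
CONFIG_SLUB_DEBUG=y

and boots with this on the kernel command line:

page_poison=on slub_debug=PF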

read-only after init

I added the infrastructure to support making certain kernel memory read-only after kernel initialization (inspired by a small part of PaX/Grsecurity’s KERNEXEC functionality). The goal is to continue to reduce the attack surface within the kernel by making even more of the memory, especially function pointer tables, read-only (which depends on CONFIG_DEBUG_RODATA above).

Function pointer tables (and similar structures) are frequently targeted by attackers when redirecting execution. While many are already declared “const” in the kernel source code, making them read-only (and therefore unavailable to attackers) for their entire lifetime, there is a class of variables that get initialized during kernel (and module) start-up (i.e. written to during functions that are marked “__init“) and then never (intentionally) written to again. Some examples are things like the VDSO, vector tables, arch-specific callbacks, etc.

As it turns out, most architectures with kernel memory protection already delay making their data read-only until after __init (see mark_rodata_ro()), so it’s trivial to declare a new data section (“.data..ro_after_init“) and add it to the existing read-only data section (“.rodata“). Kernel structures can be annotated with the new section (via the “__ro_after_init” macro), and they’ll become read-only once boot has finished.

The next step for attack surface reduction infrastructure will be to create a kernel memory region that is passively read-only, but can be made temporarily writable (by a single un-preemptable CPU), for storing sensitive structures that are written to only very rarely. Once this is done, much more of the kernel’s attack surface can be made read-only for the majority of its lifetime.

As people identify places where __ro_after_init can be used, we can grow the protection. A good place to start is to look through the PaX/Grsecurity patch to find uses of __read_only on variables that are only written to during __init functions. The rest are places that will need the temporarily-writable infrastructure (PaX/Grsecurity uses pax_open_kernel()/pax_close_kernel() for these).

That’s it for v4.6, next up will be v4.7!

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

September 28, 2016

security things in Linux v4.5

Previously: v4.4. Some things I found interesting in the Linux kernel v4.5:

CONFIG_IO_STRICT_DEVMEM

The CONFIG_STRICT_DEVMEM setting that has existed for a long time already protects system RAM from being accessible through the /dev/mem device node to root in user-space. Dan Williams added CONFIG_IO_STRICT_DEVMEM to extend this so that if a kernel driver has reserved a device memory region for use, it will become unavailable to /dev/mem also. The reservation in the kernel was to keep other kernel things from using the memory, so this is just common sense to make sure user-space can’t stomp on it either. Everyone should have this enabled. (And if you have a system where you discover you need IO memory access from userspace, you can boot with “iomem=relaxed” to disable this at runtime.)

If you’re looking to create a very bright line between user-space having access to device memory, it’s worth noting that if a device driver is a module, a malicious root user can just unload the module (freeing the kernel memory reservation), fiddle with the device memory, and then reload the driver module. So either just leave out /dev/mem entirely (not currently possible with upstream), build a monolithic kernel (no modules), or otherwise block (un)loading of modules (/proc/sys/kernel/modules_disabled).
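
That last knob is a one-way switch; a minimal sketch of flipping it at the end of boot (e.g. from a late init script) is just:

# after this, modules can be neither loaded nor unloaded until reboot
echo 1 > /proc/sys/kernel/modules_disabled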

ptrace fsuid checking

Jann Horn fixed some corner-cases in how ptrace access checks were handled on special files in /proc. For example, prior to this fix, if a setuid process temporarily dropped privileges to perform actions as a regular user, the ptrace checks would not notice the reduced privilege, possibly allowing a regular user to trick a privileged process into disclosing things out of /proc (ASLR offsets, restricted directories, etc) that they normally would be restricted from seeing.

ASLR entropy sysctl

Daniel Cashman standardized the way architectures declare their maximum user-space ASLR entropy (CONFIG_ARCH_MMAP_RND_BITS_MAX) and then created a sysctl (/proc/sys/vm/mmap_rnd_bits) so that system owners could crank up entropy. For example, the default entropy on 32-bit ARM was 8 bits, but the maximum could be as much as 16. If your 64-bit kernel is built with CONFIG_COMPAT, there’s a compat version of the sysctl as well, for controlling the ASLR entropy of 32-bit processes: /proc/sys/vm/mmap_rnd_compat_bits.

Here’s how to crank your entropy to the max, without regard to what architecture you’re on:

for i in "" "compat_"; do f=/proc/sys/vm/mmap_rnd_${i}bits; n=$(cat $f); while echo $n > $f ; do n=$(( n + 1 )); done; done

strict sysctl writes

Two years ago I added a sysctl for treating sysctl writes more like regular files (i.e. what’s written first is what appears at the start), rather than like a ring-buffer (what’s written last is what appears first). At the time it wasn’t clear what might break if this was enabled, so a WARN was added to the kernel. Since only one such string showed up in searches over the last two years, the strict writing mode was made the default. The setting remains available as /proc/sys/kernel/sysctl_writes_strict.

seccomp UM support

Mickaël Salaün added seccomp support (and selftests) for user-mode Linux. Moar architectures!

seccomp NNP vs TSYNC fix

Jann Horn noticed and fixed a problem where if a seccomp filter was already in place on a process (after being installed by a privileged process like systemd, a container launcher, etc.) then the setting of the “no new privs” flag could be bypassed when adding filters with the SECCOMP_FILTER_FLAG_TSYNC flag set. Bypassing NNP meant it might be possible to trick a buggy setuid program into doing things as root after a seccomp filter forced a privilege drop to fail (generally referred to as the “sendmail setuid flaw”). With NNP set, a setuid program can’t gain its elevated privileges in the first place.

That’s it! Next I’ll cover v4.6.

Edit: Added notes about “iomem=…”

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

September 27, 2016

security things in Linux v4.4

Previously: v4.3. Continuing with interesting security things in the Linux kernel, here’s v4.4. As before, if you think there’s stuff I missed that should get some attention, please let me know.

seccomp Checkpoint/Restore-In-Userspace

Tycho Andersen added a way to extract and restore seccomp filters from running processes via PTRACE_SECCOMP_GET_FILTER under CONFIG_CHECKPOINT_RESTORE. This is a continuation of his work (that I failed to mention in my prior post) from v4.3, which introduced a way to suspend and resume seccomp filters. As I mentioned at the time (and for which he continues to quote me) “this feature gives me the creeps.” :)

x86 W^X detection

Stephen Smalley noticed that there was still a range of kernel memory (just past the end of the kernel code itself) that was incorrectly marked writable and executable, defeating the point of CONFIG_DEBUG_RODATA, which seeks to eliminate these kinds of memory ranges. He corrected this in v4.3 and added CONFIG_DEBUG_WX in v4.4, which performs a scan of memory at boot time and yells loudly if unexpected memory protections are found. To nobody’s delight, it was shortly discovered that UEFI leaves chunks of memory in this state too, which posed an ugly-to-solve problem (which Matt Fleming addressed in v4.6).

x86_64 vsyscall CONFIG

I introduced a way to control the mode of the x86_64 vsyscall with a build-time CONFIG selection, though the choice I really care about is CONFIG_LEGACY_VSYSCALL_NONE, to force the vsyscall memory region off by default. The vsyscall memory region was always mapped into process memory at a fixed location, and it originally posed a security risk as a ROP gadget execution target. The vsyscall emulation mode was added to mitigate the problem, but it still left fixed-position static memory content in all processes, which could still pose a security risk. The good news is that glibc since version 2.15 doesn’t need vsyscall at all, so it can just be removed entirely. Anyone running a kernel built this way who discovers they need to support a pre-2.15 glibc can still re-enable it at the kernel command line with “vsyscall=emulate”.
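
A quick way to see whether a given kernel still exposes the region is to look for it in a process's memory map; with the region disabled there should be no matching line:

grep '\[vsyscall\]' /proc/self/maps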

That’s it for v4.4. Tune in tomorrow for v4.5!

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

September 26, 2016

Getting maps of São Paulo

At last year's FISL, we heard a very interesting talk about geosampa. A little while later the site was already up and running, and having just taken another look now, it is getting impressive. Basically, it is a site maintained by the São Paulo city government that makes all kinds of maps of the city available openly and free of charge. See...

security things in Linux v4.3

When I gave my State of the Kernel Self-Protection Project presentation at the 2016 Linux Security Summit, I included some slides covering some quick bullet points on things I found of interest in recent Linux kernel releases. Since there wasn’t a lot of time to talk about them all, I figured I’d make some short blog posts here about the stuff I was paying attention to, along with links to more information. This certainly isn’t everything security-related or generally of interest, but they’re the things I thought needed to be pointed out. If there’s something security-related you think I should cover from v4.3, please mention it in the comments. I’m sure I haven’t caught everything. :)

A note on timing and context: the momentum for starting the Kernel Self Protection Project got rolling well before it was officially announced on November 5th last year. To that end, I included stuff from v4.3 (which was developed in the months leading up to November) under the umbrella of the project, since the goals of KSPP aren’t unique to the project nor must the goals be met by people that are explicitly participating in it. Additionally, not everything I think worth mentioning here technically falls under the “kernel self-protection” ideal anyway — some things are just really interesting userspace-facing features.

So, to that end, here are things I found interesting in v4.3:

CONFIG_CPU_SW_DOMAIN_PAN

Russell King implemented this feature for ARM which provides emulated segregation of user-space memory when running in kernel mode, by using the ARM Domain access control feature. This is similar to a combination of Privileged eXecute Never (PXN, in later ARMv7 CPUs) and Privileged Access Never (PAN, coming in future ARMv8.1 CPUs): the kernel cannot execute user-space memory, and cannot read/write user-space memory unless it was explicitly prepared to do so. This stops a huge set of common kernel exploitation methods, where either a malicious executable payload has been built in user-space memory and the kernel was redirected to run it, or where malicious data structures have been built in user-space memory and the kernel was tricked into dereferencing the memory, ultimately leading to a redirection of execution flow.

This raises the bar for attackers since they can no longer trivially build code or structures in user-space where they control the memory layout, locations, etc. Instead, an attacker must find areas in kernel memory that are writable (and in the case of code, executable), where they can discover the location as well. For an attacker, there are vastly fewer places where this is possible in kernel memory as opposed to user-space memory. And as we continue to reduce the attack surface of the kernel, these opportunities will continue to shrink.

While hardware support for this kind of segregation exists in s390 (natively separate memory spaces), ARM (PXN and PAN as mentioned above), and very recent x86 (SMEP since Ivy-Bridge, SMAP since Skylake), ARM is the first upstream architecture to provide this emulation for existing hardware. Everyone running ARMv7 CPUs with this kernel feature enabled suddenly gains the protection. Similar emulation protections (PAX_MEMORY_UDEREF) have been available in PaX/Grsecurity for a while, and I’m delighted to see a form of this land in upstream finally.

To test this kernel protection, the ACCESS_USERSPACE and EXEC_USERSPACE triggers for lkdtm have existed since Linux v3.13, when they were introduced in anticipation of the x86 SMEP and SMAP features.

Ambient Capabilities

Andy Lutomirski (with Christoph Lameter and Serge Hallyn) implemented a way for processes to pass capabilities across exec() in a sensible manner. Until Ambient Capabilities, any capabilities available to a process would only be passed to a child process if the new executable was correctly marked with filesystem capability bits. This turns out to be a real headache for anyone trying to build an even marginally complex “least privilege” execution environment. The case that Chrome OS ran into was having a network service daemon responsible for calling out to helper tools that would perform various networking operations. Keeping the daemon from running as root while retaining the needed capabilities in its children required conflicting or crazy filesystem capabilities organized across all the binaries in the expected tree of privileged processes. (For example, you may need to set filesystem capabilities on bash!) By being able to explicitly pass capabilities at runtime (instead of based on filesystem markings), this becomes much easier.

For more details, the commit message is well-written, almost twice as long as the code changes, and contains a test case. If that isn’t enough, there is a self-test available in tools/testing/selftests/capabilities/ too.
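
Running it follows the usual kselftest pattern, roughly like this (from a kernel source tree, as root, since the tests exercise privilege transitions):

make -C tools/testing/selftests TARGETS=capabilities run_tests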

PowerPC and Tile support for seccomp filter

Michael Ellerman added support for seccomp to PowerPC, and Chris Metcalf added support to Tile. As the seccomp maintainer, I get excited when an architecture adds support, so here we are with two. Also included were updates to the seccomp self-tests (in tools/testing/selftests/seccomp), to help make sure everything continues working correctly.

That’s it for v4.3. If I missed stuff you found interesting, please let me know! I’m going to try to get more per-version posts out in time to catch up to v4.8, which appears to be tentatively scheduled for release this coming weekend. Next: v4.4.

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Unclaimed Alcoholic Beverages

Dave was reading New Mexico laws regarding a voter guide issue we're researching, and he came across this gem in Section 29-1-14 G of the "Law Enforcement: Peace Officers in General: Unclaimed Property" laws:

Any alcoholic beverage that has been unclaimed by the true owner, is no longer necessary for use in obtaining a conviction, is not needed for any other public purpose and has been in the possession of a state, county or municipal law enforcement agency for more than ninety days may be destroyed or may be utilized by the scientific laboratory division of the department of health for educational or scientific purposes.

We can't decide which part is more fun: contemplating what the "other public purposes" might be, or musing on the various "educational or scientific purposes" one might come up with for a month-old beverage that's been sitting in the storage locker ... I'm envisioning a room surrounded by locked chain-link, with dusty shelves holding rows of half-full martini and highball glasses.

Working with terrain in #FreeCAD

Since I have not much new FreeCAD-related development to show this week, I'll showcase an existing feature that has been around for some time, which is an external workbench named geodata, programmed by the long-time FreeCAD community member and guru Microelly2. That workbench is part of the FreeCAD addons collection, which is a collection of additional...

September 25, 2016

Why an open Web is important when sea levels are rising

Cory Doctorow speaking on episode 221 of the excellent Changelog podcast:

“[t]here are things that are way more important than [whether the internet should or shouldn’t be free]. There’s fundamental issues of economic justice, there’s climate change, there’s questions of race and gender and gender orientation, that are a lot more urgent than the future of the internet, but [...] every one of those fights is going to be won or lost on the internet.”

September 22, 2016

Comments about OARS and CSM age ratings

I’ve had quite a few comments from people stating that using age rating classification values based on American culture is wrong. So far I’ve been using the Common Sense Media research (and various other psychology textbooks) to essentially clean-room implement an algorithm that maps content ratings to appropriate ages.

Whilst I do agree that other cultures have different sensitivities (e.g. smoking in Uganda, references to Nazis in Germany), there doesn’t appear to be much research on the suggested age ratings for different categories for those specific countries. Lots of things are outright banned for sale for various reasons (which the populace may completely ignore), but there don’t seem to be many statistics that back up the various anecdotal statements. For instance, are there any US-specific guidelines that say that the age rating for playing a game that involves taking illegal drugs should be 18, rather than the 14 which is inferred from CSM? Or that the age rating should be 25+ for any game that features drinking alcohol in Saudi Arabia?

Suggestions (especially references) welcome. Thanks!

September 21, 2016

GNOME Software and Age Ratings

After all the tarballs for GNOME 3.22, the master branch of gnome-software is now open to new features. Along with the usual cleanups and speedups, one new feature I’ve been working on is finally merging the age ratings work.


The age ratings are provided by the upstream-supplied OARS metadata in the AppData file (which can be generated easily online) and then an age classification is generated automatically using the advice from the appropriately-named Common Sense Media group. At the moment I’m not doing any country-specific mapping, although something like this will be required to show appropriate ratings when handling topics like alcohol and drugs.

At the moment the only applications with ratings in Fedora 26 will be Steam games, but I’ve also emailed any maintainer whose appdata file includes an <update_contact> email address and also identifies the application as a game in the desktop categories. If you ship an application with an AppData file and you think you should have an age rating, please use the generator and add the extra few lines to your AppData file. At the moment there’s no requirement for the extra data, although that might be something we introduce just for games in the future.

I don’t think many other applications will need the extra application metadata, but if you know of any adult only applications (e.g. in Fedora there’s an application for the sole purpose of downloading p0rn) please let me know and I’ll contact the maintainer and ask what they think about the idea. Comments, as always, welcome. Thanks!

September 20, 2016

WebKitGTK+ 2.14

These six months have gone by so fast, and here we are again, excited about the new WebKitGTK+ stable release. This is a release with almost no new API, but with major internal changes that we hope will improve all the applications using WebKitGTK+.

The threaded compositor

This is the most important change introduced in WebKitGTK+ 2.14 and what kept us busy for most of this release cycle. The idea is simple: we still render everything in the web process, but the accelerated compositing (all the OpenGL calls) has been moved to a secondary thread, leaving the main thread free to run all the other heavy tasks like layout, JavaScript, etc. The result is a smoother experience in general: since the main thread is no longer busy rendering frames, it can process JavaScript faster, improving responsiveness significantly. For all the details about the threaded compositor, read Yoon’s post here.

So, the idea is indeed simple, but the implementation required a lot of important changes in the whole graphics stack of WebKitGTK+.

  • Accelerated compositing always enabled: first of all, with the threaded compositor the accelerated mode is always enabled, so we no longer enter/exit the accelerated compositing mode when visiting pages depending on whether the contents require acceleration or not. This was the first challenge because there were several bugs related to accelerated compositing being always enabled, and even missing features like the web view background colors that didn’t work in accelerated mode.
  • Coordinated Graphics: it was introduced in WebKit when other ports switched to do the compositing in the UI process. We are still doing the compositing in the web process, but being in a different thread also needs coordination between the main thread and the compositing thread. We switched to use coordinated graphics too, but with some modifications for the threaded compositor case. This is the major change in the graphics stack compared to the previous one.
  • Adaptation to the new model: finally we had to adapt to the threaded model, mainly due to the fact that some tasks that were expected to be synchronous before became asynchronous, like resizing the web view.

This is a big change that we expect will drastically improve the performance of WebKitGTK+, especially in embedded systems with limited resources, but like all big changes it can also introduce new bugs or issues. Please, file a bug report if you notice any regression in your application. If you have any problem running WebKitGTK+ in your system or with your GPU drivers, please let us know. It’s still possible to disable the threaded compositor in two different ways. You can use the environment variable WEBKIT_DISABLE_COMPOSITING_MODE at runtime, but this will disable accelerated compositing support, so websites requiring acceleration might not work. To disable the threaded compositor and bring back the previous model you have to recompile WebKitGTK+ with the option ENABLE_THREADED_COMPOSITOR=OFF.
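
For example, to rule the compositor out when debugging a problem, a run like this (the application name is just a stand-in) falls back to non-accelerated rendering:

WEBKIT_DISABLE_COMPOSITING_MODE=1 ./my-webkitgtk-app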

Wayland

WebKitGTK+ 2.14 is the first release that we can consider feature complete in Wayland. While previous versions worked in Wayland there were two important features missing that made it quite annoying to use: accelerated compositing and clipboard support.

Accelerated compositing

More and more websites require acceleration to work properly and it’s now a requirement of the threaded compositor too. WebKitGTK+ has supported accelerated compositing for a long time, but the implementation was specific to X11. The main challenge is compositing in the web process and sending the results to the UI process to be rendered on the actual screen. In X11 we use an offscreen redirected XComposite window to render in the web process, sending the XPixmap ID to the UI process, which renders the window’s offscreen contents in the web view and uses the XDamage extension to track the repaints happening in the XWindow. In Wayland we use a nested compositor in the UI process that implements the Wayland surface interface and a private WebKitGTK+ protocol interface to associate surfaces in the UI process to the web pages in the web process. The web process connects to the nested Wayland compositor and creates a new surface for the web page that is used to render accelerated contents. On every swap buffers operation in the web process, the nested compositor in the UI process is automatically notified through the Wayland surface protocol, and new contents are rendered in the web view. The main difference compared to the X11 model is that Wayland uses EGL in both the web and UI processes, so what we have in the UI process in the end is not a bitmap but a GL texture that can be used to render the contents to the screen using the GPU directly. We use gdk_cairo_draw_from_gl() when available to do that, falling back to using glReadPixels() and a cairo image surface for older versions of GTK+. This can make a huge difference, especially on embedded devices, so we are considering using the nested Wayland compositor even on X11 in the future if possible.

Clipboard

The WebKitGTK+ clipboard implementation relies on GTK+, and there’s nothing X11-specific in there; however, the clipboard was read/written directly by the web processes. That doesn’t work in Wayland, even though we use GtkClipboard, because Wayland only allows clipboard operations between compositor clients, and web processes are not Wayland clients. This required moving the clipboard handling from the web process to the UI process. Clipboard handling is now centralized in the UI process, and clipboard contents to be read/written are sent to the different WebKit processes using the internal IPC.

Memory pressure handler

The WebKit memory pressure handler is a monitor that watches the system memory (not only the memory used by the web engine processes) and tries to release memory under low memory conditions. This is quite an important feature in embedded devices with memory limitations. It has been supported in WebKitGTK+ for some time, but the implementation is based on cgroups and systemd, which are not available in all systems and require user configuration. So, in practice nobody was actually using the memory pressure handler. Watching system memory in Linux is a challenge, mainly because /proc/meminfo is not pollable, so you need manual polling. In WebKit, there’s a memory pressure handler on every secondary process (Web, Plugin and Network), so waking up every second to read /proc/meminfo from every web process would not be acceptable. This is not a problem when using cgroups, because the kernel interface provides a way to poll an EventFD to be notified when memory usage is critical.

WebKitGTK+ 2.14 has a new memory monitor, used only when cgroups/systemd is not available or configured, based on polling /proc/meminfo to ensure the memory pressure handler is always available. The monitor lives in the UI process, to ensure there’s only one process doing the polling, and uses a dynamic poll interval based on the last system memory usage to read and parse /proc/meminfo in a secondary thread. Once memory usage is critical, all the secondary processes are notified using an EventFD. Using an EventFD for this monitor too is not only more efficient than using a pipe or sending an IPC message, but it also allows us to keep almost the same implementation in the secondary processes that either monitor the cgroups EventFD or the UI process one.

Other improvements and bug fixes

Like in all other major releases there are a lot of other improvements, features and bug fixes. The most relevant ones in WebKitGTK+ 2.14 are:

  • The HTTP disk cache implements speculative revalidation of resources.
  • The media backend now supports video orientation.
  • Several bugs have been fixed in the media backend to prevent deadlocks when playing HLS videos.
  • The amount of file descriptors that are kept open has been drastically reduced.
  • Fix the poor performance with the modesetting intel driver and DRI3 enabled.

Frogs on the Rio, and Other Amusements

Saturday, a friend led a group hike for the nature center from the Caja del Rio down to the Rio Grande.

The Caja (literally "box", referring to the depth of White Rock Canyon) is an area of national forest land west of Santa Fe, just across the river from Bandelier and White Rock. Getting there involves a lot of driving: first to Santa Fe, then out along increasingly dicey dirt roads until the road looks too daunting and it's time to get out and walk.

[Dave climbs the Frijoles Overlook trail] From where we stopped, it was only about a six mile hike, but the climb out is about 1100 feet and the day was unexpectedly hot and sunny (a mixed blessing: if it had been rainy, our Rav4 might have gotten stuck in mud on the way out). So it was a notable hike. But well worth it: the views of Frijoles Canyon (in Bandelier) were spectacular. We could see the lower Bandelier Falls, which I've never seen before, since Bandelier's Falls Trail washed out below the upper falls the summer before we moved here. Dave was convinced he could see the upper falls too, but no one else was convinced, though we could definitely see the red wall of the maar volcano in the canyon just below the upper falls.

[Canyon Tree Frog on the Rio Grande] We had lunch in a little grassy thicket by the Rio Grande, and we even saw a few little frogs, well camouflaged against the dirt: you could even see how their darker brown spots imitated the pebbles in the sand, and we wouldn't have had a chance of spotting them if they hadn't hopped. I believe these were canyon treefrogs (Hyla arenicolor). It's always nice to see frogs -- they're not as common as they used to be. We've heard canyon treefrogs at home a few times on rainy evenings: they make a loud, strange ratcheting noise which I managed to record on my digital camera. Of course, at noon on the Rio the frogs weren't making any noise: just hanging around looking cute.

[Chick Keller shows a burdock leaf] Sunday we drove around the Pojoaque Valley following their art tour, then after coming home I worked on setting up a new sandblaster to help with making my own art. The hardest and least fun part of welded art is cleaning the metal of rust and paint, so it's exciting to finally have a sandblaster to help with odd-shaped pieces like chains.

Then tonight was a flower walk in Pajarito Canyon, which is bursting at the seams with flowers, especially purple aster, goldeneye, Hooker's evening primrose and bahia. Now I'll sign off so I can catalog my flower photos before I forget what's what.

September 18, 2016

#FreeCAD news and Arch workflow

So, let's continue to post more often about FreeCAD. I'm beginning to organize a bit better, gathering screenshots and ideas during the week, so I'll try to keep this going. This week has seen many improvements, especially because we've been doing intense FreeCAD work with OpeningDesign. Like every time you make intense use of FreeCAD or...

September 14, 2016

LVFS and ODRS are down

The LVFS firmware server and ODRS reviews server are down because my credit card registered with OpenShift expired. I’ve updated my credit card details, paid the pending invoice and still can’t start any server. I rang customer service who asked me to send an email and have heard nothing back.


I have backups a few days old, but this whole situation is terrible on so many levels.

EDIT: cdaley has got everything back working again, it appears I found a corner case in the code that deals with payments.

bicycle, node, network, design

This Monday ignite berlin took place and I did a fun, five minute, pecha kucha talk that also contained some systems analysis and a design insight. For a full transcript, read on.

ring ring

There are two things that you need to know about me. The first is that I am dutch and the second is that I am becoming a sentimental old fool. I combine the two when I do cycling holidays in Holland:

cycling in the dutch fields my partner Carmen leads the way

For this we use the fietsroutenetwerk, the bicycle route network of the Netherlands. This was designed for recreational cycling in the countryside. It was rolled out between 2003 and 2012. The network is point‐to‑point:

two points connected by a line, arrows pointing both ways

Between two neighbouring nodes there is complete signage—with no gaps—to get you from one to the other. And this in both directions. Here are some of these signs:

several roadside routing signposts sources: fietsen op de fiets, het groene woud, gps.nl

The implicit promise is that these are nice routes. That means: away from cars as much as possible. And scenic—through fields, heath and forest.

Using the nodes, local networks have been designed and built:

a network of nodes on a local map

These networks are purely infrastructural; there is no preconception of what is ‘proper’ or ‘typical’ usage. They accommodate routes of any shape and any length.

At every node, one finds a local map, with the network:

on-location display of the local map source: wikimedia commons

It can be used for planning, reference and simply reassurance. Besides that, there are old‑fashioned maps and plenty of apps and websites for planning and sharing of routes.

The local networks were knitted together to form a national network:

a dense network covers the whole country

Looking at this map I see interesting differences in patterns and densities. I don’t think this only reflects the geography, but also the character of the locals; what they consider proper cycling infrastructure and scenic routes.

The network was not always nation-wide. It was rolled out over a period of nine years, one local network at a time. I still remember crossing a province border and (screech!) there was no more network. It was back to old‑fashioned map reading and finding the third street on the left.

not invented here

I was shocked to find out that the Dutch did not invent this network system. We have to go back to the 1980s, north‐east Belgium: all the coal mines are closing. Mining engineer Hugo Bollen proposes to create a recreational cycling network, in order to initiate economic regeneration of the region. Here’s Hugo:

Hugo Bollen rides a bike in nature source: toerisme limburg

He designed the network rules explained in this blog post. The Belgians actually had to build(!) all of the cycling infrastructure, so it took them until 1995 to open the first local network. It now brings in 16.5 million Euro a year to the region.

how many?

I got curious about the total number of network nodes in Holland. I could not find this number on the internet. The net is really quite short on stats and data of the cycling network. So I needed to find out by myself. What I did was take one of my maps—

a traditional cycling map that covers a part of holland

And I counted all the nodes—there were 309. I multiplied this by the number of maps that cover all of Holland. Then I took 75% of that number to deal with map overlaps and my own over‐enthusiasm. The result: I estimate that the dutch network consists of 9270 nodes.

in awe

The reason I got curious about that number is that every time I use the network, I am impressed by a real‐genius design decision (and I don’t get to say that very often). It makes all the difference, when using the network in anger.

All these nearly‐ten thousand nodes are identified by a two‑digit number. Not the four (or more future‐proof, five) one would expect. All the nodes are simply numbered 1 through 99, and then they start at one again. And shorter is much better:

cycling route signage with direction for node 02 source: recreatieschap westfriesland

Two digits is much faster to read and write down. It is easier to memorise, short‐term. It is instant to compare and confirm. Remember, most of these actions are performed while riding a bike at a nice cruising speed.

but…

Pushing through this two‑digit design must have been asking for trouble. Most of us can just imagine the bike‐shedding: ‘what if cyclists really need to be able to uniquely identify a node in the whole nation?’ Or: ‘will cyclists get confused by these repeating numbers?’

This older cycling signpost system has a five‑digit identification number:

a clycling signpost showing directions to nearby villages and towns source: dirk de baan

This number takes several steps to process. Two‑digit numbers are humane numbers. They exploit that way‐finding is a very local activity—although one can cover 130km a day on a bike.

whatchamacallit?

Wrapping up, the cycling network is a distributed network:

three graphs: a centralised, a decentralised and a distributed network source: j4n

All nodes are equal and so are all routes. Cyclists route themselves. In that way the network works quite like… the internet.

We could call it the democratic network, because it treats everyone as equals. Or we could call it the liberal network (that would be very dutch). Or—in a post‐modern way—we could call it the atomised network.

I simply call it the bicycle route network of the Netherlands.

a vista over dutch fields with a calf and two cyclists

September 12, 2016

Art on display at the Bandelier Visitor Center

As part of the advertising for next month's Los Alamos Artists Studio Tour (October 15 & 16), the Bandelier Visitor Center in White Rock has a display case set up, and I have two pieces in it.

[my art on display at Bandelier]

The Velociraptor on the left and the hummingbird at right in front of the sweater are mine. (Sorry about the reflections in the photo -- the light in the Visitor Center is tricky.)

The turtle at front center is my mentor David Trujillo's, and I'm pretty sure the rabbit at far left is from Richard Swenson.

The lemurs just right of center are some of Heather Ward's fabulous scratchboard work. You may think of scratchboard as a kids' toy (I know I used to), but Heather turns it into an amazing medium for wildlife art. I'm lucky enough to get to share her studio for the art tour: we didn't have a critical mass of artists in White Rock, just two of us, so we're borrowing space in Los Alamos for the tour.

September 09, 2016

Click Hooks

After being asked about what I like about Click hooks I thought it would be nice to write up a little bit of the why behind them in a blog post. The precursor to this story is that I told Colin Watson that he was wrong to build hooks like this; he kindly corrected me and helped me fix my code to match but I still wasn't convinced. Now today I see some of the wisdom in the Click hook design and I'm happy to share it.

The standard way to think about hooks is as a way to react to changes to the system. If a new application is installed then the hook gets information about the application and responds to the new data. This is how most libraries work, providing signals about the data that they maintain, and we apply that same logic to thinking about filesystem hooks. But filesystem hooks are different because the coherent state is harder to query. In your library you might respond to the signal for a few things, but in many code paths the chances are you'll just go through the list of original objects to do operations. With filesystem hooks that complete state is almost never used, only the caches that are created by the hooks themselves.

Click hooks work by creating a directory of symbolic links that matches the current state of the system, and then asking you to ensure your cache matches that state of the system. This seems inefficient because you have to determine which parts of your cache need to change, which get removed and which get added. But it results in better software because your software, including your hooks, has errors in it. I'm sorry to be the first one to tell you, but there are bugs. If your software is 99% correct, there is still something it is doing wrong. When you have delta updates that update the cache, that error compounds and never gets completely corrected with each update, because the complete state is never examined. So slowly the quality of your cache gets worse, not awful, but worse. By transferring the current system state to the cache each time, you get the error rate of your software in the cache, but you don't get the compounded error rate of each delta. This adds up.

The design of the hooks system in Click might feel wrong as you start to implement one, but I think that after you create a few hooks you'll find there is wisdom in it. And as you use other hook systems in other platforms think about checking the system state to ensure you're always creating the best cache possible, even if the hook system there didn't force you to do it.

September 08, 2016

Watch this person use Excel for an hour

Joel Spolsky, of Stack Overflow, Trello, and Fog Creek, did an internal presentation where he just walked through how he uses Microsoft Excel for about an hour.

It’s riveting for two reasons.

First, I learned a bunch of techniques that I didn’t know existed (transpose! named values! oh my!). Unfortunately, many of those don’t apply to Google Spreadsheets, which is worth using due to the simple and powerful collaboration tools. A few of the techniques are universal to spreadsheets, though.

Second, he’s good at it. There is something compelling about watching someone with deep skill and knowledge do their work, regardless of what it is. In the same way, I can enjoy watching a skilled musician perform regardless of my interest and taste in their musical genre.

This style of presentation, featuring a simple tour of the just-beyond-basic features, is a great way to share with co-workers. I’ve learned a ton from watching Stephen use Photoshop, and I got hooked on split-panes in iTerm after watching Malena screen-share in an unrelated presentation.

September 07, 2016

darktable 2.0.6 released

we're proud to announce the sixth bugfix release for the 2.0 series of darktable, 2.0.6!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.0.6.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

2368c1865221032061645342ba8c00bcd6d224e9829a55bc610e6cb67de738c1  darktable-2.0.6.tar.xz
8376ab1bb74f4a25998ff1a7f03c8498b57064bf27700c9af53a7356e5a2ee1e  darktable-2.0.6.dmg

and the changelog as compared to 2.0.5 can be found below.
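
a quick way to check a download against the published value before unpacking (shown for the tar.xz; the same idea works for the dmg on a system with a sha256 tool):

echo "2368c1865221032061645342ba8c00bcd6d224e9829a55bc610e6cb67de738c1  darktable-2.0.6.tar.xz" | sha256sum -c -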

New Features

  • Jpeg format writer: use libexiv2 to write metadata, like with other formats
  • Accept non-mosaiced raw files with 4 channels, assume they are RGBA (alpha channel is ignored)

Bugfixes

  • Once again, fix for yet another gtk theming regression...
  • OpenCL: properly discard CPU-based OpenCL devices. Fixes crashes on startup with some broken OpenCL implementations like pocl.
  • darktable-cli: do not even try to open display, we don't need it.
  • Rawspeed: NikonDecoder: stop accepting generic camera entries. Fixes multitude of Nikon raw loading issues.
  • OpenCL: fix border handling in crop&rotate module
  • Hotpixels iop: make it actually work for X-Trans
  • Clipping IOP: scale width of gray crop path with zoom level
  • One more fixup to canon lens name reading from exif
  • Fixup Bayer pattern for Olympus SP570UZ
  • Fix internal build issue: do not assume that Perl's @INC contains '.'

Base Support

  • Canon EOS-1D X Mark II
  • Canon EOS 1300D
  • Canon EOS Kiss X80
  • Canon EOS Rebel T6
  • Canon EOS M10
  • Canon PowerShot G7 X Mark II
  • Canon PowerShot G9 X
  • Fujifilm X-T2
  • GITUP GIT2 action camera
  • Panasonic DMC-FZ18 (16:9, 3:2)
  • Panasonic DMC-FZ50 (16:9, 3:2)
  • Pentax K-1
  • Sony DSLR-A380
  • Sony ILCE-6300
  • Nikon D500
  • Some other whitelevel fixups for some other Nikon cameras (in particular, mostly for 12-bit and not compressed raws)

White Balance Presets

  • Canon EOS-1D X Mark II
  • Canon EOS 1300D
  • Canon EOS Kiss X80
  • Canon EOS Rebel T6
  • Canon EOS M10
  • Canon PowerShot G7 X Mark II
  • Fujifilm X-T10
  • Sony ILCE-6300

Translations Updates

  • Slovak

Design sprints and healthcare

With the help of a few of my co-workers, I've written about a new design sprint process we've been using at silverorange, and how it applies in healthcare organizations. It started as a post on our silverorange blog, but was pulled into GV's Sprint Stories publication (thanks to John Zeratsky).

If you love design processes and healthcare (and who doesn't), read the article: Running a design sprint in a healthcare organization

On Surplus

“We as human beings find a way to waste most surpluses that technology hands to us.”

—Stewart Butterfield of Slack speaking on The Ezra Klein Show podcast.

He also makes a good analogy between our difficulty managing the new ability to communicate with anyone/anytime and the difficulty of dealing with the abundance of easy/cheap calories available to many of us.

September 06, 2016

The Taos Earthships (and a lovely sunset)

We drove up to Taos today to see the Earthships.

[Taos Earthships] Earthships are sustainable, completely off-the-grid houses built of adobe and recycled materials. That was pretty much all I knew about them, except that they were weird looking; I'd driven by on the highway a few times (they're on highway 64 just west of the beautiful Rio Grande Gorge Bridge) but never stopped and paid the $7 admission for the self-guided tour.

[Earthship construction] Seeing them up close was fun. The walls are made of old tires packed with dirt, then covered with adobe. The result is quite strong, though like all adobe structures it requires regular maintenance if you don't want it to melt away. For non load bearing walls, they pack adobe around old recycled bottles or cans.

The houses have a passive solar design, with big windows along one side that make a greenhouse for growing food and freshening the air, as well as collecting warmth in cold weather. Solar panels provide power -- supposedly along with windmills, but I didn't see any windmills in operation, and the ones they showed in photos looked too tiny to offer much help. To help make the most of the solar power, the house is wired for DC, and all the lighting, water pumps and so forth run off low voltage DC. There's even a special DC refrigerator. They do include an AC inverter for appliances like televisions and computer equipment that can't run directly off DC.

Water is supposedly self sustaining too, though I don't see how that could work in drought years. As long as there's enough rainfall, water runs off the roof into a cistern and is used for drinking, bathing etc., after which it's run through filters and then pumped into the greenhouse. Waste water from the greenhouse is used for flushing toilets, after which it finally goes to the septic tank.

All very cool. We're in a house now that makes us very happy (and has excellent passive solar, though we do plan to add solar panels and a greywater system some day) but if I was building a house, I'd be all over this.

We also discovered an excellent way to get there without getting stuck in traffic-clogged Taos (it's a lovely town, but you really don't want to go near there on a holiday, or a weekend ... or any other time when people might be visiting). There's a road from Pilar that crosses the Rio Grande then ascends up to the mesa high above the river, continuing up to highway 64 right near the earthships. We'd been a little way up that road once, on a petroglyph-viewing hike, but never all the way through. The map said it was dirt from the Rio all the way up to 64, and we were in the Corolla, since the Rav4's battery started misbehaving a few days ago and we haven't replaced it yet.

So we were hesitant. But the nice folks at the Rio Grande Gorge visitor center at Pilar assured us that the dirt section ended at the top of the mesa and any car could make it ("it gets bumpy -- a New Mexico massage! You'll get to the top very relaxed"). They were right: the Corolla made it with no difficulty and it was a much faster route than going through Taos.

[Nice sunset clouds in White Rock] We got home just in time for the rouladen I'd left cooking in the crockpot, and then finished dinner just in time for a great sunset sky.

A few more photos: Earthships (and a great sunset).

September 04, 2016

From the Community Vol. 1


From the Community Vol. 1

Welcome to the first installment of From the Community, a (hopefully) quarterly blog post to highlight a few of the things our community members have been doing!

Rapid Photo Downloader Process Model

@damonlynch has a great write up of Rapid Photo Downloader’s process model. Rapid Photo Downloader is built using Python, so if you’re looking for a good way to add threads to your Python program, this write up has some good information for you. Check it out!

rpd process model

Community-built Software downloads page

Free Software development tends to move at a pretty good pace, so there is always something new to try out! Not all of the new things warrant a new release, but our community steps up and builds the software so that others can use and test it! Instead of random links to dropboxes and such, we’ve created a Community-built Software page to help centralize things and make it easy for our users to find and download the freshest builds of software from our great community members. Keep in mind that support may be limited for these builds and they’re considered testing, so quality may vary, but if you covet the newest, shiniest things, this is the place for you!

Glitch art filters coming to G’MIC

G’MIC will be getting some cool glitch art filters in 1.7.6. @thething is interested in glitch art and requested some new filters in G’MIC, and @David_Tschumperle delivered very quickly!

You can flip blocks:

GMIC block flipping

and warp your images:

GMIC image warping

An Alternative to Watermarking

Watermarking is ugly and takes focus away from your image. Why not try adding an attribution bar to your images? In this post, @patdavid lays out how to add a bar underneath your image with your name, the image title, and a little logo. @David_Tschumperle followed that effort up with an alternate implementation using G’MIC instead of ImageMagick. Lastly, @vato rolled the ImageMagick version into a bash script with the necessary parameters exposed as variables at the beginning of the script.
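As a rough illustration of the ImageMagick route (a sketch only, not the exact command from any of those posts; filenames, sizes, and text are placeholders):

 # render a one-line attribution bar and append it below the photo
 # (adjust the bar width to match your image width)
 convert photo.jpg \
   \( -size 1200x60 -background white -fill black -gravity west \
      caption:'Image Title, by Your Name' \) \
   -append photo-attributed.jpg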

Here is an example image by @Morgan_Hardwood:

attribution bar example

Help Author a Tutorial for Beginners

Finally, we’re still working on our beginner article to help new users navigate the myriad of free software photography tools that are out there. If you have ideas, or better yet, want to author a bit of content with our community, please join and help out! The post is community wiki and has complete revision control, so don’t be afraid to jump in and contribute!

September 02, 2016

Fedora 25 and Additional Software Sources

I was asked to produce a checklist for applications that we want to show up in GNOME Software in Fedora 25. In this post I’ll refer to applications as graphical programs, rather than other system add-on components like drivers and codecs (which the next post will talk about). There is a big checklist, which really is the bare minimum that the distributor has to provide so that the application is listed correctly. If any of these points is causing problems or is confusing, please let me know and I’ll do my best to help.

So, these things really have to be done:

  • Verify that you ship a .desktop file for each built application, and that these keys exist: Name, Comment, Icon, Categories, Keywords and Exec, and that desktop-file-validate correctly validates the file (a combined validation sketch follows this list).
  • Verify that a PNG (with transparent background) or SVG icon is installed in /usr/share/icons, /usr/share/icons/hicolor/*/apps/*, or /usr/share/${app_name}/icons/* and is at least 64×64 in size.
  • At least one valid AppData file with the suffix .appdata.xml must be installed into /usr/share/appdata with an <id> that matches the name of the .desktop file, e.g. gimp.appdata.xml. Ideally the names of both the desktop file and the AppData file should be reverse DNS, e.g. com.hughski.ColorHug.desktop rather than colorhug-client.desktop, although this isn’t critically important.
  • Include several 16:9 aspect screenshots in the AppData file along with a compelling translated description made up of multiple paragraphs. Make sure you follow the style guide, which can be tested using appstream-util validate foo.appdata.xml
  • Make sure that there are not two applications installed with one package; in this case split up the package so that there are multiple subpackages or mark one of the .desktop files as NoDisplay=true. Make sure the application-subpackages depend on any -common subpackage and deal with upgrades (perhaps using a metapackage) if you’ve shipped the application before.
  • Make sure your application is visible in the example.xml.gz file when running appstream-builder on the binary rpm(s).
  • Make sure the AppStream metadata is regenerated when the application is updated in the repo, for more details see an entire blog post on this
  • Ensure that enabled_metadata=1 is set in the .repo file. This means that PackageKit will automatically download just the application metadata even when the repository is disabled.
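As a rough, combined illustration of the checks above (a sketch only; the file names, paths, and directories are placeholders, and the appstream-builder flags are from memory rather than from this checklist):

 # validate the desktop file and the AppData file
 desktop-file-validate /usr/share/applications/gimp.desktop
 appstream-util validate /usr/share/appdata/gimp.appdata.xml

 # run appstream-builder over the binary rpm(s) and check the app shows up
 appstream-builder --packages-dir=./rpms --output-dir=./metadata
 zcat ./metadata/*.xml.gz | grep '<id>'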

August 30, 2016

Back from Krita Sprint 2016

Last week, I spent 4 days at the Krita Sprint in Deventer, where several contributors gathered to discuss the current hot topics, draw and hack together.

You can read a global report of the event on krita.org news.

On my side, besides meeting old and new friends, and discussing animation, brushes and vector stuff, I made three commits:
  • replace some duplicate icons by aliases in qrc files
  • update the default workspaces
  • add a new “Eraser Switch Opacity” feature (this one is on a separate branch for now)

I also filed new tasks on phabricator for two feature requests to improve some color and animation workflow:

https://phabricator.kde.org/T3542

https://phabricator.kde.org/T3543

Once again, I feel it’s been a great and productive meeting for everyone. A lot of cool things are ready for the next Krita version, which is exciting! Many thanks to KDE e.V. for the travel support, and to the Krita Foundation for hosting the event and providing accommodation and food.

August 29, 2016

Happy Porting!

Last year, I wrote about how library authors should pretty darn well never ever make their users spend time on “porting”. Porting is always a waste of time. No matter how important the library author thinks his newly fashionable way of doing stuff is, it is never ever as important as the time porting takes away from the application author’s real mission: the work on their applications. I care foremost about my users; I expect a library author to care about their users, i.e., people like me.

So, today I was surprised by Goodbye, Q_FOREACH by Marc Mutz. (Well known for his quixotic crusade to de-Qt Qt.)

Well, fuck.

Marc, none, not a single one of all of the reasons you want to deprecate Q_FOREACH is a reason I care even a little bit about. It's going to be deprecated? Well, that's a decision, and a dumb one. It doesn't work on std containers, QVarLengthArray or C arrays? I don't use it on those. It adds 100 bytes of text size? Piffle. It makes it hard to reason about the loop for you? I don't care.

What I do care is the 1559 places where we use Q_FOREACH in Krita. Porting this will take weeks.

Marc, I hope that you will have a patch ready for us on phabricator soon: you can add it to this project and keep iterating until you've fixed all the bugs.

Happy porting, Marc!

Come into the real world and learn how well this let’s-deprecate-and-let-the-poor-shmuck-port-their-code attitude works out.

August 26, 2016

More map file conversions: ESRI Shapefiles and GeoJSON

I recently wrote about Translating track files between mapping formats like GPX, KML, KMZ and UTM. But there's one common mapping format that keeps coming up that's hard to handle using free software, and tricky to translate to other formats: ESRI shapefiles.

ArcGIS shapefiles are crazy. Typically they come as an archive that includes many different files, with the same base name but different extensions: filename.sbn, filename.shx, filename.cpg, filename.sbx, filename.dbf, filename.shp, filename.prj, and so forth. Which of these are important and which aren't?

To be honest, I don't know. I found this description in my searches: "A shape file map consists of the geometry (.shp), the spatial index (.shx), the attribute table (.dbf) and the projection metadata file (.prj)." Poking around, I found that most of the interesting metadata (trail name, description, type, access restrictions and so on) was in the .dbf file.

You can convert the whole mess into other formats using the ogr2ogr program. On Debian it's part of the gdal-bin package. Pass it the .shp filename, and it will look in the same directory for files with the same basename and other shapefile-related extensions. For instance, to convert to KML:

 ogr2ogr -f KML output.kml input.shp

Unfortunately, most of the metadata -- comments on trail conditions and access restrictions that were in the .dbf file -- didn't make it into the KML.

GPX was even worse. ogr2ogr knows how to convert directly to GPX, but that printed a lot of errors like "Field of name 'foo' is not supported in GPX schema. Use GPX_USE_EXTENSIONS creation option to allow use of the <extensions> element." So I tried ogr2ogr -f "GPX" -dsco GPX_USE_EXTENSIONS=YES output.gpx input.shp but that just led to more errors. It did produce a GPX file, but it had almost no useful data in it, far less than the KML did. I got a better GPX file by using ogr2ogr to convert to KML, then using gpsbabel to convert that KML to GPX.
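In other words, a two-step pipeline along these lines (filenames are placeholders):

 # shapefile -> KML with ogr2ogr, then KML -> GPX with gpsbabel
 ogr2ogr -f KML trails.kml trails.shp
 gpsbabel -i kml -f trails.kml -o gpx -F trails.gpx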

Use GeoJSON instead to preserve the metadata

But there is a better way: GeoJSON.

ogr2ogr -f "GeoJSON" -t_srs crs:84 output.geojson input.shp

That preserved most, maybe all, of the metadata in the .dbf file and gave me a nicely formatted file. The only problem was that I didn't have any programs that could read GeoJSON ...

[PyTopo showing metadata from GeoJSON converted from a shapefile]

But JSON is a nice straightforward format, easy to read and easy to parse, and it took surprisingly little work to add GeoJSON parsing to PyTopo. Now, at least, I have a way to view the maps converted from shapefiles, click on a trail and see the metadata from the original shapefile.


August 25, 2016

Summer Talks, PurpleEgg

I recently gave talks at Flock in Krakow and GUADEC in Karlsruhe:

  • Flock: “What’s Fedora’s Alternative to vi httpd.conf” (video; slides: PDF, ODP)
  • GUADEC: “Reworking the desktop distribution” (video; slides: PDF, ODP)

The topics were different but related: The Flock talk talked about how to make things better for a developer using Fedora Workstation as their development workstation, while the GUADEC talk was about the work we are doing to move Fedora to a model where the OS is immutable and separate from applications. A shared idea of the two talks is that your workstation is not your development environment. Installing development tools, language runtimes, and header files as part of your base operating system implies that every project you are developing wants the same development environment, and that simply is not the case.

At both talks, I demo’ed a small project I’ve been working on with the codename PurpleEgg (I didn’t have that codename yet at Flock – the talk instead talks about “NewTerm” and “fedenv”.) PurpleEgg is about easily creating containerized environments dedicated to a project, and about integrating those projects into the desktop user interface in a natural, slick way.

The command line client to PurpleEgg is called pegg:

[otaylor@localhost ~]$ pegg create django mydjangosite
[otaylor@localhost ~]$ cd ~/Projects/mydjangosite
[otaylor@localhost mydjangosite]$  pegg shell
[[mydjangosite]]$ python manage.py runserver
August 24, 2016 - 19:11:36
Django version 1.9.8, using settings 'mydjangosite.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

The “pegg create” step did the following:

  • Created a directory ~/Projects/mydjangosite
  • Created a file pegg.yaml with the following contents:
base: fedora:24
packages:
- python3-virtualenv
- python3-django
  • Created a Docker image that is the Fedora 24 base image plus the specified packages
  • Created a venv/ directory in the specified directory and initialized a virtual environment there
  • Ran ‘django-admin startproject’ to create the standard Django project

“pegg shell” did the following:

  • Checked to see if the Docker image needed updating
  • Ran a bash prompt inside the Docker image with a customized prompt
  • Activated the virtual environment

The end result is that, without changing the configuration of the host machine at all, in a few simple commands we got to a place where we can work on a Django project just as it is documented upstream.

But with the PurpleEgg application installed, you get more: you get search results in the GNOME Activities Overview for your projects, and when you activate a search result, you see a window like:

PurpleEgg-screenshot

We have a terminal interface specialized for our project:

  • We already have the pegg environment activated
  • New tabs also open within that environment
  • The prompt is uncluttered, with relevant information moved to the header bar
  • If the project is checked into Git, the header bar also tracks the Git branch

There’s a fair bit more that could be done: a GUI for creating and importing projects as in GNOME Builder, GUI integration for Vagrant and Docker, configuring frequently used commands in pegg.yaml, etc.

At the most basic, the idea is that server-side development is terminal-centric and also somewhat specialized – different languages and frameworks have different ways of doing things. PurpleEgg embraces working like that, but adds just enough conventions so that we can make things better for the developer – just because the developer wants a terminal doesn’t mean that all we can give them is a big pile of terminals.

PurpleEgg codedump is here. Not warrantied to be fit for any purpose.


August 24, 2016

Getting S3 Statistics using S3stat

I’ve been using Amazon S3 as a CDN for the LVFS metadata for a few weeks now. It’s been working really well and we’ve shifted a huge number of files in that time already. One thing that made me very anxious was the bill that I was going to get sent by Amazon, as it’s kinda hard to work out the total when you’re serving cough millions of small files rather than a few large files to a few people. I also needed to keep track of which files were being downloaded for various reasons and the Amazon tools make this needlessly tricky.

I signed up for the free trial of S3stat and so far I’ve been pleasantly surprised. It seems to do a really good job of graphing the spend per day and also allowing me to drill down into any areas that need attention, e.g. looking at the list of 404 codes various people are causing. It was fairly easy to set up, although it did take a couple of days to start processing logs (which is all explained in the setup). Amazon really should be providing something similar.

Screenshot from 2016-08-24 11-29-51

For people providing less than 200,000 hits per day it’s only $10, which seems pretty reasonable. For my use case (bazillions of small files) it rises to a little-harder-to-justify $50/month.

I can’t justify the $50/month for the LVFS, but luckily for me they have a Cheap Bastard Plan (their words, not mine!) which swaps a bit of advertising for a free unlimited license. Sounds like a fair swap, and means it’s available for a lot of projects where $600/yr is better spent elsewhere.

Devo Firmware Updating

Does anybody have a Devo RC transmitter I can borrow for a few weeks? I need model 6, 6S, 7E, 8, 8S, 10, 12, 12S, F7 or F12E — it doesn’t actually have to work, I just need the firmware upload feature for testing various things. Please reshare/repost if you’re in any UK RC groups that could help. Thanks!

August 18, 2016

Updating Firmware on 8Bitdo Game Controllers

I’ve spent a few days adding support for upgrading the firmware of the various wireless 8Bitdo controllers into fwupd. In my opinion, the 8Bitdo hardware is very well made and reasonably priced, and also really good retro fun.

Although they use a custom file format for firmware, and also use a custom flashing protocol (seriously hardware people, just use DFU!) it was quite straightforward to integrate into fwupd. I’ve created a few things to make this all work:

  • a small libebitdo library in fwupd
  • a small ebitdo-tool binary that talks to the device and can flash a vendor supplied .dat file
  • an ebitdo fwupd provider that uses libebitdo to flash the device
  • a firmware repo that contains all the extra metadata for the LVFS

I guess I need to thank the guys at 8Bitdo; after asking a huge number of questions they open sourced their OS-X and Windows flashing tools, and also allowed me to distribute the firmware binary on the LVFS. Doing both of those things made it easy to support the hardware.

Screenshot from 2016-08-18 10-36-56

The result of all this is that you can now do fwupd update when the game-pad is plugged in using the USB cable (not just connected via bluetooth) and the firmware will be updated to the latest version. Updates will show in GNOME Software, and the world is one step closer to being awesome.
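Roughly, the command-line flow looks something like this (a sketch; the CLI front-end is fwupdmgr on most systems, and device and firmware names will vary):

 # refresh metadata from the LVFS, list devices, then apply any pending updates
 fwupdmgr refresh
 fwupdmgr get-devices
 fwupdmgr update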

August 17, 2016

Making New Map Tracks with Google Earth

A few days ago I wrote about track files in maps, specifically Translating track files between mapping formats. I promised to follow up with information on how to create new tracks.

For instance, I have some scans of old maps from the 60s and 70s showing the trails in the local neighborhood. There's no newer version. (In many cases, the trails have disappeared from lack of use -- no one knows where they're supposed to be even though they're legally trails where you're allowed to walk.) I wanted a way to turn trails from the old map into GPX tracks.

My first thought was to trace the old PDF map. A lot of web searching found a grand total of one page that talks about that: How to convert image of map into vector format?. It involves using GIMP to make an image containing just black lines on a white background, saving as uncompressed TIFF, then using a series of commands in GRASS. I made a start on that, but it was looking like it might be a big job that way. Since a lot of the old trails are still visible as faint traces in satellite photos, I decided to investigate tracing satellite photos in a map editor first, before trying the GRASS method.

But finding a working open source map editor turns out to be basically impossible. (Opportunity alert: it actually wouldn't be that hard to add that to PyTopo. Some day I'll try that, but now I was trying to solve a problem and hoping not to get sidetracked.)

The only open source map editor I've found is called Viking, and it's terrible. The user interface is complicated and poorly documented, and I could input only two or three trail segments before it crashed and I had to restart. Saving often, I did build up part of the trail network that way, but it was so slow and tedious restoring between crashes that I gave up.

OpenStreetMap has several editors available, and some of them are quite good, but they're (quite understandably) oriented toward defining roads that you're going to upload to the OpenStreetMap world map. I do that for real trails that I've walked myself, but it doesn't seem appropriate for historical paths between houses, some of which are now fenced off and few of which I've actually tried walking yet.

Editing a track in Google Earth

In the end, the only reasonable map editor I found was Google Earth -- free as in beer, not speech. It's actually quite a good track editor once I figured out how to use it -- the documentation is sketchy and no one who writes about it tells you the important parts, which were, for me:

Click on "My Places" in the sidebar before starting, assuming you'll want to keep these tracks around.

Right-click on My Places and choose Add->Folder if you're going to be creating more than one path. That way you can have a single KML file (Google Earth creates KML/KMZ, not GPX) with all your tracks together.

Move and zoom the map to where you can see the starting point for your path.

Click the "Add Path" button in the toolbar. This brings up a dialog where you can name the path and choose a color that will stand out against the map. Do not hit Return after typing the name -- that will immediately dismiss the dialog and take you out of path editing mode, leaving you with an empty named object in your sidebar. If you forget, like I kept doing, you'll have to right-click it and choose Properties to get back into editing mode.

Iconify, shade or do whatever your window manager allows to get that large, intrusive dialog out of the way of the map you're trying to edit. Shade worked well for me in Openbox.

Click on the starting point for your path. If you forgot to move the map so that this point is visible, you're out of luck: there's no way I've found to move the map at this point. (You might expect something like dragging with the middle mouse button, but you'd be wrong.) Do not in any circumstances be tempted to drag with the left button to move the map: this will draw lots of path points.

If you added points you don't want -- for instance, if you dragged on the map trying to move it -- Ctrl-Z doesn't undo, and there's no Undo in the menus, but Delete removes previous points. Whew.

Once you've started adding points, you can move the map using the arrow keys on your keyboard. And you can always zoom with the mousewheel.

When you finish one path, click OK in its properties dialog to end it.

Save periodically: click on the folder you created in My Places and choose Save Place As... Google Earth is a lot less crashy than Viking, but I have seen crashes.

When you're done for the day, be sure to File->Save->Save My Places. Google Earth apparently doesn't do this automatically; I was forever being confused why it didn't remember things I had done, and why every time I started it it would give me syntax errors on My Places saying it was about to correct the problem, then the next time I'd get the exact same error. Save My Places finally fixed that, so I guess it's something we're expected to do now and then in Google Earth.

Once I'd learned those tricks, the map-making went fairly quickly. I had intended only to trace a few trails then stop for the night, but when I realized I was more than halfway through I decided to push through, and ended up with a nice set of KML tracks which I converted to GPX and loaded onto my phone. Now I'm ready to explore.

August 16, 2016

Design Team Fedora Activity Day (FAD) Event Report

Fedora Design Team Logo

design team fad attendees portrait

From left to right: Mo Duffy, Marie Nordin, Masha Leonova, Chris Roberts, Radhika Kolathumani, Sirko Kemter (photo credit: Sirko Kemter)

Two weekends ago now, we had a 2-day Fedora Activity Day (heh, a 2-day day) for the Fedora Design Team. We had three main goals for this FAD, although one of them we didn’t cover (:-() :

  • Hold a one-day badges hackfest – the full event report is available for this event – we have wanted to do an outreach activity for some time so this was a great start.
  • Work out design team logistics – some of our members have changed location causing some meeting time issues despite a few different attempts to work around them. We had a few other issues to tackle too (list to come later in this post.) We were able to work through all points and come up with solutions except for one (we ran out of time.)
  • Usability test / brainstorm on the Design Team Hub on Fedora Hubs – so the plan was that the Design Team Hub would be nearly ready for the Flock demo the next week, but this wasn’t exactly the case so we couldn’t test it. With all of the last-minute prep for the workshop event, we didn’t have any time to have much discussion on hubs, either. We did, however, discuss some related hub needs in going through our own workflow in our team logistics discussion, so we did hit on this briefly.

So I’m going to cover the topics discussed aside from the workshop (which already has a full event report), but first I want to talk a little bit about the logistics of planning a FAD and how that worked out first since I totally nerded out on that aspect and learned a lot I want to share. Then, I’ll talk about our design team discussion, the conclusions we reached, and the loose ends we need to tie up still.

Logistics

I had already planned an earlier Design Team FAD for January 2015, so I wasn’t totally new to the process. There were definitely challenges though.

Budget

First, we requested funding from the Fedora Council in late March. We found out 6 weeks later (early May, a little less than 3 months before the event) that we had funding approval, although the details about how that would work weren’t solidified until less than 4 weeks before the event.

Happily, I assumed it’d be approved, filed a request to use the Red Hat Westford facility for the event. There were two types of tickets I had to file for this – a GWS Special Event Request and a GWS Meeting Support Request. The special event request was the first one – I filed that on June 1 (2 months ahead) and it was approved June 21 (took about 3 weeks.) Then, on 7/25 the week before the event, I filed the meeting support request to have the room arranged in classroom style as well as open up the wall between the two medium-sized conference rooms so we had one big room for the community event. I also set up a meeting with the A/V support guy, Malcolm, to get a quick run through of how to get that working. It was good I went ahead and filed the initial request since it took 3 weeks to go through.

The reason it took a while to work out the details on the budget was because we scheduled the event for right before Flock, which meant coordinating / sharing budgets. We did this both to save money and also to make sure we could discuss design-team related Flock stuff before heading to Flock. While this saved some money ultimately, IMHO the complications weren’t worth it:

  • We had to wait for the Flock talk proposals to be reviewed and processed before we knew which FAD attendees would also be funded for Flock, which delayed things.
  • Since things were delayed from that, we ended up missing on some great flight pricing, which meant Ryan Lerch wasn’t able to come 🙁
  • To be able to afford the attendees we had with less than 4 weeks to go, we had to do this weird flight nesting trick jzb figured out. Basically, we booked home<=>BOS round trip tickets, then BOS<=>KRK round trip tickets. This meant Sirko had to fly to Boston after Flock before he could head home to PNH, but it saved a *ton* of money.
fad budget spreadsheet screenshot

behold, our budget

Another complication: we maxed out my corporate card limit before everything was booked. 🙂 I now have a credit increase, so hopefully next event this won’t happen!

The biggest positive budget-wise for this event was the venue cost – free. 🙂 Red Hat Westford kindly hosted us.

I filed the expense reports for the event this past week, and although the entire event was under budget, we had some unanticipated costs as well as a small overage in food budget:

  • Our original food budget was $660. We spent $685.28. We were $25.28 over. (Pretty good IMHO. I used an online pizza calculator to figure out budget for the community event and was overly generous in how much pizza people would consume. 🙂 )
  • We spent $185.83 in unanticipated costs. This included tolls (it costs $3.50 to leave Logan Airport), parking fees, gas, and hotel taxes ($90 in hotel taxes!)

Lessons Learned:

  • Sharing budget with other events slows your timeline down – proceed with caution!
  • Co-location with another event is a better way to share costs logistically.
  • Pizza calculators are a good tool for figuring out food budget. 🙂
  • Budget in a tank of gas if you’ve got a rental.
  • Figure out what tolls you’ll encounter. Oh and PAY CASH, in the US EzPass with a rental car is a ripoff.
  • Ask the hotel for price estimates including taxes/fees.

Transportation

I rented a minivan to get folks between Westford and the airport as well as between the hotel and the office. I carpool with my husband to work, so I picked it up near the Red Hat Westford office and set up the booking so I was able to leave it at Logan Airport after the last airport run.

Our chariot. I cropped him out of the portrait. Sorry, Toyota Sienna! It has nice pickup. I still am never buying a minivan ever, even if I have more kids. Never minivan, never!

With international flights and folks coming in on different nights, and the fact I actually live much closer to the airport than the hotel up in Westford (1 hour apart) – by the time the FAD started, I was really worn down as I had 3 nights in a row leading up to the FAD where I wasn’t getting home until midnight at the earliest and I had logged many hours driving, particularly in brutal Boston rush hour traffic. For dropoffs, it was not as bad as everybody left on the same day and there were only 2 airport trips then. Still – not getting home before my kids went to bed and my lack of sleep was a definite strain on my family.

So we had a free venue, but at a cost. For future FAD event planners, I would recommend either trying to get flights coming in on the same day as much as possible and/or sharing the load of airport pickups. Even better, would be to hold the event closer to the airport, but this wasn’t an option for us because of the cost that would entail and the fact we have such a geographically-distributed team.

The transportation situation – those time estimates aren’t rush hour yet!

One thing that went very well that is common sense but bears repeating anyway – if you’re picking folks up from the airport, get their phone #’s ahead of time. Having folks phone numbers made pickup logistics waaaaay easier. If you have international numbers, look up how to dial them ahead of time. 🙂

Lessons Learned:

  • Try hard to cluster flights when possible to make for less pickups if the distance between airport / venue is great.
  • If possible, share responsibility for driving with someone to spread the load.
  • Closer to the airport logistically means spending less time in a car and less road trips, leaving more time for hacking.
  • Don’t burn yourself out before the event even starts. 🙂
  • Collect the phone numbers of everyone you’re picking up, or provide them some way of contacting you just in case you can’t find each other.
We’re dispersed… (original list of attendees’ locations or origin)

Food

This one went pretty smoothly. Westford has a lot of restaurants; actually, we have a lot more restaurants in Westford with vegetarian options than we did less than 2 years ago at the last Design Team FAD.

For the community event, the invite mentioned that we’d be providing pizzas. We had some special dietary requests from that, so I looked up a pizza place that could accommodate, would deliver, and had good ratings. There were two that met the criteria so I went with the one that had the best ratings.

Since the Fedora design team FAD participants were leading / teaching the session, I went over the menu with them the day before the community event, took their orders for non-pizza sandwiches/salads, and called the order in right then and there. (I worried placing the order too far in advance would mean it’d get lost in the shuffle. Lesson learned from the 2015 FAD where Panera forgot our order!) Delivery was a must, because of the ease of not having to go and pick it up.

For snacks, we stopped by a local supermarket either before or after lunch on the first day and grabbed whatever appealed to us. Total bill: $30, and we had tons of drinks and yummy snacks (including fresh blueberries) that tided us over for the whole weekend and were gone by the end.

We were pretty casual with other meals. Folks at the hotel had breakfast at the hotel, which meant fewer receipts to track for me. We just drove to places close by for lunch and dinner, and being a local + vegetarian meant we had options for everybody. I agonized way too much about lunch and dinner last FAD (although there were fewer options then.) Keeping it casual worked well this time; one night we tried to have dinner at a local Indian place and found out they had recently been evicted! (Luckily, there was a good Indian place right down the road.)

Lessons Learned:

  • For large orders, call in the day before and not so far in advance that the restaurant forgets your order.
  • Supermarkets are a cheap way to get a snack supply. Making it a group run ensures everyone has something they can enjoy.
  • Having a local with dietary restrictions can help make sure food options are available for everyone.

Okay, enough for logistics nerdery. Let’s move on to the meat here!

Design Team Planning

We spent most of the first day on Fedora Design team planning with a bit of logistics work for the workshop the following day. First, we started by opening up an Inkscape session up on the projector and calling out the stuff we wanted to work on. It ended up like this:

Screenshot of FAD brainstorming session from Inkscape

But let’s break it down because I suspect you had to be there to get this. Our high-level list of things to discuss broke down like this:

Discussion Topics

  • Newcomers
    – how can we better welcome newcomers to the team?
  • Pagure migration
    – Fedora Trac is going to be sunset in favor of Pagure. How will we manage this transition?
  • Meeting times
    – we’ve been struggling to find a meeting time that works for everyone because we are so dispersed. What to do?
  • Status of our ticket queue
    – namely, our ticket system doesn’t have enough tickets for newbies to take!
  • Badges
    – conversely, we have SO MANY badge tickets needing artwork. How to manage?
  • Distro-related design
    – we need to create release artwork every release, but there’s no tickets for it so we end up forgetting about it. What to do?
  • Commops Thread
    – this point refers to Justin’s design-team list post about ambassadors working with the design team – how can we better work with ambassadors to get nice swag out without compromising the Fedora brand?

Let’s dive into each one.

Newcomers

This is the only topic I don’t think we fully explored. We did have some ideas here though:

  • Fedora Hubs will definitely help provide a single landing page for newcomers to see what we’re working on in one place to get a feel for the projects we have going on – right now our work is scattered. Having a badge mission for joining the design team should make for a better onboarding experience – we need to work out what badges would be on that path though. One of the pain points we talked about was how incoming newbies go straight to design team members instead of looking at the ticket queue, which makes the process more manual and thus slower. We’re hoping Hubs can make it more self-service.
  • We had the idea to have something like whatcanidoforfedora.org, but specifically for the design team. One of the things we talked about is having it serve up tickets tagged with a ‘newbie’ tag from both the design-team and badges ticket systems, and have the tickets displayed by category. (E.g., are you interested in UX? Here’s a UX ticket.) The tricky part – our data wouldn’t be static as whatcanidoforfedora.org’s is – we wouldn’t want to present people with a ticket that was already assigned, for example. We’d only want to present tickets that were open and unassigned. Chris did quite a bit of investigation into this and seems to think it might be possible to modify asknot-ng to support this.
  • A Fedora Hubs widget that integrated with team-specific asknot instances was a natural idea that came out of this.
  • We do regular ticket triage during meetings. We decided as part of that effort, we should tag tickets with a difficulty level so it’s easier to find tickets for newbies, and maybe even try to have regular contributors avoid the easy ones to leave them open for newbies. We had some discussion about ticket difficulty level scales that we didn’t get to finish – at one point we were thinking:
    • Easy (1 point) (e.g., a simple text-replacement badge.)
    • Moderate (3 points) (e.g., a fresh badge concept with new illustration work.)
    • Difficult / Complex (10 points) (e.g., a minor UX project or a full badge series of 4-5 badges with original artwork.)

    Or something like this, and have a required number of points. This is a discussion we really need to finish.

  • Membership aspects we talked about – what level of work do we want to require for team membership? Once a member, how much work do we want to require (if any) to stay “current?” How long should a membership be inactive before we retire it? (Not to take anything away from someone – but it’s handy to have a list of active members and a handle on how many active folks there are to try to delegate tasks and plan things like this FAD or meetups at Flock.) No answers, but a lot of hard questions. This came up naturally thinking about membership from the beginning to the end.
  • We talked about potentially clearing inactive accounts out of the design-team group and doing this regularly. (By inactive, we mean FAS account has not been logged into from any Fedora service for ~1 year.)
  • Have a formal mentor process, so as folks sign up to join the team, they are assigned a mentor, similar to the ambassador process. Right now, we’re a bit all over the place. It’d be nice for incoming folks to have one person to contact (and this has worked well in the past, e.g., Mo mentoring interns, and Marie mentoring new badgers.)

Pagure migration

We talked about what features we really needed to be able to migrate:

  • The ability to export the data, since we use our trac tickets for design asset storage. We found out this is being worked on, so this concern is somewhat allayed.
  • The ability to generate reports for ticket review in meetings. (We rely on the custom reports Chris and Paul Frields created for us at the last FAD.) We talked through this and decided we wanted a few things:
    • We’d like to be able to do an “anti-tag” in pagure. So we’d want to view a list of tickets that did not have the “triage” tag on them, so we could go through them and triage them, and add a ‘triage’ tag as we completed triage. That would help us keep track of what new tickets needed to be assessed and which had already been evaluated.
    • We’d like some time-based automation of tag application, but don’t know how that would work. For example, right now if a reporter hasn’t responded for 4 weeks, we classify that ticket as “stalled.” So we’d want tickets where the reporter hasn’t responded in 4 weeks to be marked as “stalled.” Similarly, tickets that haven’t had activity for 2 weeks or more are considered “aging”, so we’d like an “aging” tag applied to them. So on and so forth.
    • We need attachment support for tickets – we discovered this was being worked on too. Currently pagure supports PNG image attachments but we have a wider range of asset types we need to attach – PDFs, Scribus SLAs, SVGs, etc. We tested these out in pagure and they didn’t work.

We agreed we need to follow up with pingou on our needs and our ideas here to see if any of these RFEs (or other solutions) could be worked out in Pagure. We were pretty excited that work was already happening on some of the items we thought would help meet our needs in being able to migrate over.

We don’t have enough tickets! (AKA we are too awesome)

We tend to grab tickets and finish them (or at least hold on to them) pretty quickly on the design team these days. This makes it harder for newbies to find things to work on to meet our membership requirement. We talked about a couple of things here, in addition to related topics already covered in the newbie discussion summary:

  • We need to be more strict about removing assignees from tickets with inactivity. If we’ve pinged the ticket owner twice (which should happen in at least a 4 week period of inactivity from the assignee) and had no response, we should unapologetically just reopen up the ticket for others to take. No hard feelings! Would be even better if we could automate this….
  • We should fill out the ticket queue with our regular release tasks. Which leads to another topic…

Distro-related design (Release Artwork)

Our meetings are very ticket-driven, so we don’t end up covering release artwork during them. Which leads to a scramble… we’ve been getting it done, but it’d be nice for it to involve less stress!

Ideally, we’d like some kind of solution that would automatically create tickets in our system for each work item per release once a new release cycle begins… but we don’t want to create a new system for trac since we’ll be migrating to pagure anyway. So we’ll create these tickets manually now, and hope to automate this once we’ve migrated to pagure.

We also reviewed our release deliverables and talked through each. A to-do item that came up here: We should talk to Jan Kurik and have him remove the splash tasks (we don’t create those splash screens anymore) and add social media banner tasks (we’ve started getting requests for these.) We should also drop CD, DVD, and DVD for multi, and DVD for workstation (transcribing this now I wonder if it’s right.) We also should talk to bproffitt about which social media Fedora users the most and what kind of banners we should create for those accounts for each release. So in summary: we need to drop some unnecessary items from the release schedule that we don’t create anymore, and we should do more research about social media banners and have them added to the schedule.

Another thing I forgot when I initially posted this – we need some kind of entropy / inspiration to keep our default wallpapers going. For the past few releases, we’ve gotten a lot of positive feedback and very few complaints, but we need more inspiration. An idea we came up with was to have a design-team internal ‘theme scheme’ where we go through the letters of the alphabet and draw some inspiration from an innovator related to that letter. We haven’t picked one for F25 yet and need to soon!

Finally, we talked about wallpapers. We’d like for the Fedora supplemental wallpapers to be installed by default – they tend to be popular but many users also don’t know they are there. We thought a good solution might be to propose an internship (maybe Outreachy, maybe GSoC?) to revive an old desktop team idea of wallpaper channels, and we could configure the Fedora supplementals to be part of the channel by default and maybe Nuancier could serve them up.

Badges

We never seem to have time to talk through the badges tickets during our meetings, and there are an awful lot of them. We talked about starting to hold a monthly badge meeting to see if this will address it, with the same kind of ticket triage approach we use for the main design team meetings. Overall, Marie and Maria have been doing a great job mentoring baby badgers!

Commops Thread

We also covered Justin’s design-team list post about ambassadors working with the design team, particularly about swag as that tends to be a hot-button issue. For reasons inexplicable to me except for perhaps that I am spaz, I stopped taking notes in Inkscape and started using the whiteboard on this one:

photo of whiteboard (contents described below)

Swag discussion whiteboard (with wifi password scrubbed 🙂 )

We had a few issues we were looking to address here:

  • Sometimes swag is produced too cheaply and doesn’t come out correctly. For example, recently Fedora DVDs were produced with sleeves where Fedora blue came out… black. (For visuals of some good examples compared to bad examples with these sorts of mistakes, check this out.)
  • Sometimes ambassadors don’t understand which types of files to send to printers – they grab a small size bitmap off of the wiki without asking for the print-ready version and things come out pixelated or distorted.
  • Sometimes files are used that don’t have a layer for die cutting – which results in sticker sheets with no cuts that you have to manually cut out with scissors (a waste!)
  • Sometimes files are sent to the printer with no bleeds – and the printer ends up going into the file and manipulating it, sometimes with disastrous results. If a design team member had been involved, they would have known to set the bleeds before sending to the printer.
  • Generally, printers sometimes have no clue, and without a designer working with them they make guesses that are oftentimes wrong and result in poor output.
  • Different regions have different price points and quality per type of item. For example, DVD production in Cambodia is very, very expensive – but print and embroidery items are high-quality and cheap.

Overall, we had concerns about money getting wasted on swag when – with a little coordination – we could produce higher-quality products and save money.

We brainstormed some ideas that we thought might help:

  • Swag quality oversight – Goods produced too cheaply hurt our brand. Could we come up with an approved vendor list, so we have some assurances of a base level of quality? This can be an open process, so we can add additional vendors at any time, but we’ll need some samples of work before they can be approved, and keep logs of our experience with them.
  • Swag design oversight – Ambassadors enjoy their autonomy. We recognize that’s important, but at a certain point sometimes overenthusiastic folks without design knowledge can end up spending a lot of money on items that don’t reflect our brand too well. We thought about setting some kind of cap – if you’re spending more than say $100 on swag, you need design team signoff – a designer will work with you to produce print-ready files and talk to the vendor to make sure everything comes out with a base quality level.
  • Control regional differences – Could we suggest one base swag producer per ambassador region, and indicate what types of products we use them for by default? Per product, we should have a base quality level requirement – e.g., DVDs cannot be burnt – they must be pressed.
Okay, I hope this is a fair summary of the discussion. I feel like we could have an entire FAD that focused just on swag. I think we had a lot of ideas here, and it could use more discussion too.

Meeting Times

We talked about meeting times. There is no way to get a meeting time that works for everybody, so we decided to split into North America / EMEA / LATAM, and APAC regions. Sirko, Ryan Lerch, and Yogi will lead the APAC time (as of yet to be determined.) And the North America / LATAM / EMEA time will be the traditional design team time – Thursdays at 10 AM ET. Each region will meet on a rotating basis, so one week it’ll be region #1, the next region #2. Each region will meet at least 2x a month then.

How do we stay coordinated? We came up with a cool idea – the first item of each meeting will be to review the meetbot logs from the other region’s last meeting. That way, we’ll be able to keep up with what the other region is doing, and any questions/concerns we have, they’ll see when they review our minutes the next week. We haven’t had a chance to test this out yet, but I’m curious to see how it works in practice!

Fun

Chris’ flight left on Sunday morning, but everybody else had flights over to Poland which left in the evening, so before we went to the airport, we spent some time exploring Boston. First we went to the Isabella Stewart Gardner Museum, as it was a rainy day. (We’d wanted to do a walking tour.) We had lunch at Boloco, a cool Boston burrito-chain, then the sun decided to come out so we found a parking spot by Long Wharf and I gave everybody a walking tour of Quincy Market and the North End. Then we headed to the airport and said our goodbyes. 🙂

From left to right: Mo, Masha, Marie, Radhika

What’s Next?

There’s a lot of little action items embedded here. We covered a lot of ground, but we have a lot more work to do! OK, it’s taken me two weeks to get to this point and I don’t want this blog post delayed anymore, so I’m just going for it and posting now. 🙂 Enjoy!

August 14, 2016

Translating track files between mapping formats

I use map tracks quite a bit. On my Android phone, I use OsmAnd, an excellent open-source mapping tool that can download map data generated from free OpenStreetMap, then display the maps offline, so I can use them in places where there's no cellphone signal (like nearly any hiking trail). At my computer, I never found a decent open-source mapping program, so I wrote my own, PyTopo, which downloads tiles from OpenStreetMap.

In OsmAnd, I record tracks from all my hikes, upload the GPX files, and view them in PyTopo. But it's nice to go the other way, too, and take tracks or waypoints from other people or from the web and view them in my own mapping programs, or use them to find the trails when hiking.

Translating between KML, KMZ and GPX

Both OsmAnd and PyTopo can show Garmin track files in the GPX format. PyTopo can also show KML and KMZ files, Google's more complicated mapping format, but OsmAnd can't. A lot of track files are distributed in Google formats, and I find I have to translate them fairly often -- for instance, lists of trails or lists of waypoints on a new hike I plan to do may be distributed as KML or KMZ.

The command-line gpsbabel program does a fine job translating KML to GPX. But I find its syntax hard to remember, so I wrote a shell alias:

kml2gpx () {
        gpsbabel -i kml -f $1 -o gpx -F $1:t:r.gpx
}

so I can just type kml2gpx file.kml and it will create a file.gpx for me.

More often, people distribute KMZ files, because they're smaller. They're just gzipped KML files, so the shell alias is only a little bit longer:

kmz2gpx () {
        kmlfile=/tmp/$1:t:r.kml
        gunzip -c $1 > $kmlfile
        gpsbabel -i kml -f $kmlfile -o gpx -F $kmlfile:t:r.gpx
}

Of course, if you ever have a need to go from GPX to KML, you can reverse the gpsbabel arguments appropriately; and if you need KMZ, run gzip afterward.
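For instance, a rough sketch of the reverse direction in the same style as the aliases above (untested here, and gpx2kmz is just a convenient name):

gpx2kmz () {
        # GPX -> KML with gpsbabel, then gzip the KML into a .kmz
        kmlfile=/tmp/$1:t:r.kml
        gpsbabel -i gpx -f $1 -o kml -F $kmlfile
        gzip -c $kmlfile > $1:t:r.kmz
}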

UTM coordinates

A couple of people I know use a different format, called UTM, which stands for Universal Transverse Mercator, for waypoints, and there are some secret lists of interesting local features passed around in that format.

It's a strange system. Instead of using latitude and longitude like most world mapping coordinate systems, UTM breaks the world into 60 longitudinal zones. UTM coordinates don't usually specify their zone (at least, none of the ones I've been given ever have), so if someone gives you a UTM coordinate, you need to know what zone you're in before you can translate it to a latitude and longitude. Then a pair of UTM coordinates specifies easting and northing, which tell you where you are inside the zone. Wikipedia has a map of UTM zones.

Note that UTM isn't a file format: it's just a way of specifying two (really three, if you count the zone) coordinates. So if you're given a list of UTM coordinate pairs, gpsbabel doesn't have a ready-made way to translate them into a GPX file. Fortunately, it allows a "universal CSV" (comma separated values) format, where the first line specifies which field goes where. So you can define a UTM UniCSV format that looks like this:

name,utm_z,utm_e,utm_n,comment
Trailhead,13,0395145,3966291,Trailhead on Buckman Rd
Sierra Club TH,13,0396210,3966597,Alternate trailhead in the arroyo

then translate it like this:
gpsbabel -i unicsv -f filename.csv -o gpx -F filename.gpx

I (and all the UTM coordinates I've had to deal with) are in zone 13, so that's what I used for that example and I hardwired that into my alias, but if you're near a zone boundary, you'll need to figure out which zone to use for each coordinate.

    I also know someone who tends to send me single UTM coordinate pairs, because that's what she has her Garmin configured to show her. For instance, "We'll be using the trailhead at 0395145 3966291". This happened often enough, and I got tired of looking up the UTM UniCSV format every time, that I made another shell function just for that.

    utm2gpx () {
            # build a two-line UniCSV file in /tmp, then convert it with gpsbabel
            unicsv=`mktemp /tmp/point-XXXXX.csv`
            gpxfile=$unicsv:r.gpx
            echo "name,utm_z,utm_e,utm_n,comment" >> $unicsv
            printf "Point,13,%s,%s,point" $1 $2 >> $unicsv
            gpsbabel -i unicsv -f $unicsv -o gpx -F $gpxfile
            echo Created $gpxfile
    }
    
    So I can say utm2gpx 0395145 3966291, pasting the two coordinates from her email, and get a nice GPX file that I can push to my phone.

    What if all you have is a printed map, or a scan of an old map from the pre-digital days? That's part 2, which I'll post in a few days.

    August 11, 2016

    LVFS has a new CDN

    Now that we’re hitting cough Cough COUGH [1] million users a month the LVFS is getting slower and slower. It’s really just a flask app that’s handling the admin panel, and then apache is serving a set of small files to a lot of people. As switching to an HA server is taking longer than I hoped [2], I’m in the process of switching to using S3 as a CDN to take the load off. I’ve pushed a commit that changes the default in the fwupd.conf file. If you want to help test this, you can do a substitution of secure-lvfs.rhcloud.com to s3.amazonaws.com/lvfsbucket in /etc/fwupd.conf, although the old CDN will be running for a long time indeed for compatibility.
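
    If you want to try it, something like this should do that substitution (run as root; the path is the stock fwupd.conf mentioned above):

    sudo sed -i 's|secure-lvfs.rhcloud.com|s3.amazonaws.com/lvfsbucket|' /etc/fwupd.conf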

    [1] Various vendors have sworn me to secrecy
    [2] I can’t believe GPGME and python-gpg is the best we have…

    Flatpak cross-compilation support

    A couple of weeks ago, I hinted at a presentation that I wanted to do during this year's GUADEC, as a Lightning talk.

    Unfortunately, I didn't get a chance to finish the work that I set out to do, encountering a couple of bugs that set me back. Hopefully this will get resolved post-GUADEC, so you can expect some announcements later on in the year.

    At least one of the tasks I set to do worked out, and was promptly obsoleted by a nicer solution. Let's dive in.

    How to compile for a different architecture

    There are four possible solutions for compiling programs for a different architecture:

    • Native compilation: get a machine of that architecture, install your development packages, and compile. This is nice when you have fast machines with plenty of RAM to compile on, usually developer boards, not so good when you target low-power devices.
    • Cross-compilation: install a version of GCC and friends that runs on your machine's architecture, but produces binaries for your target one. This is usually fast, but you won't be able to run the binaries created, so you might end up with some data created from a different set of options, and you won't be able to run the generated test suite.
    • Virtual machine: run a virtual machine for the target architecture, install an OS, and build everything inside it. This is slower than cross-compilation, but avoids the problems you'd see in cross-compilation.
    • QEMU user-space emulation: the final option, and one that's used more and more, mixes the last two solutions.

    Using the QEMU user-space emulator

    If you want to run just the one command, you'd do something like:

    qemu-arm-static myarmbinary

    Easy enough, but hardly something you want to try when compiling a whole application, with library dependencies. This is where binfmt support in Linux comes into play. Register the ELF format for your target with that user-space emulator, and you can run myarmbinary without any commands before it.
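
    As a rough sketch of what that registration looks like for 32-bit ARM (the magic/mask bytes here are the usual ones for little-endian ARM ELF binaries; in practice qemu's qemu-binfmt-conf.sh or your distribution's binfmt setup writes this for you):

    # run as root: register qemu-arm-static as the binfmt_misc interpreter for ARM ELF binaries
    printf '%s' ':qemu-arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm-static:' > /proc/sys/fs/binfmt_misc/register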

    One thing to note, though, is that this won't work as easily if the QEMU user-space emulator and the target executable are built as dynamic executables: QEMU will need to find the libraries for your architecture, usually x86-64, to launch itself, and the emulated binary will also need to find its libraries.

    To solve that first problem, there are QEMU static binaries available in a number of distributions (Fedora support is coming). For the second one, the easiest approach is not to mix native and target libraries on the same filesystem, for example by using a chroot or container. Hmm, container you say.

    Running QEmu user-space emulator in a container

    We have our statically compiled QEMU, and a filesystem with our target binaries, and we've switched into that root filesystem. Well, you try to run anything, and you get a bunch of errors. The problem is that there is a single binfmt configuration for the kernel, shared between the normal OS and anything inside a container or chroot, so the registered emulator path isn't accessible from inside the container.

    The Flatpak hack

    This commit for Flatpak works around the problem. The binary for the emulator needs to have the right path, so it can be found within the chroot'ed environment, and it needs to be copied there so it is accessible too, which is what this patch will do for you.

    Follow the instructions in the commit, and test it out with this Flatpak script for GNU Hello.

    $ TARGET=arm ./build.sh
    [...]
    $ ls org.gnu.hello.arm.xdgapp
    918k org.gnu.hello.arm.xdgapp

    Ready to install on your device!

    The proper way

    The above solution was built before it looked like the "proper way" was going to find its way into the upstream kernel. This should hopefully land in the upcoming 4.8 kernel.

    Instead of launching a separate copy of the emulator binary for each non-native invocation, this patchset allows the kernel to keep the emulator binary open (the binfmt_misc "F" flag), so it doesn't need to be copied into the container.

    In short

    With the work being done on Fedora's static QEmu user-space emulators, and the kernel feature that will land, we should be able to have a nice tickbox in Builder to build for any of the targets supported by QEmu.

    Get cross-compiling!

    Adding suggestions to AppData files

    An oft-requested feature is to show suggestions for other apps to install. This is useful if the apps are part of a larger suite of applications, or if the apps are in some way complementary to each other. A good example might be that we want to recommend libreoffice-writer when the user is looking at the details of (or perhaps has just installed) libreoffice-calc.

    At the moment we haven’t got any UI using this kind of data, as, simply put, there isn’t much data to use. Using the ODRS I can kinda correlate things that the same people look at (i.e. user A left reviews for B and C, so B and C are possibly related) but it’s not as good as actual upstream information.

    Those familiar with my history will be unsurprised: AppData to the rescue! By adding lines like this in the foo.appdata.xml file you can provide some information to the software center:

    <suggests>
    <id>libreoffice-draw.desktop</id>
    <id>libreoffice-calc.desktop</id>
    </suggests>

    You don’t have to specify the parent app (e.g. libreoffice-writer.desktop in this case), and <id> is the only tag that’s accepted inside <suggests>. If a suggested <id> isn’t found in the AppStream metadata then it’s just ignored, so it’s quite safe to add things that might not be in stable distros.
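
    If you want to sanity-check the file after adding the tag, the appstream-util tool from appstream-glib can validate it (assuming you have it installed):

    appstream-util validate foo.appdata.xml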

    If enough upstreams do this then we can look at what UI makes sense. If you make use of this feature, please let me know and I can make sure we discuss the use-case in the design discussions.

    August 10, 2016

    Double Rainbow, with Hummingbirds

    A couple of days ago we had a spectacular afternoon double rainbow. I was out planting grama grass seeds, hoping to take advantage of a rainy week, but I cut the planting short to run up and get my camera.

    [Double rainbow]

    [Hummingbirds and rainbow] And then after shooting rainbow shots with the fisheye lens, it occurred to me that I could switch to the zoom and take some hummingbird shots with the rainbow in the background. How often do you get a chance to do that? (Not to mention a great excuse not to go back to planting grass seeds.)

    (Actually, here, it isn't all that uncommon since we get a lot of afternoon rainbows. But it's the first time I thought of trying it.)

    Focus is always chancy when you're standing next to the feeder, waiting for birds to fly by and shooting whatever you can. Next time maybe I'll have time to set up a tripod and remote shutter release. But I was pretty happy with what I got.

    Photos: Double rainbow, with hummingbirds.

    August 09, 2016

    compressing dynamic range with exposure fusion

    modern sensors capture an astonishing dynamic range, notably some sony sensors, or canons with magic lantern's dual iso feature.

    this is in a range where the image has to be processed carefully to display it in pleasing ways on a monitor, let alone the limited dynamic range of print media.

    example images

    use graduated density filter to brighten foreground

    original

    graduated density filter

    using the graduated density iop works well in this case since the horizon here is more or less straight, so we can easily mask it out with a simple gradient in the graduated density module. now what if the objects can't be masked out so easily?

    more complex example

    this image needed to be substantially underexposed in order not to clip the interesting highlight detail in the clouds.

    original image, then extreme settings in the shadows and highlights iop (heavy fringing despite bilateral filter used for smoothing). also note how the shadow detail is still very dark. third one is tone mapped (drago) and fourth is default darktable processing with +6ev exposure.

    original

    shadows/highlights

    tonemap

    +6ev

    tone mapping also flattens a lot of detail, which is why this version already has some local contrast enhancement applied to it. this can quickly result in unnatural results. similar applies to colour saturation (for reasons of good taste, no link to examples at this point..).

    the last image in the set is just a regular default base curve pushed by six stops using the exposure module. the green colours of the grass look much more natural than in any of the other approaches taken so far (including graduated density filters, these need some fiddling in the colour saturation..). unfortunately we lose a lot of detail in the highlights (to say the least).

    this can be observed for most images, here is another example (original, then pushed +6ev):

    original

    +6ev

    exposure fusion

    this is precisely the motivation behind the great paper entitled Exposure Fusion: what if we develop the image a couple of times, each time exposing for a different feature (highlights, mid-tones, shadows), and then merge the results where they look best?

    this has been available in software for a while in enfuse
    even with a gui called EnfuseGUI.
    we now have this feature in darktable, too.

    find the new fusion combo box in the darktable base curve module:

    gui

    options are to merge the image with itself two or three times. each extra copy of the image will be boosted by an additional three stops (+3ev and +6ev), then the base curve will be applied to it and the laplacian pyramids of the resulting images will be merged.
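
    outside of darktable the same idea can be tried with enfuse on exported developments of the raw. a rough sketch, assuming three exports of the same image at 0, +3 and +6 ev (the file names are just placeholders):

    enfuse -o fused.tif img_0ev.tif img_plus3ev.tif img_plus6ev.tif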

    results

    this is a list of input images and the corresponding result of exposure fusion:

    0ev,+3ev,+6ev:

    original

    0ev,+3ev,+6ev

    0ev,+3ev:

    original

    0ev,+3ev

    0ev,+3ev,+6ev:

    original

    0ev,+3ev,+6ev

    0ev,+3ev,+6ev:

    original

    fusion

    0ev,+3ev:

    original

    fusion

    conclusion

    image from beginning:

    fusion

    note that the feature is currently merged to git master, but unreleased.


    Blog backlog, Post 4, Headset fixes for Dell machines

    At the bottom of the release notes for GNOME 3.20, you might have seen the line:
    If you plug in an audio device (such as a headset, headphones or microphone) and it cannot be identified, you will now be asked what kind of device it is. This addresses an issue that prevented headsets and microphones being used on many Dell computers.
    Before I start explaining what this does, as a picture is worth a thousand words:


    This selection dialogue is one you will get on some laptops and desktop machines when the hardware is not able to detect whether the plugged in device is headphones, a microphone, or a combination of both, probably because it doesn't have an impedance detection circuit to figure that out.

    This functionality was integrated into Unity's gnome-settings-daemon version a couple of years ago, written by David Henningsson.

    The code that existed for this functionality was completely independent, not using any of the facilities available in the media-keys plugin to handle volume keys, and it could probably have been split out as an external binary with very little effort.

    After a bit of to and fro, most of the sound backend functionality was merged into libgnome-volume-control, leaving just 2 entry points, one to signal that something was plugged into the jack, and another to select which type of device was plugged in, in response to the user selection. This means that the functionality should be easily implementable in other desktop environments that use libgnome-volume-control to interact with PulseAudio.

    Many thanks to David Henningsson for the original code, and his help integrating the functionality into GNOME, Bednet for providing hardware to test and maintain this functionality, and Allan, Florian and Rui for working on the UI notification part of the functionality, and wiring it all up after I abandoned them to go on holidays ;)

    August 07, 2016

    SIGGRAPH 2016 report

    Anaheim, 23 – 28 July 2016

    This year was the 25th anniversary of my SIGGRAPH membership (I am a proud member since ’91)! It was also my 18th visit in a row to the annual convention (since ’99). We didn’t have a booth at the trade show this year though. Expenses are so high! Since 2002 we have exhibited 7 times; we skipped years more often early on, but from 2011 onward we were there every year. The positive side of not exhibiting was that I finally had time and energy to have meetings and participate in other events.

    Friday 22 – Saturday 23: Toronto

    [Image: Ozzy character pose]

    But first: an unexpected last minute change in the planning. Originally I was going to Anaheim to also meet with the owners of Tangent Animation about their (near 100% Blender) feature film studio. Instead they suggested it would be much more practical to rebook my flight and have a day stopover in Toronto to see the studio and have more time to meet.

    I spent two half days with them, and I was really blown away by the work they do there. I saw the opening 10 minutes of their current feature film (“Run Ozzy Run”). The film is nearly finished, currently being processed for grading and sound. The character designs are adorable, the story is engaging and funny, and they pulled off surprisingly good quality animation and visuals – especially knowing it’s still a low budget project made with all the constraints associated with it. And they used Blender! Very impressive how they managed to make quite massive scenes work. They hired a good team of technical artists and developers to support them. Their Cycles coder is a former Mental-Ray engineer, who will become a frequent contributor to Cycles.

    I also had a sneak peek at the excellent concept art of the new feature that’s in development – more budget, and much more ambitious even. For that project they offered to invest substantially in Blender, and we spent the 2nd day outlining a deal. In short that is:

    • Tangent will sponsor two developers to work in Blender Institute on 2.8 targets (defined by us)
    • Tangent will sponsor one Cycles developer, either to work in Blender Institute or in Toronto.
    • All of these are full-time and decently paid positions, for at least 1 year. They can be effective in September.

    Sunday 24: SIGGRAPH Anaheim

    2 PM: Blender Birds of a Feather, community meeting

    As usual we started the meeting by giving everyone a short moment to say who they are and what they do with Blender (or want to see happen). This takes 25+ minutes! There were visitors from Boeing, BMW, Pixar, Autodesk, Microsoft, etc.

    The rest of the time I did my usual presentation (talk about who we are, what we did last year, and the plans for next year).

    You can download the pdf of the slides here.

    3:30 PM : Blender Birds of a Feather, Spotlight event

    Theory Animation’s David Andrade offered to organise this ‘open stage’ event, giving artists or developers 5 minutes of time to show the work they did with Blender. It was great to see this organised so well! There was a huge line-up, lasting 90 minutes even. Some highlights from my memory:

    • Theory Animation showed work they did for the famous TV show “Silicon Valley”. The hilarious “Pipey” animation is theirs.
    • Sean Kennedy is doing a lot of Blender vfx for tv series. Amazing work (can’t share here, sorry), and he gave a warm plea for more development attention for the Compositor in Blender.
    • Director Stephen Norrington (Blade, League of Extraordinary Gentlemen) is using Blender! He showed vfx work he did for a stop motion / puppet short film.
    • JT Nelson showed results of Martin Felke’s Blender Fracture Branch. Example.
    • Nimble Collective premiered their first “Animal Facts” short, The Chicken.

    Afterwards we went for drinks and food to one of the many bar/restaurants close by. (Well close, on the map it looked like 2 blocks, but in Anaheim these blocks were half a mile! Made the beer taste even better though :)

    Monday 25: the SIGGRAPH Animation Festival, Jury Award!

    Selfie with badge + ribbon

    Aside from all the interesting encounters you can have in LA (I met with people from Paramount Animation), the absolute highlight of Monday was picking up the Jury prize for Cosmos Laundromat. Still vividly remembering struggling with the basics of CG 25 years ago, I never thought I’d be cheered on and applauded by 1000+ people in the Siggraph Electronic Theater!

    Clearly the award is not just mine, it’s for director Mathieu Auvray and writer Esther Wouda, the team of artists and developers who worked on the film, and most of all for everyone who contributed to Blender and to Blender Cloud in one way or another.

    Wait… the surprises weren’t over for that day. I sneaked away from the festival screening and went to AMD’s launch party. I was pleasantly surprised to watch corporate VP Roy Taylor spend quite some time talking about Blender, ending with “We love Blender, we love the Blender Community!” AMD is very serious about focusing on 3D creators online, to serve the creative CG communities, of which Blender users are now one of the biggest. If AMD could win back the hearts of Blender artists…

    Theory Animation guys!

    After the event I met with Roy Taylor; he confirmed the support they already give to Blender developer Mike Erwin (to upgrade OpenGL). Roy said AMD is committed to help us in many more ways, so I asked for one more full-time Cycles coder. Deal! Support for a 1-year full-time developer on Cycles, to finish the ‘OpenCL split kernel’ project, is being processed now. I’ll be busy hiring people in the coming period!

    Later in the evening I met with several Blender artists. I handed the award over to them to show my appreciation. Big fun :)

    Tuesday 26 – Wednesday 27, SIGGRAPH tradeshow and meetings

    Not having a booth was a blessing (at least for once!). I could freely move around and plan the days with meetings, and had time to attend the activities outside of the trade show as well. Here’s a summary of activities and highlights:

    • Tradeshow impression
      This year’s show seemed a bit smaller than last year, but on both days it felt crowded in most places; the attendance was very good. The best highlights are still the presentations by artists showing their work at the larger booths such as Nvidia or Foundry. The visit was also worth it for an original Vive experience. Google’s Tango was there, but the marketing team failed to impress demoing it – 3D scanning the booth failed completely every time (don’t put TV screens on walls if you want to scan!).
    • Pixar USD launch lunch
      Pixar presented the official launch of the Universal Scene Description format, a set of formats with a software library to manage your entire pipeline. The USD design is very inviting for Blender to align well with – we already share some of the core design decisions, but USD is quite a bit more advanced. It will be interesting to see whether USD will be used for pipeline IO (file exchange) among applications as well.
    • Autodesk meeting
      Autodesk has appointed a director of open source strategy; he couldn’t attend but connected me with Marc Stevens and Chris Vienneau, executives in the M&E department. They also brought in Arnold’s creator Marcos Fajardo.
      Marcos expressed their interest in having Arnold support for Blender. We discussed the (legal, licensing) technicalities of this a bit more, but as long as they stick to data transport between the programs (like PRman and VRay do now using Blender’s render API) there’s no issue. With Marc and Chris I had a lengthy discussion about Autodesk’s (lack of) commitment to open source and openly accessible production pipelines. They said that Autodesk is changing their strategy though, and they will show this by actively sharing sources or participating in open projects as well. I invited them to publish the FBX spec doc (needs to get blessing from the board, but they’ll try) and to work with Pixar on getting the character module for USD fleshed out (make it work for Maya + Max, under an open license). The latter suggestion was met with quite some enthusiasm. That would make the whole FBX issue mostly go away.
    • Nvidia
      It was very cool to meet with Ross Cunniff, Technology Lead at NVIDIA. He is nice, down-to-earth and practical. With his connections it’ll be easier to get a regular seed of GTX cards to developers. I’ve asked for a handful of 1080s right away! Nvidia will also actively work on getting Blender Cycles files into the official benchmarking suites.
    • Massive Software
      David Andrade (Theory Animation) set up a meeting with me and industry legend Stephen Regelous, founder of Massive Software and the genius behind the epic Lord of the Rings battle scenes. Stephen said that at Massive user meetings there’s an increasing demand for Blender support. He explained to me how they do it; basically everything’s low poly and usually gets rendered in 1 pass! The Massive API has a hook into the render engine to generate the geometry on the fly, to prevent huge file or caching bottlenecks. In order to make this work for Blender Cycles, a similar hook would have to be written. They currently don’t have the engineers to do this, but they’d be happy to support someone for it.
    • Khronos
      I further attended the WebGL meeting (with demos by Blend4web team) and the Khronos party. Was big fun, a lot of Blender users and fans there! The Khronos initiative remains incredibly important – they are keeping the graphics standards open (like OpenGL, glTF) and make innovation available for everyone (WebGL and Vulkan).

    Friday 29, San Francisco and Bay Area

    Wednesday evening and Thursday I took my time driving the scenic route north to San Francisco. I wanted to meet some friends there (loyal Blender supporter David Jeske, director/layout artist Colin Levy, CG industry consultants Jon and Kathleen Peddie, Google engineer Keir Mierle) and visit two business contacts.

    • Nimble Collective
      Located in a lovely office in Mountain View (looks like it’s always sunny and pleasant there!), this startup is also heavily investing in Blender and using it for a couple of short film projects. I leave it to them to release the info on the films :) but it’s going to be amazingly good! I also had a demo of their platform, which is like a ‘virtual’ animation production workstation that you can use in a browser. The Blender demo on their platform felt very responsive, including fast Cycles renders.
      The visit ended with participating in their “weekly”. Just like the Blender Institute weekly! An encouraging and enthusiastic gathering to celebrate results and work that’s been done.
    • Netflix
      The technical department from Netflix contacted us a while ago; they were looking for high quality HDR content to do streaming and other tests. We then sent them the OpenEXR files of Cosmos Laundromat, which are unclipped, high resolution colour. Netflix took it to a specialist HDR grading company and they showed me the result – M I N D blowing! Really awesome to see how the dynamics of Cycles renders (like the hard morning light) work on a screen that allows a dynamic ‘more than white’ display. Cosmos Laundromat is now on Netflix, as one of the first HDR films.
      We then discussed how Netflix could do more with our work. Obviously they’re happy to share the graded HDR film, but they’re especially interested in getting more content – especially in 4k. A proposal for sponsoring our work is being evaluated internally now.

    Sunday 31 July, Back home

    I was gone for 9 days, with 24 hours spent in airplanes. But it was worth it :) The jetlag kicked in as usual and took a week to resolve. In the coming weeks there’s a lot of work waiting, especially setting up all the projects around Blender 2.8. A new design/planning doc on 2.8 is the first priority.

    Please feel invited to discuss the topics in our channels and talk to me in person or on IRC about Blender 2.8 and Cycles development work. Or send me a mail with feedback. That’s ton at blender.org, as usual.

    Ton Roosendaal
    August 7, 2016

    August 06, 2016

    Adding a Back button in Python Webkit-GTK

    I have a little browser script in Python, called quickbrowse, based on Python-Webkit-GTK. I use it for things like quickly calling up an anonymous window with full javascript and cookies, for when I hit a page that doesn't work with Firefox and privacy blocking; and as a quick solution for calling up HTML conversions of doc and pdf email attachments.

    Python-webkit comes with a simple browser as an example -- on Debian it's installed in /usr/share/doc/python-webkit/examples/browser.py. But it's very minimal, and lacks important basic features like command-line arguments. One of those basic features I've been meaning to add is Back and Forward buttons.

    Should be easy, right? Of course webkit has a go_back() method, so I just have to add a button and call that, right? Ha. It turned out to be a lot more difficult than I expected, and although I found a fair number of pages asking about it, I didn't find many working examples. So here's how to do it.

    Add a toolbar button

    In the WebToolbar class (derived from gtk.Toolbar): In __init__(), after initializing the parent class and before creating the location text entry (assuming you want your buttons left of the location bar), create the two buttons:

            backButton = gtk.ToolButton(gtk.STOCK_GO_BACK)
            backButton.connect("clicked", self.back_cb)
            self.insert(backButton, -1)
            backButton.show()
    
            forwardButton = gtk.ToolButton(gtk.STOCK_GO_FORWARD)
            forwardButton.connect("clicked", self.forward_cb)
            self.insert(forwardButton, -1)
            forwardButton.show()
    

    Now create those callbacks you just referenced:

        def back_cb(self, w):
            self.emit("go-back-requested")

        def forward_cb(self, w):
            self.emit("go-forward-requested")
    

    That's right, you can't just call go_back on the web view, because GtkToolbar doesn't know anything about the window containing it. All it can do is pass signals up the chain.

    But wait -- it can't even pass signals unless you define them. There's a __gsignals__ object defined at the beginning of the class that needs all its signals spelled out. In this case, what you need is

           "go-back-requested": (gobject.SIGNAL_RUN_FIRST,
                                  gobject.TYPE_NONE, ()),
           "go-forward-requested": (gobject.SIGNAL_RUN_FIRST,
                                  gobject.TYPE_NONE, ()),
    
    Now these signals will bubble up to the window containing the toolbar.

    Handle the signals in the containing window

    So now you have to handle those signals in the window. In WebBrowserWindow (derived from gtk.Window), in __init__ after creating the toolbar:

            toolbar.connect("go-back-requested", self.go_back_requested_cb,
                            self.content_tabs)
            toolbar.connect("go-forward-requested", self.go_forward_requested_cb,
                            self.content_tabs)
    

    And then of course you have to define those callbacks:

    def go_back_requested_cb (self, widget, content_pane):
        # Oops! What goes here?
    def go_forward_requested_cb (self, widget, content_pane):
        # Oops! What goes here?
    

    But whoops! What do we put there? It turns out that WebBrowserWindow has no better idea than WebToolbar did of where its content is or how to tell it to go back or forward. What it does have is a ContentPane (derived from gtk.Notebook), which is basically just a container with no exposed methods that have anything to do with web browsing.

    Get the BrowserView for the current tab

    Fortunately we can fix that. In ContentPane, you can get the current page (meaning the current browser tab, in this case); and each page has a child, which turns out to be a BrowserView. So you can add this function to ContentPane to help other classes get the current BrowserView:

        def current_view(self):
            return self.get_nth_page(self.get_current_page()).get_child()
    

    And now, using that, we can define those callbacks in WebBrowserWindow:

    def go_back_requested_cb (self, widget, content_pane):
        content_pane.current_view().go_back()
    def go_forward_requested_cb (self, widget, content_pane):
        content_pane.current_view().go_forward()
    

    Whew! That's a lot of steps for something I thought was going to be just adding two buttons and two callbacks.

    August 03, 2016

    The Fedora Design Team’s Inkscape/Badges Workshop!

    Fedora Design Team Logo

    This past weekend, the Fedora Design Team held an Inkscape and Fedora Badges workshop at Red Hat’s office in Westford, Massachusetts. (You can see our public announcement here.)

    Badges Workshop

    Why did the Fedora Design Team hold this event?

    At our January 2015 FAD, one of the major themes of what we wanted to do as a team was outreach: both to help teach Fedora and the FLOSS creative tool set as a platform for would-be future designers, and to bring more designers into our team. We planned to do a badges workshop at some future point to try to achieve that goal, and this workshop (which was part of a longer Design FAD event I’ll detail in another post) was it. We collectively feel that designing artwork for badges is a great “gateway contribution” for Fedora contributors because:

    • The badges artwork standards and process are extremely well-documented.
    • The artwork for a badge is a small, atomic unit of contribution that does not take up too much of a contributor’s time to create.
    • Badges individually touch on varying areas of the Fedora project, so by making a single badge you could learn (in a rather gentle way) how a particular aspect of the Fedora project works (as a first step towards learning more about Fedora.)
    • The process of creating badge artwork and submitting it from start to finish is achievable during a one-day event, and being able to walk away from such an event having submitted your first open source contribution is pretty motivating!

    This is the first event of this kind the Fedora Design team has held (and perhaps the first by any Fedora group?). We aimed for a general, local community audience rather than attaching this event to a larger technology-focused conference or release party. We explicitly wanted to bring folks not currently affiliated with Fedora or even the open source community into our world with this event.

    Preparing for the event

    Photo of event handouts

    There was a lot we had to do in order to prepare for this event. Here’s a rough breakdown:

    Marketing (AKA getting people to show up!)

    We wanted to outreach to folks in the general area of Red Hat’s Westford Office. Originally, we had wanted to have the event located closer to Boston and partner with a university, but for various reasons we needed to have this event in the summer – a poor time for recruiting university students. Red Hat Westford graciously offered us space for free, but without something like a university community, we weren’t sure how to go about advertising the event to get people to sign up.

    Here’s what we ended up doing:

    • We created an event page on EventBrite (free to use for free events.) That gave us a bit of marketing exposure – we got 2 signups from EventBrite referrals. The site also helped us with event logistics (see next section for more on that.)
    • We advertised the event on Red Hat’s Westford employee list – Red Hat has local office mailing lists for each office, so we advertised the event on there asking area employees to spread the word about the event to friends and family. We got many referrals this way.
    • We advertised the event on a public Westford community Facebook page – I don’t know about other areas, but in the Boston area, many of the individual towns have public town bulletin boards set up as Facebook groups, and event listings are allowed and even encouraged on many of these sites. I was able to get access to one of the more popular Westford groups and posted about our event there – first about a month out, then a reminder the week before. We received a number of referrals this way as well.

    Photo of the event

    Logistics

    We had to formally reserve the space, and also figure out how many people were coming so we knew how much and what kinds of food to order – among many other little logistical things. Here’s how we tackled that:

    • Booking the space – I filed a ticket with Red Hat’s Global Workplace Services group to book the space. We decided to open up 30 slots for the workshop, which required booking two conference rooms on the first floor of the office (generally considered the space we offer for public events) and also requesting those rooms be set up classroom-style with a partition opened up between them to make one large classroom. The GWS team was easy to work with and a huge help in making things run smoothly.
    • Managing headcount – As mentioned earlier, we set up an EventBrite page for the event, which allowed us to set up the 30 slots and allow people to sign up to reserve a slot in the class. This was extremely helpful in the days leading up to the event, because it provided me with a final head count for ordering food and also a way to communicate with attendees before the event (as registration requires providing an email address.) We had a last-minute cancellation of two slots, and we were able to push out information to the three channels we’d marketed the event to and get those slots filled the day before the event, so we had a full house on the day of the event.
    • Ordering food – I called the day before the event to order the food. We went with a local Italian place that did delivery and ordered pizzas and soda for the guests and sandwiches / salads for the instructors (I gathered instructor orders right before making the call.) We had a couple of attendees who had special dietary needs, so I made sure to order from a place that could accommodate.
    • Session video recording – During the event, we used BlueJeans to wirelessly project our slides to the projectors. Consequently, this also resulted in recordings being taken of the sessions. On my to-do list is to edit those down to just the useful bits, post them, and send the link to attendees.
    • Surveying attendees – After the event, Event Brite helpfully allowed us to send out a survey (via Survey Monkey) to the attendees to see how it went.
    • Making slides available – Several attendees asked for us to send out the slides we used (I just sent them out this afternoon, and have provided them here as well!)
    • Getting permission – I knew we were going to be writing up an event report like this, so I did get the permission/consent of everyone in the room before taking pictures and hitting record on the BlueJeans session.
    • Parking / Access – I realized too late that we probably should have provided parking information up front to attendees, but luckily it was pretty straightforward and we had plenty of spots up front. Radhika helpfully stood by the front entrance as attendees arrived to allow them in the front door and escort them to the classroom.
    • Audio/Video training – Red Hat somewhat recently got a new A/V system I wasn’t familiar with, and there are specific things you need to know about getting the two projectors in the two rooms in sync when the partition is open, so I was lucky to book a meeting with one of Red Hat’s extremely helpful media folks to meet with me the day before and teach me how to run the A/V system.


    Inkscape / Badges Prep Work

    We also needed to prepare for the sessions themselves, of course:

    • Working out an agenda – We talked about the agenda for the event on our mailing list as well as during team meetings, but the rough agenda was basically to offer an Inkscape install fest followed by a basic Inkscape class (mizmo), run through an Inkscape tutorial (gnokii), and then do a badges workshop (riecatnor & mleonova.) We’ll talk about how well this worked later in this post. 🙂
    • Prepare slides / talking points – riecatnor, mleonova, and myself prepped some slides for our sessions; gnokii prepared a tutorial.
    • Prepare handouts – You can see in one of the photos above that we provided attendees with handouts. There were two keyboard shortcut printouts – one for basic / most frequently used ones, the other a more extended / full list we found provided by Michael van der Nest. We also provided a help sheet on how to install Inkscape. We printed them the morning of and distributed them at each seat in the classroom.
    • Prepare badges – riecatnor and mleonova very carefully combed through open badge requests in need of artwork and put together a list of those most appropriate for newbies, filling in ideas for artwork concepts and tips/hints for the would-be badgers who’d pick up the tickets at the event. They also provided the list of ticket numbers for these badges on the whiteboard at the event.

    Marie explaining the anatomy of a badge

    The Agenda / Materials

    Here’s a rough outline of our agenda, with planned and actual times:

    Here’s the materials we used:

    As mentioned elsewhere in this post, we did record the sessions, but I’ve got to go through the recordings to see how usable they are and edit them down if they are. I’ll do another post if that’s the case with links to the videos.

    How did the event go?

    Unfortunately, despite our best efforts (and a massive amount of prep work,) I don’t think any of us would qualify the event as a home run. We ran into a number of challenges, some of our own (um, mine, actually!) making, some out of our control. That being said, thus far our survey results have been very positive – basically attendees agreed with our self-analysis and felt it was a good-to-very good, useful event that could have been even better with a few tweaks.

    graph showing attendees rated the presentation good-to-excellent

    The Good

    • Generally attendees enjoyed the sessions and found them useful. As you can see in the chart above, of 8 survey respondents, 2 thought it was excellent, 3 thought it was very good, and 3 thought it was good. I’ll talk more about the survey results later on, but enjoy this respondent’s quote: “I’m an Adobe person and I’ve never used other design softwares, so I’m happy I learned about a free open source software that will help me become more of an asset when I finish college and begin looking for a career.”
    • The event was sold out – interest in what we had to say and teach is high! We had all 30 slots filled over a week before the event; when we had 2 last-minute dropouts, we were able to quickly re-fill those slots. I don’t know if every single person who signed up attended, but we weren’t left with any extra seats in the room at the peak of attendance.
    • The A/V system worked well. We had a couple of mysterious drops from BlueJeans that led to some furious reconnecting to continue the presentation, but overall, our A/V setup worked well.
    • The food was good. There was something to eat for everyone, and it all arrived on time. For close to 40 people, it cost $190. This included 11 pizzas (9 large, 2 medium gluten free), 4 salads, 2 sandwiches, and 5 2-liter bottles of soda. (Roughly $5.30/person.) Maybe a silly point to make, but food is important too, especially since the event ran right through lunch (10 AM – 3 PM.)
    • We didn’t frighten newbies away (at least, not right away.) About half of the attendees came with Inkscape preinstalled, half didn’t. We divided them into different halves of the room. The non-preinstallers (who we classified as “newbies,”) stayed until a little past lunch, which I consider a victory – they were able to follow at least the first long session, stayed for food, and completed most of gnokii’s tutorial.
    • Inkscape worked great, even cross-platform. Inkscape worked like a champ – there were no catastrophic crashes and generally people seemed to enjoy using it. We had everyone installed by about 20 minutes into the first session – one OS X laptop had some issues due to some settings in the OS X control panel relating to XQuartz, but we were able to solve them. Everyone left the event with a working copy of Inkscape on their system! I would guesstimate we had about 1/3 OS X, 1/3 Windows, and 1/3 Linux machines (the latter RH employees + family mostly. 🙂 )
    • No hardware issues. We instructed attendees to bring their own hardware and all did, with the exception of one attendee who contacted me ahead of time – I was able to arrange to provide a loaner laptop for her. Some folks forgot to bring a computer mouse and I had enough available to lend.

    survey results about event length - too long

    The Bad

    • We ran too long. We originally planned the workshop to last from 10 AM to 2 PM. We actually ran until about 4 PM, although we had officially moved the end time to 3 PM with everyone in the room’s consent around 1:30. This is almost entirely my fault; I covered the Inkscape Bootcamp slides too slowly. We had a range of skill levels in the room, and while I was able to keep the newbies on board during my session, the more advanced folks were bored until gnokii ran his (much more advanced) tutorial. The survey results also provided evidence for this, as folks felt the event ran too long and some respondents felt it moved too slowly, others too fast.
    • We covered too much material. Going hand-in-hand with running too long, we also tried to do too much. We tried to provide instruction for everyone from the absolute beginner, to Adobe convert, to more experienced attendee, and lost folks along the way as the pacing and level of detail needed for each different audience is too different to pull off successfully in one event. In our post-event session, the Fedora Design Team members running the event agreed we should cut a lot of the basic Inkscape instruction and instead focus on badges as the conduit for more (perhaps one-on-one lab session style) Inkscape instruction to better focus the event.
    • We lost people after lunch. We lost about half of our attendees not long after lunch. I believe this is for a number of reasons, not the least of which is that we covered so much material to start that they simply needed to go decompress (one survey respondent: "I ended up having to leave before the badges part because my brain hurt from the button tutorial. Maybe don't do quite so many things next time?") Another interesting thing to note is that the half of the room that was less experienced (they didn’t come with Inkscape pre-installed and along the way tended to need more instructor help) is the half that pretty much cleared out, while the more experienced half of the room was still full by the official end of the event. This helps support the notion that the newbies were overwhelmed and the more experienced folks hungry for more information.
    • FAS account creation was painful. We should have given the Fedora admins a heads up that we’d be signing 30 folks up for FAS accounts all at the same time – we didn’t, oops! Luckily we got in touch via IRC, so folks were finally able to sign up for accounts without being blocked due to getting flagged as potential spammers. The general workflow for FAS account signup (as we all know) is really clunky and definitely made things more difficult than it needed to be.
    • We should have been more clear about the agenda / had slides available. This one came up multiple times on the survey – folks wanted a local copy of the slides / agenda at the event so that when they got lost they could try to help themselves. We were surprised by how unwilling folks seemed to be to ask for help, despite our attempts to set a laid back, audience-participation heavy environment. In chatting with some of the attendees over lunch and after the event, both newbie and experienced folks expressed a desire to avoid ‘slowing everybody else down’ by asking a question, wanting to try to ‘figure it out myself first.’
    • No OSD keypress guides. We forgot to run an app that showed our keypresses while we demoed stuff, which would have made our instructions easier to follow. One of the survey respondents pointed this one out.
    • We didn’t have name badges. Another survey comment – we weren’t wearing name badges and our names weren’t written anywhere, so some folks forgot our names and didn’t know how to call for us.
    • We weren’t super-organized about assisting folks around the room. We should have set a game plan before starting and assigned each of the other staff a particular corner of the room, so they could help the people in that area one-on-one. This would have helped because, as just mentioned, people were reluctant to ask for help. Pacing behind attendees as they worked, taking note of their screens when they seemed stuck, and offering help worked well.

    Workshop participants working on their projects

    Survey results so far

    Thus far we’ve had 8 respondents out of the 30 attendees, which is actually not an awful response rate. Here’s a quick rundown of the results:

    1. How likely is it that you would recommend the event to a friend or colleague? 2 detractors, 3 passives, 2 promoters; net promoter score 0 (eek)
    2. Overall, how would you rate the event? Excellent (2), Very Good (3), Good (3), Fair (0), Poor (0)
    3. What did you like about the event? This was a freeform text field. Some responses:
      • “I think the individuals running the event did a great job catering to the inexperience of some of the audience members. The guy that ran the button making lab was incredibly knowledgeable and he helped me learn a lot of new tools in a software I’ve never used before that I may not have found on my own.”
      • “The first Inkscape walk through of short cut keys and their use. Presenter was confident, well prepared and easy to follow. Everyone was very helpful later as we tried “Evil Computer” mods with assistance from knowledgeable artists.”
      • “I enjoyed learning about Inkscape. Once I understood all the basic commands it made it very easy to render cool-looking logos.”
      • “It was a good learning experience. It taught me some things about graphics that I did not know.”
    4. What did you dislike about the event? This was a freeform text field. Some responses:
      • “I wish there was more of an agenda that went out. I tried installing Inkscape at my home before going, but I ran into some issues so I went to the office early to get help. Then I found out that the first hour of the workshop was actually designed to help people instal it. It also went much later than originally indicated and although it didn’t bother me, many people left at the time it was supposed to end, therefore not being able to see how to be an open source contributor.”
      • “The button explanation was very fast and confusing. I’m hoping the video helps because I can pause it and looking away for a moment won’t mean I miss something important.”
      • “Hard to follow directions, too fast paced”
      • “The pace was sometimes too slow.”
      • “While the pace felt good, it can be hard to follow what specific keypresses/mouse movements produced an effect on the projector. When it’s time to do it yourself, you may have forgotten or just get confused. A handout outlining the steps for each assignment would have been helpful.”
    5. How organized was the event? Extremely organized (0), Very organized (5), Somewhat organized (3), Not so organized (0), Not at all organized (0)
    6. How friendly was the staff? Extremely friendly (4), Very friendly (4), Somewhat friendly (0), Not so friendly (0), Not at all friendly (0)
    7. How helpful was the staff? Extremely helpful (2), Very helpful (3), Somewhat helpful (3), not so helpful (0), not at all helpful (0).
    8. How much of the information you were hoping to get from this event did you walk away with? All of the information (4), most of the information (2), some of the information (2), a little of the information (0), none of the information (0)
    9. Was the event length too long, too short, or about right? Much too long (0), somewhat too long (3), slightly too long (3), about right (2), slightly too short (0), somewhat too short (0), much too short (0).
    10. Freeform Feedback: Some example things people wrote:
      • “I’m an Adobe person and I’ve never used other design softwares, so I’m happy I learned about a free open source software that will help me become more of an asset when I finish college and begin looking for a career.”
      • “Overall fantastic event. I hope I’m able to find out if another workshop like this is ever held because I’d definitely go.”
      • “If you are willing to make the slides available and focus on tool flow it would help as I am still looking for how BADGE is obtained and distributed.”

    mleonova showing off our badges

    Looking forward!

    Despite some of the hiccups, it is clear attendees got a lot out of the event and enjoyed it. There are a lot of recommendations / suggestions documented in this post for improving the next event, should one of us decide to run another one.

    In general, in our post-event discussion we agreed that future events should have a tighter experience level pre-requisite; for example, absolute beginners tended to like the Inkscape bootcamp material, so maybe have a separate Inkscape bootcamp event for them. The more experienced users enjoyed gnokii’s project-style, fast-paced tutorial and the badges workshop, so having an event that included just that material and had a pre-requisite (perhaps you must be able to install Inkscape on your own and be at least a little comfortable using it) would probably work well.

    Setting a time limit of 3-4 hours and sticking to it, with check-ins, would be ideal. I think an event like this with this many attendees needs 2-3 people minimum running it to work smoothly. If there were 2-3 Fedorans co-located and comfortable with the material, it could be run fairly cheaply; if the facility is free, you could do it for around $200 if you provide food.

    Anyway I hope this event summary is useful, and helps folks run events like this in the future! A big thanks to the Fedora Council for funding the Fedora Design Team FAD and this event!

    July 31, 2016

    New Stellarium User Guide is available

    Dear all,

    while we were working on new features for the 0.15 release, we have also thoroughly reworked the Stellarium User Guide (SUG). This should now include all changes introduced since the 0.12 series and be up-to-date with the 0.15 series. It includes many details about landscape creation, skyculture creation, telescope control, putting your deep-sky photos among the stars, how to start scripting, creation of 3D sceneries for Stellarium, and much more.

    The SUG is now almost 300 pages and available for download as a hyperlinked PDF from stellarium.org. It is also included in the Windows install package, so you don't need a separate download.

    The online user guide on the wiki will no longer be updated, and may even go away if we do not hear a major outcry from you.

    Clear skies for observing, and now you have something to read for the cloudy nights as well ;-)

    Kind regards,
    Georg

    Stellarium 0.15.0

    In memory of our team member Barry Gerdes.

    Version 0.15.0 is based on Qt5.6. Starting with this version, some graphics cards have been blacklisted by Qt and are automatically forced to use ANGLE on Windows.
    We introduce a major internal change with the StelProperty system.
    This allows simpler access to internal variables and therefore more ways of operation.
    Most notably this version introduces an alternative control option via RemoteControl, a new webserver interface plugin.
    We also introduce another milestone towards providing better astronomical accuracy for historical applications: experimental support of getting planetary positions from JPL DE430 and DE431 ephemerides. This feature is however not fully tested yet.
    The major changes:
    - Added StelProperty system
    - Added new plugin for exhibitions and planetariums - Remote Control
    - Added new skycultures: Macedonian, Ojibwe, Dakota/Lakota/Nakota, Kamilaroi/Euahlayi
    - Updated code of plugins
    - Added Bookmarks tool and updated AstroCalc tool
    - Added new functions for Scripting Engine and new scripts
    - Added Miller Cylindrical Projection
    - Added updates and improvements in DSO and star catalogues (including initial support of The Washington Double Star Catalog)
    - Added azimuth lines (also targeting geographic locations) in ArchaeoLines plugin
    - Many fixes and improvements...

    In addition, we prepared a new user guide.

    A huge thanks to our community whose contributions help to make Stellarium better!

    Full list of changes:
    - Added getting planetary positions from JPL DE430 and DE431 ephemerides (SoCiS2015 project)
    - Added RemoteControl and preliminary RemoteSync plugins (SoCiS2015 project)
    - Added StelProperty system (SoCiS2015 project)
    - Added immediate saving of settings for plugins (Angle Measure, Archeo Lines, Compass Marks)
    - Added Belarusian translation for landscapes and sky cultures (LP: #1520303)
    - Added Bengali description for landscapes and sky cultures (LP: #1548627)
    - Added new skycultures: Macedonian, Ojibwe, Dakota/Lakota/Nakota, Kamilaroi/Euahlayi
    - Added support for Off-Axis Guider feature in Oculars plugin (LP: #1354427)
    - Added support for permanent rotation angle for CCD in Oculars plugin
    - Added type of mount for telescopes in Oculars plugin
    - Added improvements for displaying data in decimal format
    - Added possibility of drawing permanent orbits of the planets (disables hiding of orbits for planets when they are out of the field of view). (LP: #1509674)
    - Added tentative support for screens with 4K resolution for Windows packages (LP: #1372781)
    - Enabled support for side-by-side assembly technology for Windows packages (LP: #1400045)
    - Added CLI options --angle-d3d9, --angle-d3d11, --angle-warp for fine-tuning ANGLE flavour selection on Windows.
    - Added improvements in Stellarium's installer on Windows
    - Added improvements in Telescope Control plugin
    - Added feature to build dependency graphs of various characteristics of exoplanets (Exoplanets plugin)
    - Added support of the proper names for exoplanets and their host stars (Exoplanets plugin)
    - Added improvement for Search Tool
    - Added improvement for scripting engine
    - Added Bayer designations for some stars in Scorpius (LP: #1518437)
    - Added updates and improvements in Stellarium DSO Catalog
    - Added initial support of subset of The Washington Double Star Catalog (LP: #1537449)
    - Added Prime Vertical and Colures lines
    - Added new functions for Scripting Engine
    - Added new DSO textures
    - Finished migration from Phonon to QtMultimedia (LP: #1260108)
    - Added scripting function to block tracking or centering for special installations.
    - Added visualization of ephemerides
    - Added config option for animation speed of pointers (gui/pointer_animation_speed = 1.0)
    - Added implementation of semi-transparent mask in the Oculars plugin (LP: #1511393)
    - Added hiding the halo when inner planet between Sun and observer (or moon between planet and observer) (LP: #1533647)
    - Added a tool to fill in custom settings for the position of Jupiter's Great Red Spot
    - Added Bookmarks tool (LP: #1106779)
    - Added new scripts: Best objects in the New General Catalog, The Jack Bennett Catalog, Binosky: Deep Sky Objects for Binoculars, Herschel 400 Tour, Binocular Highlights, 20 Fun Naked-Eye Double Stars, List of largest known stars
    - Added Circumpolar Circles (LP: #1590785)
    - Added Miller Cylindrical Projection
    - Allow viewport offset change in scripts.
    - Allow centering zenith or pole via scripting (LP: #1068529)
    - Allow freezing/unfreezing average atmospheric brightness (e.g. for balanced-brightness image export scripts.)
    - Allow saving of output.txt to another file so that it can be read by other programs on Windows while Stellarium is still open.
    - Allow min/max values and wraparound settings for AngleSpinBox
    - Allow configurable speed and script speed buttons
    - Allow storing and retrieval of screen location for StelDialogs (LP: #1249251)
    - Allow polygonal horizons with many negative values (LP: #1554639)
    - Allow altitude-dependent twinkling for stars (LP: #1594065)
    - Allow display of sun's halo if sun is just outside viewport (LP: #1294498)
    - Reconfigure viewDialog GUI to put constellation switches to skylore tab.
    - Limit location coordinate spinboxes to useful coordinates
    - Apply Fluctuations in the Moon's Mean Longitude in DeltaT calculations (Source: Spencer Jones, H., 'The Rotation of the Earth, and the Secular Accelerations of the Sun, Moon and Planets', MNRAS, 99 (1939), 541-558 [http://adsabs.harvard.edu/abs/1939MNRAS..99..541S])
    - Applied the device pixel ratio to the pixmap so that it displays correctly on Macs.
    - Added improvements for Paste and Search feature (Search Tool)
    - Added ecliptical coordinates info for objects in scripting engine
    - Added exit pupil calculation in the Oculars plugin (LP: #1500225)
    - Added support MSVC2015
    - Added automatic reloading catalogs after updating for some plugins
    - Added a tour of Messier Objects
    - Added fix to circumvent text rendering bug (CLI option: -t)
    - Introduce env variable STEL_OPTS to allow preconfiguring default CLI options.
    - Added option to hide the background under buttons on the bottom toolbar (LP: #1204639)
    - Added check of on-screen position for satellite orbits (LP: #1510530)
    - Added new option to change the behaviour of displaying DSO labels on the screen (LP: #1600283)
    - Star catalogues have been updated from 'XHIP: An Extended Hipparcos Compilation' data.
    - Fixed validation of day in Date and Time dialog (LP: #1206284)
    - Fixed display of sidereal time (mod24), show apparent sidereal time only if nutation is used.
    - Fixed issue of saving some setting from the View window (LP: #1509639)
    - Fixed issue for reset of number of satellite orbit segments (LP: #1510592)
    - Fixed bug in download of stars catalogs in debug mode (LP: #1514542)
    - Fixed issue with smooth blending/fading in ArchaeoLines plugin
    - Fixed loading scenes for Scenery 3D plugin (LP: #1533069)
    - Fixed connection troubles in Telescope Control Plugin on Windows (LP: #1530372)
    - Fixed wrong altitude of culmination in Observability plugin (LP: #1531561)
    - Fixed the meteor radiants movements when time is switched manually (LP: #1535950)
    - Fixed misbehaving zoom out to initial view position (LP: #1537446)
    - Fixed format for declination in AstroCalc
    - Fixed value of ecliptic obliquity and ecliptic coordinates of date (LP: #1520792)
    - Fixed zoom/art brightness handling (LP: #1520783)
    - Fixed perspective mode with offset viewport in scenery3d (LP: #1509728)
    - Fixed drawing reticle for telescope (LP: #1526348)
    - Fixed wrong altitudes for some locations (LP: #1530759)
    - Fixed window location having offscreen frame when leaving fullscreen (LP: #1471954)
    - Fixed core.moveToAltAzi(90,XX) issue (LP: #1068529)
    - Fixed some skyculture links
    - Fixed issue of sidereal time: sidereal time is no longer displayed negative in the Western timezones.
    - Fixed online search tool for MPC website
    - Fixed translation of Egyptian planet names (LP: #1548008)
    - Fixed bug about wrong rise/set times in Observability for years far in the past
    - Fixed issue for resets flip buttons in Oculars plugin (LP: #1511389)
    - Fixed proper detection of GLSL ES version on Raspberry Pi with VC4 driver (and maybe other devices).
    - Fixed odd DateTimeDialog behavior during daylight saving change
    - Fixed key handling issue on Mac OS X in Scenery3D (LP: #1566805)
    - Fixed omission in documentation (LP: #1574583, #1575059)
    - Fixed a loss of focus in the sky when you click on the button (LP: #1578773)
    - Fixed issue of getting location from network.
    - Fixed bug in visualization of opposition/conjunction longitude
    - Fixed crash of Navigational Stars plugin (LP: #1598375)
    - Fixed satellites mutual occultation (LP: #1389765)
    - Fixed NaN in landscape brightness computation (LP: #1597129)
    - Fixed oversized corona (LP: #1599513)
    - Fixed displaying common names of DSO after changes filters of catalogs (LP: #1600283)
    - Ensure Large File Support for DE431 also for ARM boards.
    - Changed behaviour for drawing of the planet orbits (LP: #1509673)
    - Make moon halo visible again even when below -45 degrees (LP: #1586796)
    - Reduce planet brightness in daylight (LP: #1503248)
    - Updated AstroCalc tool
    - Updated icons for View dialog
    - Updated ssystem.ini (LP: #1509693, #1509692)
    - Updated names of stars (LP: #1550642)
    - Updated the search rules in the search dialog (LP: #1593965)
    - Avoid false display of tiny eclipse factor (rounding error).
    - Avoid issues around GLdouble in GLES2/ARM boards.
    - Reduce brightness of stars for ocular and CCD views
    - Hide displaying markers for meteor radiants during daylight
    - Cosmetic updates in Equation Of Time plugin
    - Enabled permanent visualization of position angles for galaxies
    - Updated bookmarks in Solar System Editor plugin
    - Updated default config options
    - Updated scripts
    - Updated shortcuts for scripts
    - Updated Norwegian skyculture descriptions
    - Updated connection behaviour for autodiscovery location through network (FreeGeoIP)
    - Updated and optimized GUI
    - Updated Navigational Stars plugin
    - Implementation of quick turning to different directions (examples: CdC, HNSKY)
    - Important optimizations of planet position computation
    - Refactoring coloring markers of the DSO
    - Refactoring of the generating parts of the infrastructure (LP: #1571391)
    - Refactoring Telescope Control plugin
    - Removed info about Moon phases (avoid inconsistency for strings).
    - Removed rotation of movement by convergence angle correction in Scenery 3D plugin.

    July 28, 2016

    E-Interiores: Next-generation interior design with Blender

    By: Dalai Felinto, Blender Developer

    Meet e-interiores. This Brazilian interior design e-commerce startup transformed its creation process in an entirely new way. This tale will show you how Blender made this possible, and how far we got.

    We developed a new platform based on a semi-vanilla Blender, Fluid Designer, and our own pipelines. Thanks to the results we achieved, e-interiores was able to consolidate a partnership with the giant Tok&Stok, providing a complete design of a room in 72 hours.

    A long time ago in a galaxy far far away

    During its initial years, e-interiores focused on delivering top-notch projects, with state of the art 3d rendering. Back then, this would involve a pantheon of software, namely: AutoCAD, SketchUp, VRay, Photoshop.

    All those mainstream tools were responsible for producing technical drawings, 3D studies, final renderings, and the presentation boards. Although nothing could be said against the final quality of the deliverables, the overall process was “artisanal” at best and extremely time consuming.

    Would it be possible to handle those steps inside a single tool? How much time could be saved by handing the non-essential tasks over to the computer itself?

    New times require new tools

    The benefits of automation in a pipeline are well known and easily measured. But how much thought does a studio give to customization? How much can a studio gain from a custom-tailored tool?

    It was clear that we had to minimize the time spent on preparation, rendering and presentation. This would leave the creators free to dedicate their time and sweat to what really matters: which furniture to use and how to arrange it, which colors and materials to employ, the interior design itself.

    A fresh start

    The development paradigm was as follows:

    • Vanilla Blender: the underlying software should stay as close to its consumer version as possible
    • Add-on: the core of the project would be a Python script controlling the end-to-end user experience
    • Low entry barrier: users should not have to be skilled in any previous 3D software, especially not in Blender

    The development started by cleaning up the Blender interface completely. I wanted the user to be unaware of the software being used underneath. We took a few hints from Fluid Designer (the theme is literally their startup file), but we focused on tying the interface to the specifics of e-interiores' working steps.
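
    This isn't e-interiores' actual code, but a minimal sketch of the kind of add-on structure involved: a custom toolbar tab exposing the studio's own operators so users never need the stock Blender UI. All names here (EInterioresPanel, einteriores.add_wall) are hypothetical.

        # Sketch of a Blender (2.7x) add-on exposing a custom toolbar tab.
        # Names are hypothetical, not e-interiores' real code.
        import bpy

        bl_info = {"name": "e-interiores tools (sketch)", "category": "Object"}

        class EInterioresAddWall(bpy.types.Operator):
            """Add a parametric wall (placeholder implementation)."""
            bl_idname = "einteriores.add_wall"
            bl_label = "Add Wall"

            def execute(self, context):
                # A real implementation would build the wall geometry here.
                bpy.ops.mesh.primitive_cube_add()
                return {'FINISHED'}

        class EInterioresPanel(bpy.types.Panel):
            """Custom tab in the 3D View toolbar."""
            bl_label = "e-interiores"
            bl_space_type = 'VIEW_3D'
            bl_region_type = 'TOOLS'
            bl_category = "e-interiores"

            def draw(self, context):
                self.layout.operator("einteriores.add_wall")

        def register():
            bpy.utils.register_class(EInterioresAddWall)
            bpy.utils.register_class(EInterioresPanel)

        if __name__ == "__main__":
            register()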

    You have the tools to create the fixed elements of the space – walls, floor, …, the render points of view, the dynamic elements of the project, and the library. Besides that, there is a whole different set of tools dedicated to creating the final boards, adding annotations, measurements, …

    A little bit about coding

    Although I wanted to keep Blender as close to its pristine release condition as possible, some changes in Blender were necessary. They mostly revolved around the Font object functionality, which we use extensively in the board preparation.

    The simplest solution in this case was to make the required modifications myself, and contribute them back to Blender. The following contributions are all part of the official Blender code, helping not only our project, but anyone that requires a more robust all-around text editing functionality:

    With this out of the way, we have a total of 18,443 lines of code for the core system, 1,458 for model conversion, and 2,407 for the database. All of this amounts to over 22 thousand lines of Python scripting.

    Infrastructure barebones

    The first tools we drafted are what we call the skeleton. We have parametric walls, doors, windows. We can make floor and ceilings. We can adjust their measurements later. We can play with their style and materials.

    Objects library

    We have over 12,000 3D models made available to us by Tok&Stok. The challenge was to batch convert them into a format Cycles could use. The files were originally in Collada, modelled and textured for realtime usage. We ditched the lightmaps, removed the support meshes, and assigned hand-made Cycles materials based on the object category.

    Part of this was only possible thanks to the support of Blender developer and Collada functionality maintainer Gaia Clary. Many thanks!
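
    To give an idea of what such a batch conversion can look like, here is a rough sketch using Blender's Python API; the paths, the "support_" naming convention, and the one-material-per-category logic are made-up placeholders, not the actual pipeline.

        # Sketch: batch-import Collada files, drop realtime-only helpers,
        # assign a Cycles node material per category, save one .blend each.
        # Paths and naming conventions are hypothetical.
        import os
        import bpy

        SRC = "/data/tokstok/collada"
        DST = "/data/tokstok/blend"

        def convert(path, category):
            bpy.ops.wm.read_homefile()            # start clean (assumes an empty startup file)
            bpy.ops.wm.collada_import(filepath=path)
            material = bpy.data.materials.new(category)
            material.use_nodes = True             # Cycles node material
            for obj in list(bpy.context.scene.objects):
                if obj.type != 'MESH' or obj.name.startswith("support_"):
                    bpy.context.scene.objects.unlink(obj)   # drop realtime-only helpers
                    continue
                if obj.data.materials:
                    obj.data.materials[0] = material
                else:
                    obj.data.materials.append(material)
            out = os.path.join(DST, os.path.splitext(os.path.basename(path))[0] + ".blend")
            bpy.ops.wm.save_as_mainfile(filepath=out)

        for name in sorted(os.listdir(SRC)):
            if name.lower().endswith(".dae"):
                convert(os.path.join(SRC, name), category="generic")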

    More dynamic elements

    Curtains, mirrors, marble, blindex… there are a few components of a project that are custom-made and adjusted on a case-by-case basis.

    Boards

    This is where the system shines. The moment an object is on the scene we can automatically generate the lighting layout, the descriptive memorial, and the product list.

    The boards are the final deliverable to the clients. This is where the perspectives, the project lists, the blueprints all come together. The following animation illustrates the few steps involved in creating a board with all the used products, with their info gathered from our database.

    Miscellaneous results

    Finally, you can see a sample of the generated results from the initial projects done with this platform. Thanks to Blender’s scripting possibilities and customization, we put together an end-to-end experience for our designers and architects.

    July 27, 2016

    A Chiaroscuro Portrait


    A Chiaroscuro Portrait

    Following the Old Masters

    Introduction (Concept/Theory)

    The term Chiaroscuro is derived from the Italian chiaro meaning ‘clear, bright’ and oscuro meaning ‘dark, obscure’. In art the term has come to refer to the use of bold contrasts between light and shadow, particularly across an entire composition, where they are a prominent feature of the work.

    This interplay of shadow and light is particularly important in allowing the viewer to extrapolate volume from a flat image. The use of a single light source helps to accentuate the perception of volume as well as adding drama and dynamics to the scene.

    Historically the use of chiaroscuro can often be associated with the works of old masters such as Rembrandt and Caravaggio. The use of such extreme lighting immediately evokes a sense of shape and volume, while focusing the attention of the viewer.

    Self Portrait with Gorget by Rembrandt
    Girl with a Pearl Earring by Johannes Vermeer

    The aim of this tutorial will be to emulate the lighting characteristics of chiaroscuro in producing a portrait to evoke the feeling of an old master painting.

    Equipment

    In examining chiaroscuro portraiture, it becomes apparent that a strong characteristic of these images is the use of a single light source in the scene. So this tutorial will focus on using a single source to illuminate the portrait.

    Getting the keylight off the camera is essential. The closer the keylight is to the axis of the camera the larger the reduction in shadows. This is counter to the intention of this workflow. Shadows are an essential component in producing this look, and on-camera lighting simply will not work.

    The reason to choose a softbox versus the myriad of other light modifiers available is simple: control. Umbrellas can soften the light, but due to their open nature have a tendency to spill light everywhere while doing so. A softbox allows the light to be softened while also retaining a higher level of spill control.

    Light spill can still occur with a softbox, so the best option is to bring the light in as close as possible to the subject. Due to the inverse square nature of light attenuation, this will help to drop the background very dark (or black) when exposing properly for the subject.

    Inverse Square Light Fall Off

    Left
    For example, in the sample images above, a 20 inch softbox was initially located about 18 inches away from the subject (first). The rear wall was approximately 48 inches away from the subject or just over twice the distance from the softbox. Thus, on a proper exposure for the subject, the background would be around 3 stops lower in light. This is seen as the background in the first image has dropped to a dark gray.

    Middle
    When the light distance to the subject is doubled and the light distance to the rear wall stays the same, the ratio is not as extreme between them. The light distance from the subject is now 36 inches, while the light distance to the rear wall is still 48 inches. When properly exposing for the subject, the rear wall is now only about 1 stop lower in light.

    Right
    In the final example, the distance from the light to both the subject and the rear wall are very close. As such, a proper exposure for the subject almost brings the wall to a middle exposure.

    What this example provides is a good visual guide for how to position the subject and light relative to the surroundings to create the desired look. To accentuate the ratio between dark and light in the image it would be best to move the light as close to the subject as possible.
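
    To put numbers on that, the stop difference follows directly from the inverse square law; a quick Python check of the distances quoted above reproduces the roughly 3-stop and 1-stop figures:

        # Light falloff between subject and background, in stops:
        # stops = log2((d_background / d_subject) ** 2)
        from math import log2

        def stops_darker(d_subject, d_background):
            return log2((d_background / d_subject) ** 2)

        print(stops_darker(18, 48))   # ~2.8 stops: background drops to dark gray
        print(stops_darker(36, 48))   # ~0.8 stops: background only slightly darker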

    If there is nothing to reflect light on the shadow side of the subject, then the shadows would fall to very dark or black. Usually, there are at least walls and ceilings in a space that will reflect some light, and the amount falling on the shadow side can be attenuated by either moving the subject nearer to a wall on that side, or using a bounce/reflector as desired.

    Shooting

    Planning

    The setup for the shot would be to push the key light in very close to the model, while still allowing some bounce to slightly fill the shadows.

    Mairi Light Setup

    As noted previously, having the key light close to the model allows the rest of the scene to become much darker. The softbox is arranged such that its face is almost completely vertical and its bottom edge is just above the model's eyes. This was to feather the lower edge of the light falloff along the front of the model.

    There are 2 main adjustments that can be made to fine-tune the image result with this setup.

    The first is the key light distance/orientation to the subject. This will dictate the proper exposure for the subject. For this image the intention is to push the key light in as close as possible without being in frame. There is also the option of angling the key light relative to the subject. In the diagram above, the softbox is actually angled away from the subject. The intention here was to feather the edge of the light in order to control spill onto the rest of the model (putting more emphasis on her face).

    The second adjustment, once the key light is in a good location, is the distance from the key light and subject together, to the surrounding walls (or a reflector if one is being used). Moving both subject and keylight closer to the side wall will increase the amount of reflected light being bounced into the shadows.

    Mood Board

    If possible, it can be extremely helpful to both the model and photographer to have a Mood Board available. This is usually just a collection or collage of images that help to convey the desired feeling or desired result from the session. For help in directing the model, the images do not necessarily need the same lighting setup. The intention is to help the model understand what your vision is for the pose and facial expressions.

    The Shoot

    The lighting is set up and the model understands what type of look is desired, so all that’s left is to shoot the image!

    Mairi Contact Sheet

    In the end, I favored the last image in the sequence for a combination of the model's head position/body language and the slight smile she has.

    Postprocessing

    Having chosen the final image from the contact sheet, it’s now time to proceed with developing the image and retouching as needed.

    If you’d like to follow along you can download the raw .ORF file:

    Mairi_Troisieme.ORF (13MB)

    This file is licensed (Creative Commons, By-Attribution, Non-Commercial, Share-Alike), and is the same image that I shared with everyone on the forums for a PlayRaw processing practice. You can see how other folks approached processing this image in the topic on discuss. If you decide to try this out for yourself, come share your results with us!

    Raw Development

    There are various Free raw processing tools available and for this tutorial I will be using the wonderful darktable.

    darktable logo

    Base Curve

    Not surprisingly, the initial image, loaded without any modifications, is a bit dark and rather flat looking. By default darktable should recognize that the file is from Olympus and attempt to apply a sane base curve to the linear raw data. If it doesn’t, you can choose the preset “olympus like alternate”.

    I found that the preset tended to crush the darkest tones a bit too much, and instead opted for a simple curve with a single point as seen here:

    darktable base curve

    Resist the temptation to try and adjust overall exposure and contrast with the base curve. These parameters will be adjusted shortly in the appropriate modules. The base curve is only intended to transform the linear raw rgb to something that looks good on your output device. The base curve will affect how the contrasts, colors, and saturation all relate in the final output. For the purposes of this tutorial, it is enough to simply choose a preset.

    The next series of steps focuses on adjusting various exposure parameters for the image. Conceptually they start with the broadest adjustment, exposure, move on to slightly more targeted adjustments such as contrast, brightness, and saturation, and finish with targeted tonal adjustments in the tone curve.

    darktable manual: base curve

    Exposure

    Once the base curve is set, the next module to adjust would be the overall exposure of the image (and the black point). This is done in the “exposure” module (below the base curve).

    darktable exposure

    The important area to watch while adjusting the exposure is the histogram. The image was exposed a little dark, so increase the overall exposure. In the histogram, avoid clipping any channels by pushing them outside the range. In this case, the aim is to give the model's face a nice mid-level brightness. The exposure can be raised until the channels begin to clip on the far right of the histogram, then brought back down a bit to leave some headroom.

    The darkest areas of the histogram on the left are clipped a bit, so raising the black level brings detail back into the darkest shadows. When in doubt, let the histogram and the image data guide you, particularly around the highest and lowest values (avoid clipping if possible).

    An easy way to think of the exposure module is that it allows the entire image exposure to be shifted along with compressing/expanding the overall range by modifying the black point.
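
    In code terms, the mental model is roughly the following (darktable's actual pixel pipeline is more involved, so treat this only as a sketch of the two sliders):

        # Conceptual sketch of the exposure module: shift the black point,
        # then scale everything by 2^EV. Values are linear RGB in 0..1.
        import numpy as np

        def exposure(linear_rgb, black=0.0, ev=0.0):
            return np.clip((linear_rgb - black) * (2.0 ** ev), 0.0, None)

        # e.g. brighten by two-thirds of a stop and lift crushed shadows a touch:
        # img = exposure(img, black=-0.002, ev=0.66)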

    darktable manual: exposure

    Contrast Brightness Saturation

    Where the Exposure module shifts the overall image values from a global perspective, modules such as the “contrast brightness saturation” allow finer tuning of the image within the range of the exposure.

    To emphasize the model's face, while also strengthening the interplay of shadow and light in the image, drop the brightness down to taste. I brought the brightness levels down quite a bit (-0.31) to push almost all of the image below medium brightness.

    darktable contrast brightness saturation

    Overall this helps to emphasize the model's face over the rest of the image. While the rest of the image is composed of various dark/neutral tones, the model's face is not. Pushing the saturation down as well removes much of the color from the scene and the face. This brings the skin tones back down to something slightly more natural looking, while also muting some of those tones.

    darktable contrast brightness saturation

    The skin now looks a bit more natural but muted. The background tones have become more neutral as well. A very slight bump in contrast to taste finishes out this module.

    darktable manual: contrast brightness saturation

    Tone Curve

    A final modification to the exposure of the image is through a tone curve adjustment. This gives us the ability to make some slight changes to particular tonal ranges. In this case pushing the darker tones down a bit more while boosting the upper mid and high tones.

    darktable tone curve

    This is actually a type of contrast increase, but restricted to specific tones based on the curve. The darkest darks (the bottom of the curve) get pushed a little darker, which includes most of the sweater, the background, and the shadow side of the model's face. The very slight rolling boost to the lighter tones primarily helps the face brighten up against the background even more.

    The changes are very slight and to taste. The tone curve is very sensitive to changes, and often only very small modifications are required to achieve a given result.

    darktable manual: tone curve

    Sharpen

    By default the sharpen module applies a small amount of sharpening to the image. The module uses an unsharp mask for sharpening, so the radius parameter is the blur radius fed into the unsharp mask. I wanted to lightly sharpen very fine details, so I set the radius to ~1, with an amount around 0.9 and no threshold. This produced results that are very hard to distinguish from the default settings, but it appears to sharpen smaller structures just slightly more.

    darktable exposure

    I personally include a final sharpening step as a side effect of using wavelet decompose for skin retouching later in the process with GIMP. As such I am not usually as concerned about sharpening here as much. If I were, there are better modules for adjusting sharpening from wavelets using the equalizer module.

    darktable manual: sharpen

    Denoise (profiled)

    The darktable team and its users have profiled many different cameras at various ISOs to build a statistical model of noise versus brightness across the three color channels. Using these profiles, darktable can do a better job of efficiently denoising images. In the case of my camera (Olympus OM-D E-M5), a profile had already been captured for ISO 200.

    darktable denoise profiled

    In this case, the chroma noise wasn’t too bad, and a very slight reduction in luma noise would be sufficient for the image. As such, I used a non-local means with a large patch size (to retain sharpness) and a low strength. This was all applied uniformly against the HSV lightness option.
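
    Outside of darktable, a comparable luma-only non-local means pass could be sketched with scikit-image; the file name and the parameter values below are illustrative only, not the profile darktable uses:

        # Sketch: light non-local means denoising on the lightness channel only.
        import numpy as np
        from skimage import io, color
        from skimage.restoration import denoise_nl_means

        rgb = io.imread("mairi.tif").astype(np.float64) / 65535.0   # hypothetical 16-bit export
        hsv = color.rgb2hsv(rgb)
        hsv[..., 2] = denoise_nl_means(hsv[..., 2],
                                       patch_size=11,    # large patch to retain sharpness
                                       patch_distance=9,
                                       h=0.02,           # low strength
                                       fast_mode=True)
        io.imsave("mairi_denoised.tif", (color.hsv2rgb(hsv) * 65535).astype(np.uint16))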

    darktable manual: denoise - profiled

    Export

    Finally! The image tones and exposure are in a desirable state, so export the results to a new file. I tend to use either TIF or PNG at 16 bits, in case I want to work in a full 16-bit workflow with the latest GIMP now or in the future.

    GIMP

    When there are still some pixel-level modifications that need to be done to the image, the go-to software is GIMP.

    • Skin retouching
    • spot healing/touchups
    • Background rebuild
    GIMP - GNU Image Manipulation Program <3

    Skin Retouching with Wavelet Decompose

    This step is not always needed, but who doesn’t want their skin to look a little nicer if possible?

    The ability to modify an image based on detail scales isolated on their own layers is a very powerful tool. The approach is similar to frequency separation, but has the advantage of providing multiple frequency bands, at progressively larger detail scales, that can be modified independently. This offers a large range of flexibility and an easier workflow than frequency separation (you can work on any detail scale simply by switching to a different layer).

    I used to use the wonderful Wavelet Decompose plugin from marcor on the GIMP plugin registry. I have since switched to using the same result from G’MIC once David Tschumperlé added it in for me. It can be found in G’MIC under:

    Details → Split details [wavelets]

    Running Split details [wavelets] against the image to produce 5 wavelet scales and a residual layer yields (cropped):

    Wavelet scales example decompose

    The plugin (or script) will produce 5 layers of isolated details plus a residual layer of low-frequency color information, shown here in ascending order of detail scale. The details in the finest scales (1 & 2) are hard to make out because they are so fine.

    To help visualize what the different scale levels look like, here is a view of the same levels above, normalized:

    Wavelet scales normalized

    The normalized view shows clearly the various types of detail scales on each layer.

    There are various types of changes that can be made to the final image from these detail scales. In this image, we are going to focus on evening out the skin tones overall. The scales with the biggest impact on evening the skin tones for this image are 4 and 5.

    A good workflow when smoothing overall skin tones with wavelet scales is to start from the largest detail scales and work down to the finer ones. Usually, a pleasing amount of tonal smoothing can be accomplished in the first couple of coarse detail scales.
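
    If you are curious what the decomposition itself is doing, the principle can be sketched with plain Gaussian blurs: each detail scale is the difference between two progressively blurred copies, and the residual is the final blur. This is not G'MIC's exact algorithm, just the idea, shown here for a single-channel image:

        # Sketch: detail scales as differences of progressively blurred copies.
        # residual + sum(details) reconstructs the original exactly.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def split_details(img, n_scales=5):
            details, previous = [], img
            for i in range(n_scales):
                blurred = gaussian_filter(previous, sigma=2 ** i)   # doubling blur radius
                details.append(previous - blurred)                  # finest details first
                previous = blurred
            return details, previous                                # previous is the residual

        # details, residual = split_details(gray)
        # assert np.allclose(residual + sum(details), gray)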

    Skin Retouching Zones

    Different portions of a face will often require different levels of smoothing. Below is a rough map of facial contours to consider when retouching. Not all faces will require the exact same regions, but it is a good starting point to consider when approaching a new image.

    Skin retouching by zones

    The selections are made with the Free Select Tool with the “Feather edges” option on and set to roughly 30px.

    Smoothing

    A good starting point to consider is the forehead on the largest detail scale (5). The basic workflow is to select a region of interest and a layer of detail, then to suppress the features on that detail level. The method of suppressing features is a matter of personal taste but is usually done across the entire selection using a blur filter of some sort.

    A good first choice would be to use a gaussian blur (or Selective Gaussian Blur) to smooth the selection. A better choice, if G’MIC is installed, is to use a bilateral blur for its edge-preserving properties. The rest of these examples will use the bilateral blur for smoothing.

    Considering the forehead region:

    Skin retouching wavelet scales forehead

    The first image is the original. The second image is after running a bilateral blur (in G’MIC: Smooth [bilateral]), with the default parameter values:

    • Spatial variance: 10
    • Value variance: 7
    • Iterations: 2

    These values were chosen from experience using this filter for the same purpose across many, many images. The results of running a single blur on the largest wavelet scale are immediately obvious. The unevenness of the skin and the overall tones are smoothed in a pleasing way, while still retaining the finer details that allow the eye to see a realistic skin texture.

    The last image is the result of working on the next detail scale layer down (Wavelet scale 4), with much softer blur parameters:

    • Spatial variance: 5
    • Value variance: 2
    • Iterations: 1

    This pass does a good job of finishing off the skin tones globally. The overall impression of the skin is much smoother than the original, but the crucial fine details (wrinkles, pores) are all left intact to keep it looking realistic.

    This same process is repeated for each of the facial regions described. In some cases the result of running the first bilateral blur on the largest scale level is enough to even out the tones (the cheeks and upper lip, for example). The chin got the same treatment as the forehead. The process is entirely subjective, and the parameters will vary from person to person. Experimentation is encouraged here.
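
    For those who prefer to see the step written out, the "bilateral blur inside a feathered selection" operation could be approximated like this with OpenCV; its bilateral filter is not identical to G'MIC's, so the sigma values only loosely mirror the ones above, and a 0–255 value range is assumed to match GIMP's layers:

        # Sketch: edge-preserving smoothing of one wavelet-scale layer,
        # blended back only inside a feathered selection mask (0..1).
        import cv2
        import numpy as np

        def smooth_region(scale_layer, mask, sigma_space=10, sigma_value=7, iterations=2):
            layer = scale_layer.astype(np.float32)        # assumed 0..255, GIMP-style
            smoothed = layer
            for _ in range(iterations):
                smoothed = cv2.bilateralFilter(smoothed, d=-1,
                                               sigmaColor=sigma_value,
                                               sigmaSpace=sigma_space)
            m = mask.astype(np.float32)
            if layer.ndim == 3:
                m = m[..., None]
            return smoothed * m + layer * (1.0 - m)       # untouched outside the selection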

    More importantly, the key word to consider while working on skin tones is moderation. It is also important to check your results zoomed out, as this will give you an impression of the image as seen when scaled to something more web-sized. A good rule of thumb might be:

    “If it looks good to you, go back and reduce the effect more”.

    The original vs. results after wavelet smoothing:

    Wavelet smoothed (compare with the original).

    When the work is finished on the wavelet scales, a new layer from all of the visible layers can be created to continue touching up spot areas that may need it.

    Layer → New from Visible

    Spot Touchups

    The use of wavelets is good for smoothing larger selected areas, but a different set of tools is required for spot touchups. For example, there is a stray hair running across the model's forehead that can be removed using the Heal tool.

    For best results when using the Heal tool, use a hard-edged brush. Soft edges can sometimes lead to undesirable smearing within the feathered edge of the brush. Due to the way the heal algorithm samples, it is also advisable to avoid healing across hard/contrasty edges.

    The Heal tool is also useful for small blemishes that would have been tedious to repair across all of the wavelet scales from the previous section. This is a good time to fix hot-spots, fly-away hairs, and other small details.

    Sweater Enhancement

    The model is wearing a nicely textured sweater, but the details and texture are slightly muted. A small increase in contrast and local detail will help enhance the textures and tones. One method of enhancing local details is to use Unsharp Mask with a high radius and low amount (HiRaLoAm is an acronym some use for this).

    Create a duplicate of the “Spot Healing” layer that was worked on in the previous step, and apply an Unsharp Mask to the layer using HiRaLoAm values.

    For example, a good starting point for parameters might be:

    • Radius: 200
    • Amount: 0.25

    With these parameters the sharpen function will instead tend to increase local contrast more, providing more “presence” or “pop” to the sweater texture.
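
    The unsharp mask operation itself is simple enough to write down, which makes it easy to see why a high radius and low amount acts as a local-contrast boost rather than a sharpener (note that the radius here is a Gaussian sigma, so it does not map one-to-one onto GIMP's slider):

        # Sketch: unsharp mask = original + amount * (original - blurred).
        # A very large radius with a small amount boosts local contrast.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def unsharp_mask(img, radius=200.0, amount=0.25):
            sigma = (radius, radius, 0) if img.ndim == 3 else radius   # don't blur across channels
            blurred = gaussian_filter(img, sigma=sigma)
            return np.clip(img + amount * (img - blurred), 0.0, 1.0)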

    Background Rebuild

    The background of the image is a little too uniformly dark and could benefit from some lightening and variation. A nice lighter background gradient will enhance the subject a little.

    Normally this could be obtained through the use of a second strobe (probably gridded or with a snoot) firing at the background. In our case we will have to fake the same result through some masking.

    First, a crop is chosen to focus the composition a little more strongly on the subject. I placed the center of the model's face along the right-side golden-section vertical and tried to keep things near the center of the frame:

    Mairi cropped

    The slightly centered crop is meant to emulate the type of framing that might be expected from a classical painting (thereby strengthening the overall theme of the portrait further).

    Subject Isolation

    There are a few different methods to approach the background modification. The method I describe here is simply one of them.

    The image is duplicated at this point, and the duplicate has its levels raised to brighten it considerably. In this way, a simple layer mask can control how much brightening occurs and where it occurs in the image.

    Mairi isolation
    Mairi isolation layers

    This is what will give our background a gradient of light. Getting our subject back to dark requires masking the subject out again with a layer mask. A quick way to get a working mask is to add a layer mask to the “Over” layer that keeps the subject opaque while letting the brightened background show through from below.

    Add a layer mask to the “Over” layer as a “Grayscale copy of layer”, and check the “Invert mask” option:

    Mairi isolation add layer mask

    With an initial mask in place, a quick use of the tool:

    Colors → Threshold

    will allow you to modify the mask to define the model's shoulder as a good transition. The mask will be quite narrow. Adjust the threshold until the lighter background is speckle-free and the edge of the sweater is well defined against the background.

    Mairi threshold

    Once the initial mask is in place, it can be cleaned up further by making the subject entirely opaque (white on the mask) and the background fully transparent (black on the mask). This is easily done with the paint tools. With not much work, a decent mask and result can be had:

    Mairi isolation final

    This provides a nice contrast: the background is lighter behind the darker portions of the model, and the opposite is true behind the lighter areas of the subject's face.
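
    Numerically, the layer stack in this section boils down to a simple composite: the original stays where the mask marks the subject, and the levels-raised copy shows everywhere else. A sketch, with the gain value as a placeholder and the mask being the cleaned-up one from the steps above:

        # Sketch: composite of the original over a brightened background.
        # base: float RGB in 0..1; mask: 1.0 over the subject, 0.0 over background.
        import numpy as np

        def rebuild_background(base, mask, gain=2.2):
            brightened = np.clip(base * gain, 0.0, 1.0)   # the levels-raised copy
            m = mask[..., None]
            return base * m + brightened * (1.0 - m)      # subject keeps its original exposure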

    Lighten Face Highlights

    Speaking of the subject's face, there’s a nice simple method for applying a small accent to the highlighted portions of the model's face in order to draw more attention to her.

    Duplicate the lightened layer that was used to create the background gradient, move it to the top of the layer stack, and remove the layer mask from it.

    Mairi Lighten Face Layers

    Set the layer mode of the copied layer to “Lighten only”.

    As before, add a new layer mask to it, “Grayscale copy of layer”, but don’t check the “Invert mask” option. This time use the Levels tool:

    Colors → Levels

    to raise the blacks of the mask up to about mid-way or more. This will isolate the lightening mask to the brightest tones in the image, which happen to correspond to the model's face. You should see your adjustments modify the mask on-canvas in real time. When you are happy with the highlights, apply.
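
    The math behind this accent is again a simple masked blend: a “Lighten only” composite of the brightened copy, weighted by a mask made from the image's own luminance with its blacks raised. A sketch, with the gain and black point as placeholder values:

        # Sketch: accent the highlights with a luminance-limited "Lighten only" blend.
        import numpy as np

        def accent_highlights(base, gain=2.2, black_point=0.5):
            luma = base @ np.array([0.2126, 0.7152, 0.0722])
            # Levels on the mask: everything below black_point drops to 0,
            # so only the brightest tones (the lit side of the face) pass through.
            mask = np.clip((luma - black_point) / (1.0 - black_point), 0.0, 1.0)[..., None]
            brightened = np.clip(base * gain, 0.0, 1.0)
            lighten_only = np.maximum(base, brightened)
            return base + mask * (lighten_only - base)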

    Mairi Lighten Highlights

    Last Sharpening Pass + Grain

    Finally, I like to apply a last pass of sharpening to the image and to overlay some grain from a grain field I keep on hand, which adds structure to the image and masks any gradient issues from rebuilding the background. For this particular image the grain step isn’t really needed, as there is already sufficient luma noise to provide its own structure.

    Usually, I will reuse the smallest of the wavelet scales from the prior steps, and sometimes the next largest scale as well (Wavelet scales 1 & 2). I’ll leave Wavelet scale 1 at 100% opacity, and scale 2 usually around 50% opacity (to taste, of course).
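
    Assuming the detail layers are stored offset around mid-gray (as GIMP's wavelet layers are) and composited in “Grain merge” mode, the recombination works out to the following; the opacities simply scale each layer's contribution:

        # Sketch: add the two finest wavelet scales back as a final sharpening pass.
        # Grain merge (0..1 values): result = base + layer - 0.5; opacity scales it.
        import numpy as np

        def final_sharpen(base, scale1, scale2, opacity2=0.5):
            out = base + (scale1 - 0.5)              # wavelet scale 1 at 100% opacity
            out = out + opacity2 * (scale2 - 0.5)    # wavelet scale 2 at ~50% opacity
            return np.clip(out, 0.0, 1.0)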

    Mairi Final

    Minor touchups that could still be done might include darkening the chair in the bottom right corner, darkening the gradient in the bottom left corner, and possibly adding a slight white overlay to the eyes to subtly give them a small pop.

    As it stands now, I think the image is a decent representation of a chiaroscuro portrait that mimics the style of a classical composition and the interplay between light and shadow across the subject.

    July 25, 2016

    I hate deals

    One of my favourite tech-writers, Paul Miller from The Verge, has articulated something I've always felt, but have never been able to express well: I hate deals.

    From Why I'm a Prime Day Grinch: I hate deals by Paul Miller:

    Deals aren't about you. They're about improving profits for the store, and the businesses who distribute products through that store. Amazon's Prime Day isn't about giving back to the community. It's about unloading stale inventory and making a killing.

    But what about when you decide you really do want / need something, and it just happens to be on sale? Well, lucky you. I guess I've grown too bitter and skeptical. I just assume automatically that if something's on sale AND I want to buy it, I must've messed up in my decision making process somewhere along the way.

    I also hate parties and fun.

    July 24, 2016

    Preparation to release of version 0.15.0

    Greetings all!

    We plan to release Stellarium 0.15.0 at the end of next week (31 July).

    This is another major release, with many changes in the code and a few new skycultures. If you can assist with translation into any of the 136 languages which Stellarium supports, please go to Launchpad Translations and help us out: https://translations.launchpad.net/stellarium

    Thank you!