December 18, 2014

Firefox deprecates flash -- temporarily

Recently Firefox started refusing to run flash, including youtube videos (about the only flash I run). A bar would appear at the top of the page saying "This plug-in is vulnerable and should be upgraded". Apparently Adobe had another security bug. There's an "Update now" button in the Firefox bar, but it's a chimera: Firefox has never known how to install plug-ins for Linux (there are longstanding bugs filed on why it claims to be able to but can't), and it certainly doesn't know how to update a Debian package.

I use a Firefox downloaded from Mozilla.org, but flash from Debian's flashplugin-nonfree package. So I figured updating Debian -- apt-get update; apt-get dist-upgrade -- would fix it. Nope. I still got the same message.

A little googling found several pages recommending update-flashplugin-nonfree --install; I tried that, but it didn't help either. It seemed to download a tarball, but as far as I could tell it never unpacked or installed it.

What finally did the trick was

apt-get install --reinstall flashplugin-nonfree

That downloaded a new tarball, AND unpacked and installed it. After restarting Firefox, I was able to view the video I'd been trying to watch.
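
If you hit something similar, I believe the flashplugin-nonfree package can also report what it thinks is installed versus what's available upstream; from memory (check the man page), that's:

update-flashplugin-nonfree --status

That makes it easy to confirm whether the reinstall actually picked up the new Flash release.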

Are you awesome? Would you like to work with me …

Are you awesome? Would you like to work with me? Every day? Silverorange, the web development company at which I enjoy spending most of my days, is considering hiring a designer / front-end developer.

Wikipedia #Edit2014 Video

About two months ago I was approached by Victor Grigas, a video producer for the Wikimedia Foundation (the non-profit that supports Wikipedia), about using some of the techniques I had previously discussed to create 2.5D parallax videos from single photographs. The intention was to use these 2.5D videos as part of their first ever "Year in Review" video:



For reference, this was my previous result using F/OSS to create the 2.5D parallax effect with still images:



For the Wikipedia video, Victor asked if I could use some images from Wiki Loves Monuments (apparently the world's largest photo competition, according to Guinness World Records). How could I say no? (Disclaimer: I donate every year during their funding drives.)

So I agreed, and after a short wait for the finalists from the competition to be chosen, I was sent these two awesome images to turn into 2.5D parallax videos:



After a bit of slicing and dicing, I produced these short segments that made it into the final video. As before, I did the main plane separations manually in GIMP. I divided the planes to best accommodate the anticipated camera movement through the scene (simple dolly pans). Once I had the planes separated, it was a simple process to bring them into Blender and offset the planes as the camera tracked across the scene:





This was a fun project to work on, and I want to thank the Wikimedia Foundation for giving me a chance to play with some gorgeous images and hopefully to help out in my own small way with the final outcome!

Also, Victor did a nice interview with the Wikimedia blog about producing the overall video. Great work, everyone!

Leaving KO

Inge, Tobias and I founded KO GmbH in 2007 in Magdeburg. We named it KOfficeSource, because we believed that KOffice, which is Calligra these days, was getting ready for the big time, especially on mobile. Nokia was beginning to invest heavily in open source, Intel was joining in with Moblin; the times were heady and exciting! After a bit of rough-and-tumble about the name, we renamed KOfficeSource GmbH to KO GmbH, and from 2010 on, we were in business!

For a couple of years we had a great time. We ported Calligra to Maemo, Meego, Sailfish and Windows. We created half a dozen mobile versions of the core Calligra applications: viewers, editors. Along the way, we found some other customers besides Nokia and Intel: NLNet helped with the port to Windows, SKF used Calligra in their Orpheus ball-bearing modeling tool as the report-writing component, and ROSA was getting interested in the WebODF technology we had developed together with NLNet.

Our customers were happy: we really delivered amazing technology, applications with a great user experience, we were good at working together with other teams and, well, basically, we always delivered. Whether it was C++, Python or JavaScript, Qt, QML or HTML5.

Then things began to go awry. Even after dropping Meego, Nokia was still a customer of ours for some time, but we were doing prototype stuff in J2ME for Asha phones. Not really exciting! ROSA went broke. We lost SKF as a customer when they had to reorganize to turn their development process around. Other customers had to cut down -- and we were also basically a bunch of tech nerds with no idea about doing sales: until then, we had never had to do sales.

Which meant that we failed to build enough of a business to sustain ourselves. We tried to expand, with Krita being an obvious choice for a mature product. But that still needed sales, and we failed at that, too.

So, from January on, I'll no longer be with KO GmbH. The Krita Foundation has taken over Krita on Steam and the support for the Krita Studio customers. We'll first release Krita 2.9, which is going to be awesome! And then I'll be looking for work again, as a project lead or developer, freelance or with a company, on Krita or something else altogether.

December 17, 2014

Actually shipping AppStream metadata in the repodata

For the last couple of releases Fedora has been shipping the AppStream metadata in a package. First it was the gnome-software package, but this wasn’t an awesome dep for KDE applications like Apper and was a pain to keep updated. We then moved the data to an appstream-data package, but this was just as much of a hack, only slightly more palatable for KDE. What I’ve wanted for a long time is to actually ship the metadata as metadata, i.e. next to the other files like primary.xml.gz on the mirrors.

I’ve just pushed the final patches to libhif, PackageKit and appstream-glib, which means that if you ship metadata of type appstream and appstream-icons in repomd.xml then they get downloaded automatically and decompressed into the right place so that gnome-software and apper can use the data magically.

I had not worked on this much before, as appstream-builder (which actually produces the two AppStream files) wasn’t suitable for the Fedora builders for two reasons:

  • Even just processing the changed packages, it took a lot of CPU, memory, and thus time.
  • Downloading screenshots from random websites all over the internet wasn’t something that a build server could do.

So, createrepo_c and modifyrepo_c to the rescue. This is what I’m currently doing for the Utopia repo.

createrepo_c --no-database x86_64/
createrepo_c --no-database SRPMS/
modifyrepo_c					\
	--no-compress				\
	/tmp/asb-md/appstream.xml.gz		\
	x86_64/repodata/
modifyrepo_c					\
	--no-compress				\
	/tmp/asb-md/appstream-icons.tar.gz	\
	x86_64/repodata/
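
As a quick sanity check (my own habit, nothing official), you can then grep the repo metadata to confirm the two new data sections were registered; modifyrepo_c derives the type name from the file name, so both entries should show up:

grep 'type="appstream' x86_64/repodata/repomd.xml

If <data type="appstream"> and <data type="appstream-icons"> both appear, clients that understand the new types will fetch them on the next refresh.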

If you actually do want to create the metadata on the build server, this is what I use for Utopia:

appstream-builder			\
	--api-version=0.8		\
	--origin=utopia			\
	--cache-dir=/tmp/asb-cache	\
	--enable-hidpi			\
	--max-threads=4			\
	--min-icon-size=48		\
	--output-dir=/tmp/asb-md	\
	--packages-dir=x86_64/		\
	--temp-dir=/tmp/asb-icons	\
	--screenshot-uri=http://people.freedesktop.org/~hughsient/fedora/21/screenshots/

For Fedora, I’m going to suggest getting the data files from alt.fedoraproject.org during compose. It’s not ideal as it still needs a separate server to build them on (currently sitting in the corner of my office) but gets us a step closer to what we want. Comments, as always, welcome.

December 12, 2014

OpenRaster and OpenDocument

OpenRaster is a file format for layered images. The OpenRaster specification is small and relatively easy to understand: essentially, each layer is represented by a PNG image, other information is written in XML, and it is all contained in a Zip archive. OpenRaster is inspired by OpenDocument.
OpenDocument is a group of different file formats, including word processing, spreadsheets, and vector drawings. The specification is huge and continues to grow. It cleverly reuses many existing standards, avoiding repeating old mistakes and building on existing knowledge.

OpenRaster can and should reuse more from OpenDocument.



It is easy to say, but putting it into practice is harder. OpenDocument is a huge standard, so where to begin? I am not even talking about OpenDocument Graphics (.odg) specifically, but more generally than that. It is best to show it with an example. So I created an example OpenRaster image with some fractal designs. You can unzip this file and see that, like a standard OpenRaster file, it contains:


fractal.ora  
 ├ mimetype
 ├ stack.xml
 ├ data/
 │  ├ layer0.png
 │  ├ layer1.png
 │  ├ layer2.png
 │  ├ layer3.png
 │  ├ layer4.png
 │  └ layer5.png
 ├ Thumbnails/
 │  └ thumbnail.png
 └ mergedimage.png

It also, unusually, contains two other files: manifest.xml and content.xml. Despite the fact that OpenDocument is a huge standard, the minimum requirements for a valid OpenDocument file come down to just a few files. The manifest is a list of all the files contained in the archive, and content.xml is the main body of the file, doing some of the things that stack.xml does in OpenRaster (for the purposes of this example; it does many other things too). The result of these two extra files, a few kilobytes of extra XML, is that the image is both OpenRaster AND OpenDocument "compatible" too. Admittedly it is an extremely small subset of OpenDocument, but it allows a small intersection between the two formats. You can test it for yourself: rename the file from .ora to .odg and LibreOffice can open the image.
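
A quick way to try this from a shell (a sketch, assuming the example file above and the Info-ZIP unzip tool):

unzip -l fractal.ora           # manifest.xml and content.xml sit alongside stack.xml
cp fractal.ora fractal.odg     # same archive, OpenDocument-friendly extension
libreoffice fractal.odg        # LibreOffice opens it as a drawing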

To better demonstrate the point, I wanted to "show it with code!" I decided to modify Pinta (a Paint program written in GTK and C#) and my changes are on GitHub. The relevant file is Pinta/Pinta.Core/ImageFormats/OraFormat.cs which is the OpenRaster importer and exporter.

This is a proof of concept, it is limited and not useful to ordinary users. The point is only to show that OpenRaster could borrow more from OpenDocument. It is a small bit of compatibility that is not important by itself but being part of the larger group could be useful.

December 10, 2014

Not exponential after all

We're saved! From the embarrassing slogan "Live exponentially", that is.

Last night the Los Alamos city council voted to bow to public opinion and reconsider the contract to spend $50,000 on a logo and brand strategy based around the slogan "Live Exponentially." Though nearly all the councilors (besides Pete Sheehey) said they still liked the slogan, and made it clear that the slogan isn't for residents but for people in distant states who might consider visiting as tourists, they now felt that basing a campaign around a theme nearly all of the residents revile was not the best idea.

There were quite a few public comments (mine included); everyone was civil and sensible and stuck well under the recommended 3-minute time limit.

Instead, the plan is to go ahead with the contract, but ask the ad agency (Atlas Services) to choose two of the alternate straplines from the initial list of eight that North Star Research had originally provided.

Wait -- eight options? How come none of the previous press coverage or the previous meeting mentioned that there were options? Even in the 364 page Agenda Packets PDF provided for this meeting, there was no hint of that report or of any alternate straplines.

But when they displayed the list of eight on the board, it became a little clearer why they didn't want to make the report public: they were embarrassed to have paid for work of this quality. Check out the list:

  • Where Everything is Elevated
  • High Intelligence in the High Desert
  • Think Bigger. Live Brighter.
  • Great. Beyond.
  • Live Exponentially
  • Absolutely Brilliant
  • Get to a Higher Plane
  • Never Stop Questioning What's Possible

I mean, really. Great Beyond? Are we all dead? High Intelligence in the High Desert? That'll certainly help with people who think this might be a bunch of snobbish intellectuals.

It was also revealed that at no point during the plan was there ever any sort of focus group study or other tests to see how anyone reacted to any of these slogans.

Anyway, after a complex series of motions and amendments and counter-motions and amendments and amendments to the amendments, they finally decided to ask Atlas to take the above list, minus "Live Exponentially"; add the slogan currently displayed on the rocks as you drive into town, "Where Discoveries are Made" (which came out of a community contest years ago and is very popular among residents); and ask Atlas to choose two from the list to make logos, plus one logo that has no slogan at all attached to it.

If we're lucky, Atlas will pick Discoveries as one of the slogans, or maybe even come up with something decent of their own.

The chicken ordinance discussion went well, too. They amended the ordinance to allow ten chickens (instead of six) and to try to allow people in duplexes and quads to keep chickens if there's enough space between the chickens and their neighbors. One commenter asked for the "non-commercial" clause to be struck because his kids sell eggs from a stand, like lemonade, which sounded like a very reasonable request (nobody's going to run a large commercial egg ranch with ten chickens); but it turned out there's a state law requiring permits and inspections to sell eggs.

So, folks can have chickens, and we won't have to live exponentially. I'm sure everyone's breathing a little more easily now.

Revamp of Volumetric Shell

Hi!
These days I have gone back to Volumetric Shell (a non-intersecting extrusion/offset tool), and along with many tweaks I've improved the core algorithm, so now non-manifold cases should never happen; border quality is improved too.

These are some random dev screenshots I would like to share, because I love to watch other devs' random screens! LOL



December 08, 2014

A look at new developer features

As the development window for GNOME 3.16 advances, I've been adding a few new developer features, selfishly, so I could use them in my own programs.

Connectivity support for applications

Picking up from where Dan Winship left off, we've merged support for applications to detect network availability, especially the "connected to a network but not to the Internet" case.

In glib/gio now, watch the value of the "connectivity" property in GNetworkMonitor.
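
Here's a minimal sketch of what that might look like from an application (my own illustration, not from the GLib documentation, and it assumes the API as it landed; build with gcc connectivity.c $(pkg-config --cflags --libs gio-2.0)):

#include <gio/gio.h>

static void
connectivity_changed (GObject *object, GParamSpec *pspec, gpointer user_data)
{
  GNetworkMonitor *monitor = G_NETWORK_MONITOR (object);

  /* G_NETWORK_CONNECTIVITY_LIMITED is the "connected to a network
   * but not to the Internet" case mentioned above. */
  switch (g_network_monitor_get_connectivity (monitor)) {
    case G_NETWORK_CONNECTIVITY_FULL:
      g_print ("full Internet connectivity\n");
      break;
    case G_NETWORK_CONNECTIVITY_LIMITED:
      g_print ("on a network, but not on the Internet\n");
      break;
    default:
      g_print ("local or captive-portal connectivity only\n");
      break;
  }
}

int
main (void)
{
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);
  GNetworkMonitor *monitor = g_network_monitor_get_default ();

  /* Watch the "connectivity" property and check the initial state. */
  g_signal_connect (monitor, "notify::connectivity",
                    G_CALLBACK (connectivity_changed), NULL);
  connectivity_changed (G_OBJECT (monitor), NULL, NULL);

  g_main_loop_run (loop);
  return 0;
}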

Grilo automatic network awareness

This glib/gio feature allows us to show/hide Grilo sources from applications' view if they require Internet and LAN access to work. This should be landing very soon, once we've made the new feature optional based on the presence of the new GLib.

Totem

And finally, this means we'll soon be able to show a nice placeholder when no network connection is available, and there are no channels left.

Grilo Lua resources support

A long-standing request, GResources support has landed for Grilo Lua plugins. When a script is loaded, we'll look for a separate GResource file with ".gresource" as the suffix, and automatically load it. This means you can use a local icon for sources with the URL "resource:///org/gnome/grilo/foo.png". Your favourite Lua sources will soon have icons!

Grilo Opensubtitles plugin

The developers affected by this new feature may be a group of one, but if the group is ever to expand, it's the right place to do it. This new Grilo plugin will fetch the list of available text subtitles for specific videos, given their "hashes", which are now exported by Tracker.

GDK-Pixbuf enhancements

I can point you to the NEWS file for the latest version, but the main gains are that GIF animations won't eat all your memory, that DPI metadata is supported in the JPEG, PNG and TIFF formats, and that, for image viewers, you can tell whether a TIFF file is multi-page, so you can open it in a more capable viewer.

Batched inserts, and better filters in GOM

Does what it says on the tin. This is useful for populating the database quicker than through piecemeal inserts; it also means you don't need to chain inserts when inserting multiple items.

Mathieu also worked on fixing the priority of filters when building complex queries, as well as supporting more than 2 items in a filter ("foo OR bar OR baz" for example).

My Letter to the Editor: Make Your Voice Heard On 'Live Exponentially'

More on the Los Alamos "Live Exponentially" slogan saga: There's been a flurry of letters, all opposed to the proposed slogan, in the Los Alamos Daily Post these last few weeks.

And now the issue is back on the council agenda; apparently they're willing to reconsider the October vote to spend another $50,000 on the slogan.

But considering that only two people showed up to that October meeting, I wrote a letter to the Post urging people to speak before the council: Letter to the Editor: Attend Tuesday's Council Meeting To Make Your Voice Heard On 'Live Exponentially'.

I'll be there. I've never actually spoken at a council meeting before, but hey, confidence in public speaking situations is what Toastmasters is all about, right?

(Even though it means I'll have to miss an interesting sounding talk on bats that conflicts with the council meeting. Darn it!)

A few followup details that I had no easy way to put into the Post letter:

The page with the links to Council meeting agendas and packets is here: Los Alamos County Calendar.

There, you can get the short Agenda for Tuesday's meeting, or the full 364 page Agenda Packets PDF.

The branding section covers pages 93-287. But the graphics the council apparently found so compelling, which swayed several of them from initially not liking the slogan to deciding to spend a quarter million dollars on it, are in the final presentation from the marketing company, starting on p. 221 of the PDF.

In particular, a series of images like this one, with the snappy slogan:

Breathtaking raised to the power of you
LIVE EXPONENTIALLY

That's right: the advertising graphics that were so compelling they swayed most of the council are even dumber than the slogan by itself. Love the superscript on the you that makes it into an exponent. Get it ... exponentially? Oh, now it all makes sense!

There's also a sadly funny "Written Concept" section just before the graphics (pages 242- in the PDF) where they bend over backward to work in scientific-sounding words, in bold each time.

But there you go. Hopefully some of those Post letter writers will come to the meeting and let the council know what they think.

The council will also be discussing the much debated proposed chicken ordinance; that discussion runs from page 57 to 92 of the PDF. It's a non-issue for Dave and me since we're in a rural zone that already allows chickens, but I hope they vote to allow them everywhere.

December 07, 2014

How to code a nice user-guided foreground extraction algorithm? (Addendum)

After writing my last article on the Easy (user-guided) foreground extraction algorithm, I realized you might think I was exaggerating when claiming the whole algorithm can be re-coded very quickly from scratch. After all, I’ve just illustrated how things work by using G’MIC command lines, and there are already a lot of image processing algorithms implemented in G’MIC, so it is somehow a biased demonstration. So let me just give you the corresponding C++ code for the algorithm. Here again, you may find I cheat a little bit, because I use some functions of a C++ image processing library (CImg) that actually does most of the hard implementation work for me.

But:

  1. You can use this library too in your own code. The CImg Library I use here works on multiple platforms and has a very permissive license, so you probably don’t have to re-code the key image processing algorithms by yourself. And even if you want to do so, you can still look at the source code of CImg. It is quite clearly organized and function codes are easy to find (the whole library is defined in a single header file CImg.h).
  2. I never said the whole algorithm could be done in a few lines, only that it was easy to implement :)

So, dear C++ programmer, here is the simple prototype you need to make the foreground extraction algorithm work:

#include "CImg.h"
using namespace cimg_library;

int main() {

  // Load input color image.
  const CImg<float> image("image.png");

  // Load input label image.
  CImg<float> labels("labels.png");
  labels.resize(image.width(),image.height(),1,1,0);  // Be sure labels has the correct size.

  // Compute gradients.
  const float sigma = 0.002f*cimg::max(image.width(),image.height());
  const CImg<float> blurred = image.get_blur(sigma);
  CImgList<float> gradient = blurred.get_gradient("xy");
    // gradient[0] and gradient[1] are two CImg images which contain
    // respectively the X and Y derivatives of the blurred RGB image.

  // Compute the potential map P.
  CImg<float> P(labels);
  cimg_forXY(P,x,y)
    P(x,y) = 1/(1 +
                cimg::sqr(gradient(0,x,y,0)) + // Rx^2
                cimg::sqr(gradient(0,x,y,1)) + // Gx^2
                cimg::sqr(gradient(0,x,y,2)) + // Bx^2
                cimg::sqr(gradient(1,x,y,0)) + // Ry^2
                cimg::sqr(gradient(1,x,y,1)) + // Gy^2
                cimg::sqr(gradient(1,x,y,2))); // By^2

  // Run the label propagation algorithm.
  labels.watershed(P,true);

  // Display the result and exit.
  (image,P,labels).display("Image - Potential - Labels");

  return 0;
}

To compile it, using g++ on Linux for instance, you have to type something like this in a shell (I assume you have all the necessary headers and libraries installed):

$ g++ -o foo foo.cpp -Dcimg_use_png -lX11 -lpthread -lpng -lz

Now, execute it:

$ ./foo

and you get this kind of result:

[Screenshot: the display window showing Image - Potential - Labels]

Once you have the resulting image of propagated labels, it is up to you to use it the way you want: to split the original image into foreground/background layers, or to keep it as a mask associated with the color image.

The point is that even if you add the code of the important CImg methods I’ve used here, i.e. CImg<T>::get_blur(), CImg<T>::get_gradient() and CImg<T>::watershed(), you probably won’t exceed about 500 lines of C++ code. Yes, instead of a single line of G’MIC code. Now you see why I’m also a big fan of coding image processing stuff directly in G’MIC instead of C++ ;).

A last remark before ending this addendum:

Unfortunately, the label propagation algorithm itself is hardly parallelizable. It is based on a priority queue, and basically the choice of the next label to set depends on how the latest label has been set. This is a sequential algorithm by nature. So, when processing large images, a good idea is to do the computations and preview the foreground extraction result on a smaller preview of the original image. Then compute the full-resolution mask only once, after all key points have been placed by the user.
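
In CImg terms, the preview idea is essentially a one-liner (a sketch with assumed variable names, following the program above):

// Interact on a half-size preview (negative sizes are percentages in CImg),
// then recompute P and run labels.watershed(P,true) once at full resolution.
const CImg<float> preview = image.get_resize(-50,-50,1,3,3); // 50% size, linear interp.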

Well that’s it, happy coding!

December 06, 2014

How to code a nice user-guided foreground extraction algorithm?


Principle of the algorithm:

About three months ago, I added a new filter in the G’MIC plug-in for GIMP, which lets the user interactively extract a foreground object present in a color image from its background (considering a single-layered bitmap image as the input). But instead of doing this the “usual” way, e.g. by letting the user crop the object manually with any kind of selection tool, this filter relies on a semi-automatic (user-guided) algorithm to do the job. Basically, the only thing the user needs to do is place key points which tell about the nature of the underlying pixels: either they belong to the foreground (symbolized by green dots) or to the background (red dots). The algorithm then tries to propagate those labels all over the image, so that every image pixel gets a binary label (either foreground or background). The needed interaction is quite limited, and the user doesn’t need to be an image processing guru to control what he does: left/right mouse buttons to add a new foreground/background key point (or move an existing one), middle button and mouse wheel to zoom in/out and pan. Nothing really complicated, easy mode.


Fig.1. Principle of the user-guided foreground extraction algorithm: the user puts some foreground/background key points over a color image (left), and the algorithm propagates those labels for all pixels, then splits the initial image into two foreground/background layers (middle and right).

Of course, the algorithm which performs the propagation of these binary labels is quite dumb (as most image processing algorithms actually are), despite the fact that it tries to follow the contours detected in the image as much as possible. So, sometimes it just assigns the wrong label to a pixel. But from the user's point of view, this is not so annoying, because it is really easy to locally correct the algorithm's estimation by putting additional key points in the wrongly labelled regions and running a new iteration of the algorithm.

Here is a quick video that shows how foreground extraction can be done when using this algorithm. Basically we start by placing one or two key points, run the algorithm (i.e. press the Space bar), then correct the erroneous areas by adding or moving key points, run the algorithm again, etc., until the contours of the desired object to extract are precise enough. As the label propagation algorithm is dumb (and hence reasonably fast), the corresponding workflow runs smoothly.

At this stage, two observations have to be done:

1. As a user, you can easily experiment with this in GIMP, if you have the G’MIC plug-in installed (and if this is not the case, do not wait one second longer and go install it! ;) ). This foreground extraction filter is located in Contours / Extract foreground [interactive]. When clicking on the Apply button, it opens a new interactive window where you can start adding your foreground/background key points. Press Space to update the extracted mask (i.e. run a new iteration of the algorithm) until it fits your requirements, then quit the window and your foreground (and optionally background) selection will be transferred back to GIMP. I must say I have been quite surprised by the good feedback I got for this particular filter. I’m aware it can probably be a time-saver in some cases, but I can’t get the overall simplicity of the underlying algorithm out of my head.


Fig.2. Location of the “Extract foreground” filter in the G’MIC plug-in for GIMP.

2. As a developer, you need to convince yourself that this label propagation algorithm is trivial to code. Dear reader, if by chance you are a developer of open-source software for image processing or photo retouching, and you think this kind of foreground extraction tool could be a cool feature to have, then be aware that it can be re-coded from scratch in a very short amount of time. I’m not lying: the hardest part is actually coding the GUI. I illustrate this fact in what follows, and give some details about how this propagation algorithm is currently implemented in G’MIC.


Details of the implementation:

Now I assume that the hardest work has already been done: the design of the GUI for letting the user place foreground/background key points. We then have two known images as the input of our algorithm.


Fig.3.a. The input color image (who’s that guy ?)


Fig.3.b. An image of labels, containing the FG/BG key points

At this stage, the image of user-defined labels (in Fig.3.b.) contains pixels whose value can only be 0 (unlabeled pixels = black), 1 (labeled as background = gray) or 2 (labeled as foreground = white). The goal of the label propagation algorithm is then to set a value of 1 or 2 to the pixels that currently have a label of 0. This label propagation is performed by a so-called watershed algorithm, which is in fact nothing more than the usual Dijkstra's algorithm applied on a regular grid (the support of the image) instead of a generic graph. The toughest parts of the label propagation algorithm are:

  1. Managing a priority queue (look at the pseudo-code on the Wikipedia page of Dijkstra's algorithm to see what I mean). This is a very classical algorithmic structure, so nothing terrible for a programmer; a minimal sketch is shown right after this list.
  2. Defining a scalar field P of potentials (or priorities) to drive the label propagation. We have to define a potential/priority value for each image pixel, which tells whether the current pixel should be labeled in priority or not (hence the need for a priority queue). So P has the same size as the input image, but has only one channel (see it as a matrix of floating-point values if you prefer).

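To make point 1 concrete, here is a minimal C++ sketch of the propagation loop (my own illustration, not the actual G'MIC source; it assumes a label image with values 0/1/2 and a potential map P as defined below):

#include <queue>
#include <tuple>
#include <vector>

// Grow the user-defined labels over the whole image, flat areas first.
void propagate_labels(std::vector<int> &labels,        // 0=unknown, 1=bg, 2=fg
                      const std::vector<float> &P,     // potentials in ]0,1]
                      int w, int h) {
  typedef std::tuple<float,int,int> Item;              // (priority, x, y)
  std::priority_queue<Item> q;                         // highest priority first
  const int dx[] = { -1, 1, 0, 0 }, dy[] = { 0, 0, -1, 1 };

  // Seed the queue with the user-defined key points.
  for (int y = 0; y<h; ++y)
    for (int x = 0; x<w; ++x)
      if (labels[y*w + x]) q.push(Item(P[y*w + x],x,y));

  // Propagate: pixels in flat regions (high potential) get labeled first,
  // pixels on contours (low potential) last, so labels stop at object edges.
  while (!q.empty()) {
    const Item item = q.top(); q.pop();
    const int x = std::get<1>(item), y = std::get<2>(item);
    for (int k = 0; k<4; ++k) {
      const int nx = x + dx[k], ny = y + dy[k];
      if (nx<0 || ny<0 || nx>=w || ny>=h) continue;
      if (labels[ny*w + nx]) continue;                 // already labeled
      labels[ny*w + nx] = labels[y*w + x];             // inherit the label
      q.push(Item(P[ny*w + nx],nx,ny));
    }
  }
}

This is roughly what the -watershed command used below does for you (and what CImg<T>::watershed() does in the C++ addendum).
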
Suppose that you tell the propagation algorithm that all pixels have the same priority (i.e. you feed it with a constant potential image P(x,y)=1). Then, your labels will propagate such that each reconstructed label takes the value of its nearest known (user-defined) label. This does not take the image data into account, so there is no chance of getting a satisfactory result. Instead, the key idea is to define the potential image P such that it depends on the contour variations of the color image you want to segment. More precisely, you want pixels with low variations to have a high priority, while pixels on image structures and contours should be labeled last. In G’MIC, I’ve personally defined the potential image P from the input RGB image with the following (heuristic) formula:

P(x,y) = 1 / (1 + ||∇R_σ(x,y)||² + ||∇G_σ(x,y)||² + ||∇B_σ(x,y)||²)

where

∇I = (∂I/∂x, ∂I/∂y)

is the usual definition of the image gradient, here appearing in the potential map P for each image channel R, G and B. To be more precise, we estimate those gradients from a slightly blurred version of the input image (hence the σ added to R, G and B, which stands for the standard deviation of the Gaussian blur applied to the RGB color image). Defined like this, the values of this potential map P are low for pixels on image contours, and high (with a maximum of 1) for pixels located in flat regions.

If you have the command-line interface gmic of G’MIC installed on your machine, you can easily compute and visualize such a propagation map P for your own image. Just type this simple command line in a shell:

$ gmic image.jpg -blur 0.2% -gradient_norm -fill '1/(1+i^2)'

For our sample image above, the priority map P looks like this (white corresponds to the value 1, and black to something close to 0):


Fig.4. Estimated potential/priority map P.

Now, you have all the ingredients to extract your foreground. Run the label propagation algorithm on your image of user-defined labels, with the priority map P estimated from the input color image. It will output the desired image of propagated labels:


Fig.5.a. Input color image.


Fig.5.b. Result of the labels propagation algorithm.

And you won’t believe it, but you can actually do all these things in a single command line with gmic:

$ gmic image.jpg labels.png --blur[0] 0.2% -gradient_norm[-1] -fill[-1] '1/(1+i^2)' -watershed[1] [2] -reverse[-2,-1]

 

In Fig.5.b., I let the foreground/background frontier pixels keep their value 0, to better show the estimated background (in grey) and foreground (in white). Those pixels are of course also labeled in the final version of the algorithm. And once you have computed this image of propagated labels, the extraction work is almost done. Indeed, this label image is a binary mask that you can use to easily split the input color image into two foreground/background layers with alpha channels:


Fig.6.a. Estimated foreground layer, after label propagation.


Fig.6.b. Estimated background layer, after label propagation.

So, now you have a better idea of how all this works… Isn’t that a completely stupid algorithm? Apologies if I broke the magic of the thing! :D

Of course, you can run the full G’MIC foreground extraction filter, not only from the plug-in, but also from the command line, by typing:

$ gmic image.jpg -x_segment , -output output.png

The input file should be a regular RGB image. This command generates an output file in RGBA mode, where only a binary alpha channel has been added to the image (the RGB colors of each pixel remain unchanged).

And now?

You know the secret of this nice (but simple) foreground extraction algorithm. So you have two options:

  1. If you are not a developer, harass the developers of your favorite graphics software to include this “so-simple-to-code” algorithm in it!
  2. If you are a developer interested in the algorithm, just try to code a prototype for it. You will be surprised by how fast it can actually be done!

In any case, please don’t hate me if none of these suggestions worked ;). See you in a next post!

See also the Addendum I wrote, which describes how this can be implemented in C++.

December 05, 2014

David Tschumperlé and OpenSource.graphics

Some of you may be familiar with G'MIC, the rather extensive image processing language created by David Tschumperlé that has a very popular plug-in for GIMP.

If you're a fan, here's a nice little treat for you. David has started a blog about image processing with open source software:

http://opensource.graphics




If you'd like a front seat to some of the more technically interesting things going on behind the scenes at G'MIC, this would be a good blog to follow, I think. He's already come out of the gate with a neat 3D colorcube investigation of some images (seen above: Mairi).

December 04, 2014

Visualizing the 3D point cloud of RGB colors


Preliminary note:

This is the very first post of my blog on Open Source Graphics. So, dear reader, welcome! I hope you will find some useful (or at least distracting) articles here. Please feel free to comment, and to share your advice and experience on graphics creation, photo retouching and image processing.


Understanding how the image colors are distributed:

According to Wikipedia, the histogram of an image is “a graphical representation of the tonal distribution in a digital image. It plots the number of pixels for each tonal value. By looking at the histogram for a specific image a viewer will be able to judge the entire tonal distribution at a glance.”

This is a very common way to better understand the global statistics of the pixel values in an image. Often, when we look at an image histogram (a feature that most image editors and photo retouching programs provide), we see something like this:


Fig.1. Lena image (left), and histogram of the luminances (right).

Here, the histogram (on the right) shows the distribution of the image luminances, for each possible value of the luminance from 0 to 255 (assuming an 8-bit-per-channel color image as input). Sometimes, the displayed histogram instead contains the distribution of the pixel values for the 3 image channels RGB simultaneously, like in the figure below:


Fig.2. Lena image (left), and its RGB histogram (right).

For instance, this RGB histogram clearly shows that the image contains a lot of pixels with high values of Red, which is in fact not so surprising for this particular image of Lena. But what we still miss is the relationship between the three color components: do the pixels with a lot of Red cover a wide range of different colors, or is it just the same reddish color repeated all over the image? Look at the image below: it has exactly the same RGB histogram as the image above. I’ve just mirrored the Green and Blue channels along the x- and y-axes respectively. But the perception of the image colors is in fact very different: we see a lot more greens in it.


Fig.3. Partially mirrored Lena image (left), and its RGB histogram.

So clearly, it could be interesting to get more clues about the correlation between the different color components of the pixels in an image. This requires visualizing something different: the 3D distribution (or histogram) of the colors in RGB space. Each pixel (x,y) of a color image can indeed be viewed as a point with 3D coordinates (R(x,y),G(x,y),B(x,y)) located inside a cube of size 256x256x256, in the 3D color space RGB. The whole set of image pixels then forms a 3D point cloud in this 3D space. To visualize this point cloud, each displayed point takes a color that can be either its actual RGB value (to get the 3D colors distribution), or a color expressing the number of occurrences of this RGB color in the initial image (to get the 3D colors histogram). For the Lena image, it looks like this. A wireframe representation of the RGB cube boundaries has also been added to better locate the image colors:


Fig.4. Lena image (left), colors distribution in 3D RGB space (middle) and colors histogram in 3D RGB space (right).

With this way of representing colors distributions, we get a better feeling about:

  1. The global variety of the colors in an image.
  2. Locally, the amount of dispersion of color tones and shades around a given color.

I personally prefer the 3D colors distribution view (Fig.4, middle), as it is easier to see which color corresponds to which point, even if we lose the information about the color occurrences (which is hardly visible in the 3D colors histogram anyway). Now, if we compare the 3D colors distributions of the Lena image and its partially-mirrored version (remember, they have the same RGB histogram), we can see a big difference!


Fig.5. Comparisons of the colors distributions in the 3D RGB space, for the Lena image, and its partially-mirrored version.

Let me show you some other examples, to illustrate that visualizing 3D colors distributions is a nice additional way to get more global information about how the colors of an image are arranged:


Fig.6. Las Vegas image (left) and its colors distribution in the 3D RGB space (right).

The image above was shot by Patrick David in Las Vegas. We can see in the 3D colors distribution view which colors are present in the input image (which is already quite colorful), and which are not (purples are almost non-existent). Below is another image (a portrait of Mairi) shot by Patrick:


Fig.7. Portrait of Mairi (left) and its colors distribution in the 3D RGB space (right).

Here, the 3D colors distribution reflects the two dominant colors of the image (greys and skin/hair colors) and the continuity of their shading into black. Note that this is not something we can see from the RGB histogram of the same input image, so this kind of visualization really says something new about the global statistics of the image colors.


Fig.8. Portrait of Mairi (left) and its RGB histogram (right).

Let me show you some more extreme cases:


Fig.9. Bottles image (left) and its colors distribution in the 3D RGB space (right).

We can clearly see three main color branches, corresponding to each of the three main colors of the bottles (blue, pink and yellow-green), with all the shades (up to white) the corresponding pixels take. Now, let us take the same image after it has been desaturated:


Fig.10. Desaturated bottles image (left) and its colors distribution in 3D RGB space (right).

No surprise: the only 3D color points you get belong to the diagonal of the RGB cube, i.e. the points where R = G = B. This last example shows the opposite case, with a (synthetic) image containing a lot of different colors:


Fig.11. Synthetic plasma image (left) and its colors distribution in 3D RGB space (right).

I think all these examples illustrate quite well that we can often learn a lot about the color dispersion of an image by looking at its 3D colors distribution, in addition to the usual histograms. This is something I miss in many image editors and image retouching programs.

So, my next step will be to show how I’ve created those views, and how you can do the same with your own images. And of course, only with open-source imaging tools ;) I chose to use the command-line interface of G’MIC for this particular task, because it is quite flexible for managing 3D vector objects (and also because I know it quite well!).


Generating 3D colors distribution views with G’MIC:

I assume you already have the command-line interface of G’MIC (which is called gmic) installed on your system. The commands I show below should be typed in a shell (personally, I use bash on a machine running Linux). The first thing to do after installing gmic is to make sure we get the latest update of the commands proposed in the G’MIC framework:

$ gmic -update

[gmic]-0./ Start G'MIC interpreter.
[gmic]-0./ Update commands from the latest definition file on the G'MIC server.
[gmic]-0./ End G'MIC interpreter.

Now, the main trick consists in using the specific G’MIC command -distribution3d, which converts an input color image into a 3D vector object corresponding to the desired colors distribution we want to visualize. I will also add two things to improve it a little bit:

  1. if the image is too big, I will reduce it to a reasonable size, because the number of points in the generated 3D object is always equal to the number of pixels in the image, and we don’t want it to be unnecessarily large, for performance reasons.
  2. I will also merge the generated 3D pointcloud with the boundaries of the RGB color cube, in wireframe mode, as in the examples you’ve seen in the previous section.

The long (but comprehensive) command line below does the job:

$ gmic input_image.jpg -resize2dy "{min(h,512)}" -distribution3d -colorcube3d -primitives3d[-1] 1 -+3d

[gmic]-0./ Start G'MIC interpreter.
[gmic]-0./ Input file 'input_image.jpg' at position 0 (1 image 3283x4861x1x3).
[gmic]-1./ Resize 2d image [0] to 512 pixels along the y-axis, preserving 2d ratio.
[gmic]-1./ Get 3d color distribution of image [0].
[gmic]-1./ Input 3d RGB-color cube.
[gmic]-2./ Convert primitives of 3d object [1] to segments.
[gmic]-2./ Merge 3d objects [0,1].
[gmic]-1./ Display 3d object [0] = '[3d distribution]*' (177160 vertices, 177176 primitives).

It opens an interactive visualization window with the 3D colors distribution that you can rotate with your mouse:

[Screenshot of the interactive 3D colors distribution view]

Nice! But let me go one step further.

Now, I want to generate an animation (basically a video file) showing the 3D colors distribution object rotating, just beside the input image. For this, I can use the command -snapshot3d inside a -do…-while loop in order to get all the frames of my animation. I won’t go into too much technical detail because this post is not about the G’MIC programming language (for that, I refer the interested reader to these nicely written tutorial pages), but basically we can do this task easily by defining our own custom command for G’MIC. Just copy/paste the code snippet below into your $HOME/.gmic configuration file (or %APPDATA%/gmic for Windows users) in order to create a new command named -animate_color_distribution3d, which will be recognized by the G’MIC interpreter the next time you invoke it:

animate_color_distribution3d :
    
  angle=0
  delta_angle=2

  # Create 3d object from input image.
  --local
    -resize2dy {min(h,512)} -distribution3d
    -colorcube3d -primitives3d[-1] 1
    -+3d -center3d -normalize3d -*3d 180
    -rotate3d 1,1,0,60
  -endl

  # Resize input image so that its height is exactly 300.
  -resize2dy[0] 300

  # Render animated frames (with height 300).
  -do
    --rotate3d[1] 0,1,0,$angle
    -snapshot3d[-1] 300,0,0,0,0,0 --a[0,-1] x
    -w[-1] -rm[-2]
    angle={$angle+$delta_angle}
  -while {$angle<360}
  -rm[0,1] -resize2dx 640

Once you’ve done that, the command is ready for use. It takes no arguments, so you can simply call it like this:

$ gmic input_image.jpg -animate_color_distribution3d -output output_video.avi

And after a while, your video file should be saved in your current path. That’s it!

Below, I show some examples which have been generated by this custom G’MIC command. The first picture is a beautiful (and colorful) drawing by David Revoy. I find its 3D color distribution particularly interesting.

The landscape picture below is also interesting, because of the almost bi-tonal nature of the image:

With the Mona Lisa (La Joconde) painting, we can also see that the number of effective color tones is quite reduced:

And if you apply the command -animate_color_distribution3d to a synthetic plasma image, as done below, you will get a really nice 3D point cloud, just like the ones we can see in some sci-fi movies!


And now?

Well, that’s it! Nothing more to say for this post :). Surprisingly, it is already much longer than I expected when I started writing it. Next time you see a color image, maybe you could try to visualize its color distribution in the 3D RGB space along with the classical histograms. Maybe it will help you to better understand the dynamics of the image colors, who knows? At least we know that this is something we can do easily with FLOSS imaging tools. See you next time!

December 02, 2014

Ripping a whole CD on Linux

I recently discovered that my ancient stereo turntable didn't survive our move. So all those LPs I brought along, intending to rip to mp3 when I had more time, will never see bits.

So I need to buy new versions of some of that old music. In particular, I'd lately been wanting to listen to my old Flanders and Swann albums. Flanders and Swann were a terrific comedy music duo (think Tom Lehrer only less scientifically oriented) from the 1960s.

So I ordered a CD of The Complete Flanders & Swann, which contains all three of the albums I inherited from my parents. Woohoo! I ran a little script I have that rips a whole CD to a directory of separate MP3 songs, and I was all set.

Until I listened to it. It turns out that when the LP album was turned into a CD, they put the track breaks in the wrong place. These albums are recordings of live performances. Each song has a spoken intro, giving a little context for the song that follows. On the CD, each track starts with a song, and ends with the spoken intro for the next song. That's no problem if you always listen to whole albums in order. But I like to play individual tracks, or listen to music on random play. So this wasn't going to work at all.

I tried using audacity to copy the intro from the end of one track and paste it onto the beginning of another. That worked, but it was tedious and fiddly. A little research showed me a much better way.

First: Rip the whole CD

First I needed to rip the whole CD as one gigantic track. My script had been running cdparanoia tracknumber filename.wav. But it took some study of the cdparanoia manual before I finally found the way to rip a whole CD to one track: you can specify a range of tracks, starting at 0 and omitting the end track.

cdparanoia 0- outfile.wav

Use Audacity to split and save the tracks

Now what's the best way to split a recording into separate tracks? Fortunately the Audacity manual has a nice page on that very subject: Splitting a recording into separate tracks.

Mostly, the issue is setting labels -- with Tracks->Add Label at Selection or Tracks->Add Label at Playback Position. Use Ctrl-1 to zoom as much as you need to see where the short pauses are. Then listen to the audio, pausing or clicking and setting labels appropriately.

It's a bit fiddly. For instance, if you pause your listening to set a label, you might want to save the audacity project so you don't lose the label positions you've set so far. But you can't save unless you Stop the playback; and that loses the current playback position which you may not yet have set a label for. Even if you have set a label for it, you'll need to click to set the selection to the label you just made if you want to continue playing from where you left off. It all seems a little silly and unintuitive ... but after a few tries you'll find a routine that works for you.

When all your labels are set, then File->Export Multiple.... You will have to go through a bunch of dialogs involving metadata for each track; just hit return, since audacity ignores any metadata you type in and won't actually write it to the MP3 file. I have no idea why it always prompts for metadata then doesn't use it, but you can use a program like id3tool later to add proper metadata to the tracks.
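
For example, tagging one of the exported tracks with id3tool might look something like this (the exact option names are in id3tool's man page; I'm writing these from memory):

id3tool --set-title="Misalliance" --set-artist="Flanders & Swann" --set-album="The Complete Flanders & Swann" track03.mp3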

So, no, the tools aren't perfect. On the other hand, I now have a nice set of Flanders and Swann tracks, and can listen to Misalliance, Ill Wind and The GNU Song complete with their proper introductions.

November 26, 2014

The 2015 Libre Calendar

So Jehan Pages contacted me a little while ago about participating in a project to produce a “Libre Calendar”. Once he described the idea, it was an easy choice to join up and help out!


Through his non-profit LILA in France, he has assembled 6 artists to produce works specifically for this calendar (Disclaimer: I'm one of the artists):


Aryeom Han


Henri Hebeisen


Gustavo Deveze


Brian Beck



The proceeds from the calendar will be split evenly between the artists, the LILA non-profit, and various F/OSS projects that the artists used (GIMP, Blender, Inkscape, etc...). The full list is on the site. (Second disclaimer: I'm deferring any of my proceeds to the projects).

This is a really nice way to donate a bit to the various projects and get a neat gift for it.

Head over to the site to see some sample images from the artists, and consider buying a calendar! Jehan is looking to meet a minimum order before moving forward (around 100 I believe).

New algorithm for bone distortion

As part of our recent developments we have changed the algorithm for the upcoming bone distortion feature. Check out the demonstration video!...

Yam-Apple Casserole

Yams. I love 'em. (Actually, technically I mean sweet potatoes, since what we call "yams" here in the US aren't actual yams, but the root from a South American plant, Ipomoea batatas, related to the morning glory. I'm not sure I've ever had an actual yam, a tuber from an African plant of the genus Dioscorea).

But what's up with the way people cook them? You take something that's inherently sweet and yummy -- and then you cover them with brown sugar and marshmallows and maple syrup and who knows what else. Do you sprinkle sugar on apples before you eat them?

Normally, I bake a yam for about an hour in the oven, or, if time is short (which it usually is), microwave it for about four and a half minutes, then finish up with 20-40 minutes in a toaster oven at 350°. The oven part seems to be necessary: it brings out the sweetness and the nice crumbly texture in a way that the microwave doesn't. You can read about some of the science behind this at this Serious Eats discussion of cooking sweet potatoes: it's because sweet potatoes have an odd enzyme, beta amylase, that breaks down carbohydrates into sugars, thus bringing out the vegetable's sweetness, but that enzyme only works in a limited temperature range, so if you heat up a sweet potato too fast the enzyme doesn't have time to work.

But Thanksgiving is coming up, and for a friend's dinner party, I wanted to make something a little more festive (and more easily parceled out) than whole baked yams.

A web search wasn't much help: nearly everything I found involved either brown sugar or syrup. The most interesting casserole recipes I saw fell into two categories: sweet and spicy yams with chile powder and cayenne pepper (and brown sugar), and yam-apple casserole (with brown sugar and lemon juice). As far as I can tell it has never occurred to anyone before me to try either of these without added sugar. So I bravely volunteered myself as test subject.

I was very pleased with the results. The combination of the tart apples, the sweet yams and the various spices made a lovely combination. And it's a lot healthier than the casseroles with all the sugary stuff piled on top.

Yam-Apple Casserole without added sugar

Ingredients:

  • Yams, as many as needed.
  • Apples: 1-2 apples per yam. Use a tart variety, like granny smith.
  • chile powder
  • sage
  • rosemary or thyme
  • cumin
  • nutmeg
  • ginger powder
  • salt
(Your choice whether to use all of these spices, just some, or different ones.)

Peel and dice yams and apples into bite-sized pieces, inch or half-inch cubes. (Peeling the yams is optional.)

Drizzle a little olive oil over the yam and apple pieces, then sprinkle spices. Your call as to which spices and how much. Toss it all together until the pieces are all evenly coated with oil and the spices look evenly distributed.

Lay out in a casserole dish or cake pan and bake at 350°F until the yam pieces are soft. This takes at least an hour, two if you made big pieces or layered the pieces thickly in the pan. The apples will mostly disintegrate into little mushy bits between the pieces of yam, but that's fine -- they're there for flavor, not consistency.

Note: After reading about beta-amylase and its temperature range, I had the bright idea that it would be even better to do this in a crockpot. Long cooking at low temps, right? Wrong! The result was terrible, almost completely tasteless. Stick to using the oven.

I'm going to try adding some parsnips, too, though parsnips seem to need to cook longer than sweet potatoes, so it might help to pre-cook the parsnips a few minutes in the microwave before tossing them in with the yams and apples.

November 25, 2014

Two weeks in Siberia with Morevna Project, open-source animation tools and anime

Media researcher Julia Velkova took the trouble to visit our small studio at Gorno-Altaysk and met the core team members. With this guest post she shares her first impressions and reveals some “behind-the-scenes” details of Morevna Project.

This post is about my almost two-week long stay in the city of Gorno-Altaysk in Siberia, Russia. As part of my research on the production of open-source animation films – made with open-source tools and released as commons, with sources – I have been trying to figure out the webs of connections between this place in Siberia, free software graphics communities, and Morevna Project. I was curious to see the context of producing an open-source animation film in a place like Gorno-Altaysk, and how this context matters. I also wanted to meet more of the people involved in the production of Morevna Project.

Before coming to Gorno-Altaysk I had been talking a lot to Konstantin Dmitriev online, and I had been following his Morevna anime film project and Synfig developments. I knew that the place he works from is somewhere far away in Russia, and that there are some other people involved. It was, however, very difficult to get any better idea of who these people were and what their roles were through Konstantin's accounts alone – they were surely helpful, but not enough, either for me or for my research.

So, in the same way as I went to Amsterdam and the Blender Institute earlier this year, I went to Gorno-Altaysk at the beginning of November to learn more.

Gorno-Altaysk is not Amsterdam. Until I actually tried to get there, I did not realize how far away it was. The trip took 3 days of travel. And, hey, we live in the 21st century and everything is fast – how was this possible? Well, I flew relatively quickly from Stockholm to Novosibirsk – the capital of Siberia – leaving on Monday and arriving on Tuesday morning. Konstantin met me there, and the next day in the evening we boarded the night train to a city called Biysk (Бийск). From there it took another hour and a half by bus to Gorno-Altaysk, on a scenic road built by prisoners deported to Siberia by the Soviet regime. We arrived on Thursday morning, at 08 am.


Frosty and sunny, and squeezed between mountains, the city looked cozier than industrial, humid and highly polluted Novosibirsk. High buildings co-exist with traditional Russian log houses; remnants of the Soviet past reside side by side with a 4D cinema and a mall. The frosty air was saturated with the smell of winter – smoke from the chimneys and stoves running on coal.

A barking dog met us in the log house where Konstantin lives and where we went for breakfast. Konstantin works mostly from home. Entering his workplace, the first thing I noticed was this hand-made stereoscopic screen.

The screen turned out to be fully functional, and we got to test it by watching a 20-minute stereoscopic 2D/3D animation short made with Synfig, Blender and Krita, which Konstantin has been directing and working on over the past few months for a client from Novosibirsk. The project has been an arena for testing and developing open-source-based stereoscopic animation pipelines combining Blender, Synfig and Krita. It has also helped develop new tools to simplify and speed up the animation process, for example RenderChan. Not least, it is currently paying the bills for the three people involved in the project – Konstantin Dmitriev, Nikolai Mamashev and Anastasia Majzhegisheva (Nastya).

RenderChan mascot by Nastya Majzhegisheva

The image of the screen sharing a desk with a Lenovo ThinkPad X220T and a workstation with an 8-core CPU, all running Linux, together with the view of Konstantin cutting and moving sound and animation with a Wacom pen on his screen, recalled David Revoy’s techie sculpture made of a Cintiq and Linux. I am reminded of the connections between tools, open-source software and graphics that integrate into people’s whole lifestyle.

Konstantin and his Lenovo tablet during the stay in Novosibirsk

I soon notice several things. Besides the stereoscopic animation for a client, Konstantin also works on a new website for Morevna Project; coordinates Synfig’s development with developer Ivan Mahonin; and twice a week teaches free animation classes to teenagers on the premises of a small local extracurricular art school. The teaching is in fact shared with Nikolai Mamashev, the art director of the Morevna film demo. I start wondering how all this relates to Morevna Project and its production – the object of my initial interest and my reason to come here. It also brings up an even bigger question – what exactly is Morevna Project now? After all, it completed its first goal, a demo film, in 2012, but since then no new animation has been produced. Instead, there have been fan artworks drawn by Nastya; a Synfig training course; and improvements to Synfig’s code, primarily done by Ivan Mahonin, who has been working on and off on coding (depending on how the Synfig donations were developing).

I meet Nastya. She is 15, and she is local. In fact, everyone is local, and I am one of the very few foreigners and non-locals currently in town. Nastya tells me about her passion for drawing, animé and falling in love with Krita: “It was magic – to draw with a pen on a tablet. And later, when I tried Krita, we became soulmates”. This friendship has recently led her to move from Windows to Linux for the sake of stability and better functionality of the drawing tools she uses so intensively. She names four different animation short projects in which she is involved as an artist, among them Morevna and ‘Neighbour from Hell’, a short on which she works as an artist with two more girls in the animation classes led by Konstantin and Nikolai.

Forest background – work in progress, Nastya Majzhegisheva

Nastya drawing on the old studio Cintiq

Below are two scenes from ‘Neighbour from Hell’ – the short for which Nastya draws the tree background.

Artwork: Anastasia (Nastya) Majzhegisheva.
Animation: Tamara (Toma) Hudyakova, Anastasia Popova.

I got to visit the animation classes twice during my stay. They take place in an ad-hoc studio on the premises of the local art school Adamant, in a room that has to be reconfigured every time and where any equipment of value is kept in a safe. The first thing I see is a first-generation Wacom Cintiq, which is Nastya’s working place. ‘Chinese animation studios sell off old equipment, so we managed to get it for $300’, explains Konstantin. This is one of the two drawing tablets of this type that the students have, and Nastya’s attempts to draw on one of them in Krita at high resolution quickly run into the small amount of memory available on the connected workstation. At the moment the art school and Konstantin have no resources to fix this, and at the same time nobody in the area seems to understand the importance of helping to improve things. The art school lacks Internet too – another underprioritised and underfunded thing. This is of course sad, considering that Konstantin’s classes are the only animation school in town and in the whole nearby region. They are also probably the only ones in Siberia teaching exclusively open-source-based pipelines.

Adamant art school where the animation classes take place

Gradually, six students, all teenagers, start arriving with their own laptops of all sorts of budget brands, most of them assembled and produced in Russia. Some also have drawing tablets – anything from a 2001 Wacom Graphire to relatively new Wacom Bamboo pads. I overhear the following conversation:

A student, Tamara, shows her new drawing – a horse with a rider.

Vika: Did you draw him in Krita?
T: No, in GIMP.
V: I try to draw in Photoshop but I find it very complicated.
T: Well, this is why I draw in Gimp. I couldn't manage well with Photoshop either.
V: Can I see some more of your work?

Tamara shows her more drawings, explaining:

I did this in Photoshop, this in Gimp, this in MyPaint.

In class

Sample scene by Anna Erogova:
Artwork and animation by Anna Erogova (16 years old).
Made in Synfig and MyPaint.

Poet and Robber (sample scenes):
Artwork and animation by Igor Sidorov (13 years old).
Made in Synfig, MyPaint and Gimp.

Dolls and Rain (animation sample):
Artwork and animation by Vika Popova (16 years old).
Made in Synfig, and Krita.

Neighbour from Hell (sample scene 1):
Artwork: Anastasia (Nastya) Majzhegisheva (15 years old).
Animation: Tamara Hudyakova (19 years old).
Made in Synfig, Krita, Gimp and MyPaint.

Neighbour from Hell (sample scene 2):
Artwork: Anastasia (Nastya) Majzhegisheva (15 years old).
Animation: Anastasia Popova (19 years old).
FX: Anastasia Majzhegisheva.
Made in Synfig, Krita, Gimp and MyPaint.

It suddenly strikes me that everyone in the studio, including Konstantin and Nikolai, is a passionate animé fan. And while the start of this passion has been different for everyone, in the end they have all been attracted by the specificity and peculiarity of the genre. As Nikolai describes it, ‘It is very different. It is perceived very differently. It is like food. Imagine that you usually eat one thing, but one day you get to try a totally different food that you cannot comprehend at all – Chinese, Japanese, something spicy, specific, that you cannot understand at all. Then you are – wow, what is this? It was like that with animé for me. I was very impressed.’

The students in class are so obsessed with animé that they draw it, breathe it, live it every minute of their lives. They tell me that it is their way to experience life and learn about life, and at the same time it is their life. They say: it is unconventional. It has psychology, and pedagogy. They compete to tell me stories of uncontrollable inspiration that can strike while writing a school exam, when they start drawing on the exam sheet they cannot bring home. It is the animé passion that has brought them all to animation and to Konstantin’s studio and classes.

Anime/open-source tools gang

Inspired by the example set by the Blender Institute, Konstantin has been trying with Morevna Project to establish a similar environment, but focused on 2D animation/animé film development. The animation classes transfer knowledge of working with open-source graphics tools locally and help create some of the future contributors to the project. This knowledge, and the daily work with Synfig, drives further development of new features, which, when shared online, expands the community of Synfig users. What I realized during my two-week stay in Gorno-Altaysk is that Morevna Project is a framework – a film project that serves as a driving force for creating an environment and pipelines for 2D open-source animation, and that has driven substantial development of Synfig in recent years. It is also a channel that transforms consumption and fandom into a culture of making, and a place for experimenting with models of sharing in which tools, artwork and knowledge get created. And similarly to the spirit in which things are done at the Blender Institute, what keeps ideas and projects developing is the wish to make things, and the fascination with the magic of animation – a will of such strength that it slowly pushes things through despite the (still) smaller scale and the limitations of the local and national context in which they are made.

During my stay I met many people, and had the opportunity to record many hours of interviews and personal histories about animé, animation and open-source tools. This has been an invaluable experience for better understanding the similarities and differences between the various environments in which open-source-based animation is produced, and the nature of the graphics communities wrapped around these projects. In conclusion I want to say a big ‘thank you!’ to Konstantin, Nikolai, Nastya Majzhegisheva, Igor Dmitriev, Ivan Mahonin, Toma, Nastya Popova, Vika and Ivan for the opportunity to meet you and get to know a little piece of your world.

I would also like to share here one of the interviews (in Russian) – with Morevna Project’s art director Nikolai Mamashev, who talks about his daily work with animation, Morevna Project, anime, open-source software, Blender, and Synfig. Enjoy listening!

Click here for the audio (OGG)

November 20, 2014

Unbound RGB with littleCMS slow

The last days I played with lcms’ unbound mode. In unbound mode the CMM can convert colours with negative numbers. That allows one to use, for instance, the LMS colour space, a colour space very basic to the human visual system. Likewise unbound RGB – linear gamma with sRGB primaries – has long circulated as the new covers-all colour space, a kind of replacement for ICC- or WCS-style colour management. There are some reservations about that statement, as linear RGB is most often understood as “no additional info needed”, which is not easy to build a flexible CMS upon. During the last days I hacked lcms to write the mpet tag in its device link profiles in order to work inside the Oyranos CMS. The multi processing elements tag type (mpet) contains the internal state of lcms’ transform as a rendering pipeline. This pipeline is able to do unbound colour transforms if no table-based elements are included. The tested device link contained single gamma values and matrices in its D2B0 mpet tag. The Oyranos image-display application rendered my LMS test pictures correctly, in contrast to the 16-bit integer version. However, the speed with lcms was lower by a factor of ~3 compared to the usual integer-math transforms. The most time-consuming part might be the pow() call in the equation. It is possible that GPU conversions are much faster, only I am not aware of an implementation of mpet transforms on the GPU.

November 19, 2014

Synfig Training Package in Portuguese

The Synfig Training Package is available in Portuguese now!...

GIMP Magazine Issue #6 released

The newly released issue #6 of GIMP Magazine features a "Using GIMP for portrait and fashion photography" master class by Aaron Tyree who uses GIMP professionally, and a gallery of other artworks and photos made or processed with GIMP.

The team is planning to switch to monthly releases; however, they need your support to cover the costs of publishing a free magazine. You can sponsor the project at Patreon or visit the magazine's gift shop to make a donation.

November 18, 2014

Unix "remind" file for US holidays

Am I the only one who's always confused about when holidays happen?

Partly it's software, I guess. In these days of everybody keeping their schedules on Google's or Apple's servers, maybe most people keep up on these things.

But being the dinosaur I am, I'm still resistant to keeping my schedule in the cloud on a public server. What if I need to check for upcoming events while I'm on a trip out in the remote desert somewhere? (Not to mention the obvious privacy considerations.) For years I used PalmOS PDAs, but when I switched to Android and discovered how poor the offline calendar options are, I decided that I should learn how to use the old Unix standby.

It's been pretty handy. I run remind ~/[remind-file-name] when I log in in the morning, and it gives me a nice summary of upcoming events:

DPU Solar surcharge meeting, 5:30-8:30 tomorrow
NMGLUG meeting in 2 days' time

Of course, I can also have it email me with reminders, or pop up a window, but so far I haven't felt the need.
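If I ever do want the email, a crontab entry along these lines ought to do it (an untested sketch: the remind file path matches the functions below, and the address is a placeholder):

0 7 * * * remind -q ~/Docs/Lists/remind | mail -s "Today's reminders" you@example.com

The -q keeps remind from queueing timed reminders in the background, which you don't want in a one-shot cron job.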

I can also display a nice calendar showing upcoming events for this month or the next several months. I made a couple of shell functions:

mycal () {
        # Print an ascii calendar of upcoming events.
        # Optional argument: number of months (default 1).
        months=$1
        if [[ x$months = x ]]
        then
                months=1
        fi
        remind -c$months ~/Docs/Lists/remind
}

mycalp () {
        # Generate a postscript calendar and display it with gv.
        # Optional argument: number of months (default 2).
        months=$1
        if [[ x$months = x ]]
        then
                months=2
        fi
        remind -p$months ~/Docs/Lists/remind | rem2ps -e -l > /tmp/mycal.ps
        gv /tmp/mycal.ps &
}

The first prints an ascii calendar; the second displays a nice postscript calendar complete with little icons for phases of the moon.
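Typical usage, with the optional month count:

$ mycal 3    # ascii calendars for the next three months
$ mycalp     # two months as postscript, displayed in gv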

But what about those holidays?

Okay, that gives me a good way of storing reminders about appointments. But I still don't know when holidays are. (I had that problem with the PalmOS scheduling program, too -- it never knew about holidays either.)

Web searching didn't help much. Unfortunately, "remind" is a terrible name in this age of search engines. If someone has already solved this problem, I sure wasn't able to find any evidence of it. So instead, I went to Wikipedia's list of US holidays, with the remind man page in another tab, and wrote remind stanzas for each one -- except Easter, which is much more complicated.

But wait -- it turns out that remind already has code to calculate Easter! It just needs a slightly more complicated stanza: instead of the standard form of

REM  1 Apr +1 MSG April Fool's Day %b
I need to use this form:
REM  [trigger(easterdate(today()))] +1 MSG Easter %b

The %b in each case is what gives you the notice of when the event is in your reminders, e.g. "Easter tomorrow" or "Easter in two days' time". The +1 is how far beforehand you want to be reminded of each event.
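Since remind lets you do arithmetic on dates, the same trick should extend to other dates keyed to Easter. For instance, this stanza (my reading of the man page -- untested) would handle Good Friday, two days before Easter:

REM  [trigger(easterdate(today())-2)] +1 MSG Good Friday %b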

So here's my remind file for US holidays. I make no guarantees that every one is right, though I did check them for the next 12 months and they all seem to be working.

#
# US Holidays
#
REM      1 Jan    +3 MSG New Year's Day %b
REM Mon 15 Jan    +2 MSG MLK Day %b
REM      2 Feb       MSG Groundhog Day %b
REM     14 Feb    +2 MSG Valentine's Day %b
REM Mon 15 Feb    +2 MSG President's Day %b
REM     17 Mar    +2 MSG St Patrick's Day %b
REM      1 Apr    +9 MSG April Fool's Day %b
REM  [trigger(easterdate(today()))] +1 MSG Easter %b
REM     22 Apr    +2 MSG Earth Day %b
REM Fri  1 May -7 +2 MSG Arbor Day %b
REM Sun  8 May    +2 MSG Mother's Day %b
REM Mon  1 Jun -7 +2 MSG Memorial Day %b
REM Sun 15 Jun       MSG Father's Day
REM      4 Jul    +2 MSG 4th of July %b
REM Mon  1 Sep    +2 MSG Labor Day %b
REM Mon  8 Oct    +2 MSG Columbus Day %b
REM     31 Oct    +2 MSG Halloween %b
REM Tue  2 Nov    +4 MSG Election Day %b
REM     11 Nov    +2 MSG Veteran's Day %b
REM Thu 22 Nov    +3 MSG Thanksgiving %b
REM     25 Dec    +3 MSG Christmas %b
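To check the whole file the way I did, you can print the next year's worth of calendars in one shot (adjust the path to wherever you keep the file):

$ remind -c12 ~/Docs/Lists/remind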

November 16, 2014

Arnab Goswami is changing the way election results are delivered! From aggressive and dramatic delivery to statistics and even a scorecard – in spite of the fact that there is actually nothing happening right now other than counting – it looks like Twenty20, without breaks and cheerleaders!

November 14, 2014

Tracking Usage

One of the long-standing goals of Unity has been to provide an application-focused presentation of the desktop. Under X11 this proves tricky, as anyone can connect to X and doesn't necessarily have to say which application their windows belong to. So we wrote BAMF, which does a pretty good job of matching windows to applications, but it could never be perfect because there simply wasn't enough information available. When we started to rethink the world assuming a non-X11 display server, we knew there was one thing we really wanted: to never, ever have something like BAMF again.

This meant designing, from startup to shutdown, complete tracking of an application before it started creating windows in the display server. We were then able to use the same mechanisms to create a consistent and secure environment for the applications. This is good for both developers and users, as their applications start in a predictable way each and every time they're started. We also set up the per-application AppArmor confinement that the application lives in.

Enough backstory. What's really important to this blog post is that we also get a reliable event when an application starts and stops. So I wrote a little tool that takes those events out of the log and presents them as usage data. It is cleverly called:

$ ubuntu-app-usage

And it presents a list of all the applications that you've used on the system along with how long you've used them. How long do you spend messing around on the web? Now you know. You're welcome.

It's not perfect, in that it counts all the time you've used the device; it'd be nice to query just the last week or the last year as well. Perhaps even a percentage of time. I might add those little things in the future; if you're interested, you can beat me to it.

Some postcard illustrations to help KDE through the winter…

KDE winter fundraiser

If you’re following KDE community news, you probably already know that we’re running a donation campaign to help fund KDE community costs for next year.
Everyone giving at least 30€ will receive a cool postcard featuring Konqi, in their choice of the three designs available.

So here are the three illustrations I made for these cards:

-Konqi Gift

-Konqi Freedom

-Konqi Party

So please consider giving something to this fundraiser, and enjoy the postcards! :)

On a side note, this weekend I’ll be at Capitole du Libre and Akademy-Fr in Toulouse.
I’ll give a talk about contributing to KDE as a user, another one about the latest Krita news, and I’ll spend some time at the KDE booth to talk to people and show off some cool software.
If you’re in the area, come and say hi ;)

November 13, 2014

Crockpot Green Chile Posole Stew

Posole is a traditional New Mexican dish made with pork, hominy and chile. Most often it's made with red chile, but Dave and I are both green chile fans so that's how I make it. I make no claims as to the resemblance between my posole and anything traditional; but it sure is good after a cold, windy day like we had today.

Dave is leery of anything called "posole" -- I think the hominy reminds him visually of garbanzo beans, which he dislikes -- but he admits that they taste fine in this stew. I call it "green chile stew" rather than "posole" when talking to him, and then he gets enthusiastic.

Ingredients (all quantities very approximate):

  • pork, about a pound; tenderloin works well but cheaper cuts are okay too
  • about 10 medium-sized roasted green chiles, whatever heat you prefer (or 1 large or 2 medium cans diced green chile)
  • 1 can hominy
  • 1 large or two medium russet potatoes (or equivalent amount of other type)
  • 1 can chicken broth
  • 1 tsp salt
  • 1 tsp red chile powder
  • 1/2 tsp cumin
  • fresh garlic to taste
  • black pepper and hot sauce (I use Tapatio) to taste

Start the crockpot heating: I start it on high then turn it down later. Add broth.

Dice potato. At least half the potato should be in small pieces, say 1/4" cubes, or even shredded; the other half can be larger chunks. I leave the skin on.

Pre-cook diced potato in the microwave for 7 minutes or until nearly soft enough to eat, in a loosely covered bowl with maybe 1" of water in the bottom. (This will get messy and the water gets all over and you have to clean the microwave afterward. I haven't found a solution to that yet.) Dump cooked potato into crockpot.

Dice pork into stew-sized pieces, trimming fat as desired. Add to crockpot.

De-skin and de-seed the green chiles and cut into short strips. (Or use canned or frozen.) Add to crockpot.

Add spices: salt, chile powder, cumin, and hot sauce (if your chiles aren't hot enough -- we have a bulk order of mild chiles this year so I sprinkled liberally with Tapatio).

Cover, reduce heat to low.

Cook 6-7 hours, occasionally stirring, tasting and correcting the seasoning. (I always add more of everything after I taste it, but that's me.)

Serve with bread, tortillas, sopaipillas or similar. French bread baked from the refrigerated dough in the supermarket works well if you aren't brave enough to make sopaipillas (I'm not, yet).

November 07, 2014

Working in Macromedia Flash 8


Here’s a time-lapse screen-capture of me working in Macromedia Flash 8. This little bit of animation is unlikely to make it into the finished movie, as I later decided on a different approach here. This just shows one aspect of how I use Flash; other work videos to come. Thanks to the Blender Institute for posting this and thinking about maybe possibly developing FLOSS vector animation tools.


November 06, 2014

New GIMP Save/Export plug-in: Saver

The split between Save and Export that GIMP introduced in version 2.8 has been a matter of much controversy. It's been over two years now, and people are still complaining on the gimp-users list.

Early on, I wrote a simple Python plug-in called Save-Export Clean, which saved over an image's current save or export filename regardless of whether the filename was XCF (save) or a different format (export). The idea was that you could bind Ctrl-S to the plug-in and not be pestered by needing to remember whether it was XCF, JPG or what.

Save-Export Clean has been widely cited, and I hope it's helped some people who were bothered by the Save/Export split. But personally I didn't like it very much. It wasn't very flexible -- there was no way to change the filename, for one thing, and it was awfully easy to overwrite an original image without knowing that you'd done it. I went back to using GIMP's separate Save and Export, but in the back of my mind I was turning over ideas, trying to understand my workflow and what I really wanted out of a GIMP Save plug-in.

[Screenshot: GIMP Saver-as... plug-in] The result of that was a new Python plug-in called Saver. I first wrote it a year ago, but I've been tweaking it and using it since then, with Ctrl-S bound to Saver and Ctrl-Shift-S bound to Saver as.... I wanted to make sure that it was useful and working reliably ... and somehow I never got around to writing it up and announcing it formally ... until now.

Saver, like Save/Export Clean, will overwrite your chosen filename, whether XCF or another format, and will mark the image as saved so GIMP won't pester you when you exit.

What's different? Mainly, three things:

  1. A Saver as... option so you can change the filename or file type.
  2. Merges multiple layers so they'll show up properly in your JPG or PNG image.
  3. An option to save as .xcf or .xcf.gz and, at the same time, export a copy in another format, possibly scaled down. So you can maintain your multi-layer XCF image but also update the JPG copy that you're going to put on the web.

I've been using Saver for nearly all my saving for the past year. If I'm just making a quick edit of a JPEG camera image, Ctrl-S overwrites it without questioning me. If I'm editing an elaborate multi-layer GIMP project, Ctrl-S overwrites the .xcf.gz. If I'm planning to export that image for the web, I Ctrl-Shift-S to bring up the Saver As... dialog, make sure the main filename is .xcf.gz, set a name (ending in .jpg) for the exported copy; and from then on, Ctrl-S will save both the XCF and the JPG copy.

Saver is available on my github page, with installation instructions here: GIMP Saver and Save/Export Clean Plug-ins. I hope you find it useful.

November 04, 2014

Angry Birds maker Rovio’s ‘Plunder Pirates’ featured on App Store

Midoki’s latest release ‘Plunder Pirates’, a strategy game melding 4X exploration with tower defense, set in the Caribbean, was picked as Editor’s Choice on Apple’s iTunes App Store. Art director Daniel Martinez-Normand has been instrumental in bringing the team to use Blender alongside their more traditional Maya workflow, and is keen to share his and his studio’s experience of this transition.


First Blood

In November 2013, looming deadlines for their Crazy Taxi project and frustration with aspects of UV handling in Maya’s texture paint tools sent Daniel on a hunt for an alternative tool for the job. He quickly found Blender tutorials that drew him in, leading to a swift download and install of the package. The speed of this process impressed him further, especially when compared to the lengthy procedure of getting Maya onto a machine, and while he continued to have some initial trouble with naming conventions in Blender, as well as the unique ‘right click’ methodology, he was enamored enough to implement techniques learned from video tutorials into the team’s workflow.

Saviour

Chief among these was the ‘Ocean’ modifier, used to generate wave surfaces for pre-rendered action sequences. They had initially abandoned plans for these sorts of shots, believing that while they could achieve the look they required in-game, the setup and render time needed to produce an equivalent set of shots was too much for their four-week schedule. However, Daniel managed to get a working ocean scene up and running in Blender within a few days, so they re-upped their expectations and went ahead with the sequence.

Workflow

  • Most models are made in and exported to the game from Maya LT using FBX. Plunder Pirates uses Midoki’s own engine, and the model converter was designed to read FBX files with the 2012 specification.
  • Midoki do a lot of marketing images for social media, so once a model is finished they import it into Blender, re-apply Cycles materials, and subdivide the mesh with extra details if needed. Setting up a scene doesn’t take long, and they can easily produce a couple of renders per week while carrying on with work on the game.
  • Any UI assets that require pre-rendered images (such as buttons) are also rendered in Blender.
  • Finally, all the latest characters are fully modelled and baked in Blender, and only exported to Maya for rigging and FBX export. Blender is much faster than Maya at baking textures and the quality of the bakes is stunning; adding Cycles baking to the mix has improved that further.

Testimony

“Blender is a solid and powerful professional tool, with an incredibly fast update cycle. And more importantly: in Blender you feel that every new tool and feature has been designed by someone who actually needs it. These are not tick boxes on a sales brochure as we are sadly used to. These are tools designed for a real purpose, and they work. And that’s something any studio, big or small, can benefit from.”   Daniel Martinez-Normand
He also says that had he been as aware of the benefits of Blender three years ago, Midoki would likely have a Blender-only pipeline; their use of Maya stems from their previous roles in the industry and its integration into their pipeline. Their long-term goal is to improve the model converter so it can read FBX files with different specifications, from either Maya or Blender.

Studio

Midoki are a small games studio based in Leamington Spa in England, a small town in the middle of the UK long associated with video game production, being home to Pitbull, Radiant Worlds, Sega Hardlight and Codemasters among others. The company was created around three years ago with a dream-team line-up of staff and management: from its chairman Ian Livingstone (Games Workshop / Eidos) to company director Ian Hetherington (Psygnosis / SCEE), the company has brought together some of the industry’s leading talents. In their short history they have collaborated with Sega on a ‘Crazy Taxi’ title, produced a 3D exploration app (Recce) covering three major world cities (London, San Francisco and New York), and subsequently used that technology to gamify those urban environments in their title ‘Go Deliver’. Plunder Pirates, their first global release, has already racked up more than 4 million downloads since launch.


November 01, 2014

Chinese version of Training Package is available as Pay-What-You-Want!

Pay any amount you want and get the Chinese version of the Synfig Training Package...

Hardware support news

Trackballs

I dusted off (literally) my Logitech Marble trackball to replace the Intuos tablet + mouse combination I had been using to cut down on the lateral movement of my right arm, which was causing back pain.

Not that you care about that one bit, but it meant that I needed a way to get a scroll wheel working with this scroll-wheel-less trackball. That's now implemented in gnome-settings-daemon for GNOME 3.16. You'd run:


gsettings set org.gnome.settings-daemon.peripherals.trackball scroll-wheel-emulation-button 8

With "8" being the number of the mouse button that will turn the trackball's ball into a scroll wheel. We plan to add an interface to configure this in the Settings.
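If you don't know what number a given button reports, one way to find out (assuming an X11 session with the xinput tool installed) is to watch the raw events; the device name here is just an example, use whatever "xinput list" shows for your trackball:

$ xinput list
$ xinput test "Logitech USB Trackball"
button press 8
button release 8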

Touchscreens

Touchscreens are now switched off when the screensaver is on. This means you'll usually need to use one of the hardware buttons on tablets, or a mouse or keyboard on laptops to turn the screen back on.

Note that you'll need a kernel patch to avoid surprises when the touchscreen is re-enabled.

More touchscreens

The driver for the Goodix touchscreen found in the Onda v975w is now upstream as well.

October 31, 2014

Simulating a web page timeout

Today dinner was a bit delayed because I got caught up dealing with an RSS feed that wasn't feeding. The website was down, and Python's urllib2, which I use in my "feedme" RSS fetcher, has an inordinately long timeout.

That certainly isn't the first time that's happened, but I'd like it to be the last. So I started to write code to set a shorter timeout, and realized: how does one test that? Of course, the offending site was working again by the time I finished eating dinner, went for a little walk then sat down to code.

I did a lot of web searching, hoping maybe someone had already set up a web service somewhere that times out for testing timeout code. No such luck. And discussions of how to set up such a site always seemed to center around installing elaborate heavyweight Java server-side packages. Surely there must be an easier way!

How about PHP? A web search for that wasn't helpful either. But I decided to try the simplest possible approach ... and it worked!

Just put something like this at the beginning of your HTML page (assuming, of course, your server has PHP enabled):

<?php sleep(500); ?>

Of course, you can adjust that 500 to be any delay you like.

Or you can even make the timeout adjustable, with a few more lines of code:

<?php
 if (isset($_GET['timeout']))
     sleep($_GET['timeout']);
 else
     sleep(500);
?>

Then surf to yourpage.php?timeout=6 and watch the page load after six seconds.
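The same page is handy for exercising client-side timeout handling. For instance, with curl (and a hypothetical URL):

$ curl --max-time 5 'http://example.com/yourpage.php?timeout=30'

curl gives up after five seconds and exits with code 28, its timeout error -- exactly the failure mode you want to test against.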

Simple once I thought of it, but it's still surprising no one had written it up as a cookbook formula. It certainly is handy. Now I just need to get some Python timeout-handling code working.

October 30, 2014

appdata-tools is dead

PSA: If you’re using appdata-validate, please switch to appstream-util validate from the appstream-glib project. If you’re also using the M4 macro, just replace APPDATA_XML with APPSTREAM_XML. I’ll ship both the old binary and the old m4 file in appstream-glib for a little bit, but I’ll probably remove them again the next time we bump ABI. That is all. :)
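Usage is otherwise the same; with a hypothetical file name:

$ appstream-util validate foo.appdata.xml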

October 27, 2014

Development Builds (with Sound Layer)

The new builds of development version with Sound Layer functionality are available for download....

October 24, 2014

Partial solar eclipse, with amazing sunspots

[Partial solar eclipse, with sunspots] We had perfect weather for the partial solar eclipse yesterday. I invited some friends over for an eclipse party -- we set up a couple of scopes with solar filters, put out food and drink and had an enjoyable afternoon.

And what views! The sunspot group right at the center of the sun's disk was the largest and most complex I'd ever seen, and there were some much smaller, more subtle spots in the path of the eclipse. Meanwhile, the moon's limb gave us a nice show of mountains and crater rims silhouetted against the sun.

I didn't do much photography, but I did hold the point-and-shoot up to the eyepiece for a few shots about twenty minutes before maximum eclipse, and was quite pleased with the result.

An excellent afternoon. And I made too much blueberry bread and far too many oatmeal cookies ... so I'll have sweet eclipse memories for quite some time.

October 23, 2014

perf.gnome.org – introduction

My talk at GUADEC this year was titled Continuous Performance Testing on Actual Hardware, and covered a project that I’ve been spending some time on for the last 6 months or so. I tackled this project because of accumulated frustration that we weren’t making consistent progress on performance with GNOME. For one thing, the same problems seemed to recur. For another, we would get anecdotal reports of performance problems that were very hard to put a finger on. Was the problem specific to some particular piece of hardware? Was it a new problem? Was it a problem that we had already addressed? I wrote some performance tests for gnome-shell a few years ago – but running them sporadically wasn’t that useful. Running a test once doesn’t tell you how fast something should be, just how fast it is at the moment. And if you run the tests again in 6 months, even if you remember what numbers you got last time, even if you still have the same development hardware, how can you possibly figure out what change is responsible? There will have been thousands of changes to dozens of different software modules.

Continuous testing is the goal here – every time we make a change, to run the same tests on the same set of hardware, and then to make the results available with graphs so that everybody can see them. If something gets slower, we can then immediately figure out what commit is responsible.

We already have a continuous build server for GNOME, GNOME Continuous, which is hosted on build.gnome.org. GNOME Continuous is a creation of Colin Walters, and internally uses Colin’s ostree to store the results. ostree, for those not familiar with it, is a bit like Git for trees of binary files, and in particular for operating systems. Because ostree can efficiently share common files and represent the difference between two trees, it is a great way to both store lots of build results and distribute them over the network.

I wanted to start with the GNOME Continuous build server – for one thing so I wouldn’t have to babysit a separate build server. There are many ways the build can break, and we’ll never get away from having to keep an eye on it. Colin and, more recently, Vadim Rutkovsky were already doing that for GNOME Continuous.

But actually putting performance tests into the set of tests run by build.gnome.org doesn’t work well. GNOME Continuous runs its tests on virtual machines, and a performance test on a virtual machine doesn’t give the numbers we want. For one thing, server hardware is different from desktop hardware – it generally has very limited graphics acceleration, it has completely different storage, and so forth. For another, a virtual machine is not an isolated environment – other processes and unpredictable caching will affect the numbers we get – and any sort of noise makes it harder to see the signal we are looking for.

Instead, what I wanted was to have a system where we could run the performance tests on standard desktop hardware – not requiring any special management features.

Another architectural requirement was that the tests would keep on running, no matter what. If a test machine locked up because of a kernel problem, I wanted to be able to continue on, update the machine to the next operating system image, and try again.

The overall architecture is shown in the following diagram:

HWTest Architecture

The most interesting thing to note in the diagram is that the test machines don’t directly connect to build.gnome.org to download builds or to perf.gnome.org to upload the results. Instead, test machines are connected over a private network to a controller machine which supervises the process of updating to the next build and actually running the tests. The controller has two forms of control over the process – first, it controls the power to the test machines, so at any point it can power cycle a test machine and force it to reboot. Second, the test machines are set up to network boot from the controller, so that after power cycling, the controller machine can determine what to boot – a special image to do an update, or the software being tested. The systemd journal from the test machine is exported over the network to the controller machine, so that the controller can see when the update is done, and collect test results for publishing to perf.gnome.org.

perf.gnome.org is live now, and tests have been running for the last three months. In that period, the tests have run thousands of times, and I haven’t had to intervene once to deal with a problem. Here’s perf.gnome.org catching a regression (fix):

perf.gnome.org regression

I’ll cover more about the details of how the hardware testing setup works and how performance tests are written in future posts – for now you can find some more information at https://wiki.gnome.org/Projects/HardwareTesting.


October 22, 2014

Monkaa, Open Movie by Weybec

Monkaa was made by the new Mumbai studio Weybec. It is a 5-minute animated short, made entirely with Blender, GIMP and other Free/Open Source programs. It has been released as an Open Movie, including all production files and tutorials, under a Creative Commons Attribution license.

Although this short film was produced independently, it was made possible thanks to support from the Blender Institute. Monkaa is a great educational example of design, animation and film making. Combined with all the extras and tutorials, people will be enjoying the collection a lot – either as a DVD purchased in the blender.org e-store, or in Blender Cloud for the open production supporters.

(Or watch on youtube here)

Monkaa is a blue furred, pink faced monkey who consumes a crystallized meteorite, making Monkaa invincibly strong and too hot to handle. Exploring his superpower Monkaa zooms into an unexplored universe.

Produced by: Weybec – www.weybec.com
Released by Blender Institute, in Blender Cloud and the blender.org e-store

-Ton-

A surprise in the mousetrap

I went out this morning to check the traps, and found the mousetrap full ... of something large and not at all mouse-like.

[young bullsnake] It was a young bullsnake. Now slender and maybe a bit over two feet long, it will eventually grow into a larger relative of the gopher snakes that I used to see back in California. (I had a gopher snake as a pet when I was in high school -- they're harmless, non-poisonous and quite docile.)

The snake watched me alertly as I peered in, but it didn't seem especially perturbed to be trapped. In fact, it was so non-perturbed that when I opened the trap, the snake stayed right where it was. It had found a nice comfortable resting place, and it wasn't very interested in moving on a cold morning.

I had to poke it gently through the bars, hold the trap vertically and shake for a while before the snake grudgingly let go and slithered out onto the ground.

I wondered if it had found its way into the trap by chasing a mouse, but I didn't see any swellings that looked like it had eaten recently. I'm fairly sure it wasn't interested in the peanut butter bait.

I released the snake in a spot near the shed where the mousetrap is set up. There are certainly plenty of mice there for it to eat, and gophers when it gets a little larger, and there are lots of nice black basalt boulders to use for warming up in the morning, and gopher holes to hide in. I hope it sticks around -- gopher/bullsnakes are good neighbors.

[young bullsnake caught in mousetrap]

October 21, 2014

A GNOME Kernel wishlist

GNOME has long had relationships with Linux kernel development, in that we would have some developers do our bidding, helping us solve hard problems. Features like inotify, memfd and kdbus were all originally driven by the desktop.

I've posted a wishlist of kernel features we'd like to see implemented on the GNOME Wiki, and referenced it on the kernel mailing-list.

I hope it sparks healthy discussions about alternative (and possibly existing) features, allowing us to make instant progress.

October 20, 2014

KMZ Zorki 4 (Soviet Rangefinder)

Leica rangefinders

Rangefinder-type cameras predate modern single lens reflex cameras. People still use them; it’s just a different way of shooting. Since they’re no longer a mainstream type of camera, most manufacturers stopped making them a long time ago. Except Leica: Leica still makes digital and film rangefinders, and as you might guess, they come at significant cost. Even old Leica film rangefinders easily cost upwards of 1000 EUR. While Leica wasn’t the only brand to manufacture rangefinders through photographic history, it was (and still is) certainly the most iconic brand.

Zorki rangefinders

Now the Soviets essentially tried to copy Leica’s cameras; the result, the Zorki camera, was produced at KMZ. Many different versions exist, and with nearly 2 million cameras produced across more than 15 years, the Zorki 4 was without a doubt the most popular incarnation. Many consider the Zorki 4 to be the camera where the Soviets got it right.

That said, the Zorki 4 more or less looks like a Leica M with its single coupled viewfinder/rangefinder window. In most other ways it’s more like a pre-M Leica, with its M39 lens screw mount. Earlier Zorki 4s have a body finished with vulcanite, which is tough as nails, but if damaged is nearly impossible to fix or replace. Later Zorkis have a body finished with relatively cheap leatherette, which is much more easily damaged and commonly starts peeling off, but should be relatively easy to make better than new. Most Zorkis come with either a Jupiter-8 50mm f/2.0 lens (a Zeiss Sonnar-inspired design) or an Industar-50 50mm f/3.5 (a Zeiss Tessar-inspired design). I’d highly recommend getting a Jupiter-8 if you can find one.

Buying a Zorki with a Jupiter

If you’re looking to buy a Zorki there are a few things to be aware of. Zorkis were produced during the fifties, sixties and seventies in Soviet Russia, often favoring quantity over quality, presumably to be able to meet quotas. The same is likely true for most Soviet lenses as well. So they are both old and may not have met high quality standards to begin with. When buying a Zorki you need to keep in mind that it might need repairs and a CLA (clean / lube / adjust). My particular Zorki had a dim viewfinder because of dirt both inside and out, the shutter speed dial was completely stuck at 1/60 sec, and the film takeup spool was missing. I sent my Zorki and Jupiter-8 to Oleg Khalyavin for repairs, shutter curtain replacement and a CLA. Oleg was also able to provide me with a replacement film takeup spool or two. All in all, having work done on your Zorki will easily set you back about 100 EUR including shipping expenses. Keep this in mind before buying. And even if you get your Zorki in a usable state, you’ll probably have to have it serviced at some point. You may very well want to consider having it serviced sooner rather than later, allowing yourself the benefit of enjoying a newly serviced camera.

Zorkis come without a lens hood, and the Jupiter-8’s glass elements are typically only single-coated, so a hood isn’t exactly a luxury. A suitable aftermarket lens hood isn’t hard to find, though.

Choosing a film stock

So now you have a nice Zorki 4, waiting for film to be loaded into it. As of this writing (2014) there is a smörgåsbord of film available. I like shooting black & white, and I often shoot Ilford XP2 Super 400. Ilford’s XP2 is the only B&W film left that’s meant to be processed along with regular color negative film in regular C41 chemicals (so it can be processed by a one-hour photo service). Like most color negative film, XP2 has a big exposure latitude, remaining usable between ISO 50 — 800, which isn’t a luxury since the Zorki does not come with a lightmeter. While Ilford recommends shooting it at ISO 400, I’d suggest shooting it as if it were ISO 200 film, giving you two stops of both underexposure and overexposure leeway.

I haven’t shot any real color negative film in the Zorki yet, but Kodak New Portra 400 quickly comes to mind. An inexpensive alternative could be Fuji Superia X-TRA 400, which can be found very cheaply, often as store-brand 400 speed film.

Shooting a Zorki

Once you have a Zorki, there are still some caveats you need to be aware of… Most importantly, don’t change the shutter speed while the shutter isn’t cocked (cocking the shutter is done by advancing the film); not heeding this warning may result in internal damage to the camera mechanism. Other notable issues of lesser importance are minding the viewfinder’s parallax error (particularly when shooting at short distances) and making sure you load the film straight.

As I’ve already mentioned, the Zorki 4 does not come with a lightmeter, which means the camera won’t help you get the exposure right; you are on your own. You could use a pricey dedicated light meter (or a less pricey smartphone app), either of which is fairly cumbersome. Given XP2’s exposure latitude, an educated-guesswork approach becomes feasible. There’s a rule-of-thumb system called Sunny 16 for making educated guesstimates of exposure. Sunny 16 states that if you set your shutter speed to the closest reciprocal of your film speed, bright sunny daylight requires an aperture of f/16 to get a decent exposure. Other weather conditions require opening up the aperture according to this table:

Sunny f/16
Slightly Overcast f/11
Overcast f/8
Heavy Overcast f/5.6
Open Shade f/4

If you have doubts when classifying shooting conditions, you may want to err on the side of overexposure, as color negative film tends to handle overexposure better than underexposure. If you’re shooting slide film you should probably avoid using Sunny 16 altogether, as slide film can be very unforgiving if improperly exposed.

Quick example: when shooting XP2 on an overcast day, assuming an alternate base ISO of 200 (as suggested earlier), the shutter speed should be set to 1/250th of a second and the aperture to f/8, giving a fairly large depth of field. Now if we want to reduce our depth of field, we can trade +2 stops of aperture for -2 stops of shutter speed, ending up shooting at 1/1000th of a second at f/4.

Having film processed

After shooting a roll of XP2 (or any roll of color negative film) you need to take it to a local photo shop, chemist or supermarket to have it processed, scanned and printed. Usually you’ll be able to have your film processed in C41 chemicals, scanned to CD and get a set of prints for about 15 EUR or so. Keep in mind that most shops cut your film roll into strips of 4, 5 or 6 negatives, depending on the sleeves they use. Also, some shops might not offer scanning services without ordering prints, since scanning is an integral part of the printmaking process. Resulting JPEG scans are usually about 2 megapixel (1800×1200) equivalent, or sometimes slightly lower (1536×1024). A particular note when using XP2: since it’s processed as if it were color negative film, it’s usually also scanned as if it were color negative film, so the resulting should-be-monochrome scans (and prints for that matter) can often have a slight color cast. This color cast varies; my particular lab usually does a fairly decent job, where the scans have a subtle warm color cast, which isn’t unpleasant at all. But I’ve heard about nasty purplish color casts as well. Regardless, you need to keep in mind that you might need to convert the scans to proper monochrome manually, which can easily be done with any random photo editing software in a heartbeat. Same goes for rotating the images: aside from the usual 90 degree turns, occasionally I get my images scanned upside down, where they need either 180 degree or 270 degree turns; you’ll need to do that yourself as well.

Post-processing the images

First remove all useless data from the source JPEG and, particularly for XP2, remove the JPEG’s chroma (UV) channels to get rid of any color cast:

$ jpegtran -copy none -grayscale -optimize -perfect 0001.JPG > ZRK_0001.JPG

Then add basic EXIF metadata:

$ exiv2 \
   -M"set Exif.Image.Artist Pascal de Bruijn" \
   -M"set Exif.Image.Make KMZ" \
   -M"set Exif.Image.Model Zorki 4" \
   -M"set Exif.Image.ImageNumber $(echo 0001.JPG | tr -cd '0-9' | sed 's#^0*##g')" \
   -M"set Exif.Image.Orientation 3" \
   -M"set Exif.Image.XResolution 300/1" \
   -M"set Exif.Image.YResolution 300/1" \
   -M"set Exif.Image.ResolutionUnit 2" \
   -M"set Exif.Image.YCbCrPositioning 1" \
   -M"set Exif.Photo.ComponentsConfiguration 1 2 3 0" \
   -M"set Exif.Photo.FlashpixVersion 48 49 48 48" \
   -M"set Exif.Photo.ExifVersion 48 50 51 48" \
   -M"set Exif.Photo.DateTimeDigitized $(stat --format="%y" 0001.JPG | awk -F '.' '{print $1}' | tr '-' ':')" \
   -M"set Exif.Photo.UserComment Ilford XP2 Super" \
   -M"set Exif.Photo.ExposureProgram 1" \
   -M"set Exif.Photo.ISOSpeedRatings 400" \
   -M"set Exif.Photo.FocalLength 50/1" \
   -M"set Exif.Photo.LensMake KMZ" \
   -M"set Exif.Photo.LensModel Jupiter-8 50/2" \
   -M"set Exif.Photo.FileSource 1" \
   -M"set Exif.Photo.ColorSpace 1" \
   ZRK_0001.JPG
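To do a whole roll in one go, both steps can be wrapped in a small shell loop, something like this (a sketch, assuming the scans are numbered like 0001.JPG):

for f in [0-9]*.JPG; do
    jpegtran -copy none -grayscale -optimize -perfect "$f" > "ZRK_$f"
    # then run the exiv2 command above against "ZRK_$f",
    # substituting "$f" for 0001.JPG in the ImageNumber
    # and DateTimeDigitized arguments
done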

Finally

Moar

If you want to read more about film photography you may want to consider adding Film Is Not Dead to your shelf.

October 19, 2014

What's next

Some thoughts on further steps in Synfig development....

TNT Drama Series ‘Legends’ Teaser

Loica, a production studio based in Santiago, Chile, has used a novel combination of photography, 3D scanning and Blender to produce a stunning promo for TNT’s recent drama series ‘Legends’, starring British actor Sean Bean.

Working with their office in Santa Monica, they collaborated with the Turner Creative team for a month, taking still photography and 3D scans to the next level and developing post-production techniques to create a sequence with a hyper-real cinematic feel, fitting the high production values of the show and expressing its psychological mystery.

First steps

Starting with detailed photographs and 3D scans of the cast and props, they refined the models with sculpting tools, later adding layers such as hair, shaders and lighting to emphasize the realism and life of the scenes. They built Sean Bean as a low-poly mesh object and, using a multires modifier, sculpted in the fine detail. The model was rigged so the team could easily pose and adapt the character into multiple positions to match the basis photographs, which, combined with the scans, gave them the ability to fully control the look and feel of each shot.

Shading

Shading was added in the form of several passes, including diffuse layers enhanced with cloning and stencil work, while reflection layers were used to add life to the eyes and a feeling of texture to the various materials of the characters’ clothes and props. Movement of light sources in the scenes was used to bring motion to the otherwise completely still characters, giving a sense of time frozen in a moment.

Details

Blender’s hair system was used to simulate Sean’s beard, and the team even went as far as adding smaller details such as skin hair on the nose and eyelashes. In motion, these touches bring a sense of true depth and realism to the shots.

‘Memory Loss’

The team made good use of the Cycles renderer, using a modified glass shader to produce a heavy depth-of-field effect that they have named ‘Memory Loss’. They explored this route after finding that simple post-effect depth of field wasn’t producing enough of what they envisioned. Passing a 3D plane with this shader through the character allowed them to finely control the spatial and focal elements of the shot.

Overall

The studio reports that the team had a great experience with Blender, especially in terms of the single-program workflow. Being able to model, texture, sculpt, preview and render in the same package helped streamline the workflow, which accelerated production.

Studio

Loica’s show reel exhibits a broad range of styles without watering down the quality and beauty of the work they do. From the artsy UNICEF promo and cheeky use of visual effects for Volkswagen to the refined quality of ABC’s ‘Once Upon a Time’ promo, they have proved themselves adept in multiple fields of graphic production. The promo can be viewed here and their website is http://loica.tv/

 

Stellarium 0.13.1 has been released!

After 3 months of development, the Stellarium development team is proud to announce the first bugfix release in the 0.13.x series – version 0.13.1.

This release brings a few new features and fixes:
- Added: Light layer for old_style landscapes
- Added: Auto-detect location via network lookup.
- Added: Seasonal rules for displaying constellations
- Added: Coordinates can be displayed as decimal degrees (LP: #1106743)
- Added: Support for multi-touch gestures on Windows 8 (LP: #1165754)
- Added: FOV on the bottom bar can be displayed in DMS rather than fractional degrees (LP: #1361582)
- Added: Oculars plugin supports eyepieces with permanent crosshairs (LP: #1364139)
- Added: Pointer Coordinates plugin can display more than just RA/Dec (J2000.0) (LP: #1365784, #1377995)
- Added: Angle Measure plugin can now measure position angles to the horizon (LP: #1208143)
- Added: Search tool can search positions in more than just RA/Dec (J2000.0) (LP: #1358706)
- Fixed: Galactic plane renamed to the correct term, Galactic equator (LP: #1367744)
- Fixed: Speed issues when computing lots of comets (LP: #1350418)
- Fixed: Spherical mirror distortion works correctly now (LP: #676260, #1338252)
- Fixed: Location coordinates on the bottom bar are displayed correctly now (LP: #1357799)
- Fixed: Ecliptic coordinates for J2000.0 and grids are displayed correctly now (LP: #1366567, #1369166)
- Fixed: Rule for selecting celestial objects (LP: #1357917)
- Fixed: Loading of extra star catalogs (LP: #1329500, #1379241)
- Fixed: Spurious directory created on startup (LP: #1357758)
- Fixed: Various GUI/rendering improvements (LP: #1380502, #1320065, #1338252, #1096050, #1376550, #1382689)
- Fixed: "missing disk in drive <whatever>" error (LP: #1371183)

A huge thanks to our community whose contributions help to make Stellarium better!

October 18, 2014

Synfig Studio 0.64.2

The new stable version of Synfig Studio is released!...

October 16, 2014

Aspens are turning the mountains gold

Last week both of the local mountain ranges turned gold simultaneously as the aspens turned. Here are the Sangre de Cristos on a stormy day:

[Sangre de Cristos gold with aspens]

And then over the weekend, a windstorm blew a lot of those leaves away, and a lot of the gold is gone now. But the aspen groves are still beautiful up close ... here's one from Pajarito Mountain yesterday.

[Sangre de Cristos gold with aspens]

October 15, 2014

Quick update

Hi all

Just today I realized how long it's been since my last post; sadly, being in Cuba prevents me from posting regular updates.
These last months I've been doing a lot of polishing on the FillHoles tools, for purposes way beyond regular sculpting, to the point of making them a very powerful tool in their own right for mesh healing. I will probably make a short video soon featuring a complete workflow for those tools, but honestly, this offline situation, which has now lasted more than a year, is demotivating me a lot.
I've started working on a quadrangulation tool that is proving challenging and interesting enough to drive me through this situation.
Hopefully in my next post I will be a little happier :)

Cheers to all


GNOME Software and Fonts

A few people have asked me now “How do I make my font show up in GNOME Software” and until today my answer has been something along the lines of “mrrr, it’s complicated“.

What we used to do is treat each font file in a package as an application, and then try to merge them together using some metrics found in the font and 444 semi-automatically generated AppData files from a manually updated .csv file. This wasn’t ideal, as fonts were being renamed, added and removed, which quickly made the .csv file obsolete. The summaries and descriptions were not translated and were hard to modify. We used the pre-0.6 format AppData files, as the MetaInfo specification had not existed when this stuff was hacked up just in time for Fedora 20.

I’ve spent the better part of today making this a lot more sane, but in the process I’m going to need a bit of help from packagers in Fedora, and maybe even helpful upstreams. These are my notes on what I’ve got so far:

Font components are supersets of font faces, so we’d include fonts together that make a cohesive set, for instance, “SourceCode” would consist of “SourceCodePro“, “SourceSansPro-Regular“ and “SourceSansPro-ExtraLight“. This is so the user can press one button and get a set of fonts, rather than having to install something new when they’re in an application designing something. Font components need a one-line summary for GNOME Software and optionally a long description. The icon and screenshots are automatically generated.

So, what do you need to do if you maintain a package with a single font, or where all the fonts are shipped in the same (sub)package? Simply ship a file like this as /usr/share/appdata/Liberation.metainfo.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name <you@domain> -->
<component type="font">
  <id>Liberation</id>
  <metadata_license>CC0-1.0</metadata_license>
  <name>Liberation</name>
  <summary>Open source versions of several commercial fonts</summary>
  <description>
    <p>
      The Liberation Fonts are intended to be replacements for Times New Roman,
      Arial, and Courier New.
    </p>
  </description>
  <update_contact>richard_at_hughsie_dot_com</update_contact>
  <url type="homepage">http://fedorahosted.org/liberation-fonts/</url>
</component>

There can be up to 3 paragraphs of description, and the summary has to be just one line. Try to avoid too much technical content here; this is designed to be shown to end users, who probably don’t know what TTF means or what MSCoreFonts are.

It’s a little trickier when there are multiple source tarballs for a font component, or when the font is split into subpackages by a packager. In this case, each subpackage needs to ship something like this as /usr/share/appdata/LiberationSerif.metainfo.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name <you@domain> -->
<component type="font">
  <id>LiberationSerif</id>
  <metadata_license>CC0-1.0</metadata_license>
  <extends>Liberation</extends>
</component>

This won’t end up in the final metadata (or be visible) in the software center, but it will tell the metadata extractor that LiberationSerif should be merged into the Liberation component. All the automatically generated screenshots will be moved to the right place too.
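To make the merge behaviour concrete, here’s a toy Python sketch of the grouping logic (this is not the actual AppStream extractor code, just an illustration of how the <id> and <extends> pairs above collapse into components):

import glob
import xml.etree.ElementTree as ET

# Toy model of the merge: a file with <extends> gets folded into the
# component it names; a file with its own full metadata defines the
# component itself.
components = {}

for path in glob.glob("/usr/share/appdata/*.metainfo.xml"):
    root = ET.parse(path).getroot()
    cid = root.findtext("id")
    extends = root.findtext("extends")
    if extends:
        # e.g. LiberationSerif is merged into the Liberation component
        components.setdefault(extends, []).append(cid)
    else:
        components.setdefault(cid, [])

for parent, faces in sorted(components.items()):
    print("%s <- %s" % (parent, ", ".join(faces) or "(no subpackage faces)"))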

Moving the metadata to font packages makes the process much more transparent, letting packagers write their own descriptions and actually influence how things show up in the software center. I’m happy to push some of my existing content from the .csv file upstream.

These MetaInfo files are not supposed to replace the existing fontconfig files, nor do I think they should be merged into one file or format. If your package just contains one font used internally, or where there is only partial coverage of the alphabet, I don’t think we want to show this in GNOME Software, and thus it doesn’t need any new MetaInfo files.

October 14, 2014

Blenderart Mag Issue #45 now available

Welcome to Issue #45, “Cycles Circus”

Come jump on the Cycles Circus merry-go-round with us as we not only explore some fun features of Cycles, but play on a comet and meet the geniuses of Ray and Clovis.

So grab your copy today. Also be sure to check out our gallery of wonderful images submitted by very talented members of our community.

Table of Contents: 

  • Quick Comet Animation
  • Book Review: Cycles Materials and Textures
  • Baby Elephant
  • Ray and Clovis

And Lots More…

October 12, 2014

Synfig Studio 0.64.2 - Release Candidate #2

The second release candidate of upcoming Synfig Studio 0.64.2 is available for download now....

October 11, 2014

Railroading exponentially

or: Smart communities can still be stupid

I attended my first Los Alamos County Council meeting yesterday. What a railroad job!

The controversial issue of the day was the town's "branding". Currently, as you drive into Los Alamos on highway 502, you pass a tasteful rock sign proclaiming "LOS ALAMOS: WHERE DISCOVERIES ARE MADE". But back in May, the county council announced the unanimous approval of a new slogan, for which they'd paid an ad agency some $55,000: "LIVE EXPONENTIALLY".

As you might expect in a town full of scientists, the announcement was greeted with much dismay. What is it supposed to mean, anyway? Is it a reference to exponential population growth? Malignant tumor growth? Gaining lots of weight as we age?

The local online daily, tired of printing the flood of letters protesting the stupid new slogan, ran a survey about the "Live Exponentially" slogan. The results were that 8.24% liked it, 72.61% didn't, and 19.16% didn't like it and offered alternatives or comments. My favorites were Dave's suggestion of "It's Da Bomb!", and a suggestion from another reader, "Discover Our Secrets"; but many of the alternate suggestions were excellent, or hilarious, or both -- follow the link to read them all.

For further giggles, try a web search on the term. If you search without quotes, Ebola tops the list. With quotes, you get mostly religious tracts and motivational speakers.

The Council Meeting

(The rest of this is probably only of interest to Los Alamos folk.)

Dave read somewhere -- it wasn't widely announced -- that Friday's council meeting included an agenda item to approve spending $225,000 -- yes, nearly a quarter of a million dollars -- on "brand implementation". Of course, we had to go.

In the council discussion leading up to the call for public comment, everyone spoke vaguely of "branding" without mentioning the slogan. Maybe they hoped no one would realize what they were really voting for. But in the call for public comment, Dave raised the issue and urged them to reconsider the slogan.

Kristin Henderson seemed to have quite a speech prepared. She acknowledged that "people who work with math" universally thought the slogan was stupid, but she said that people from a liberal arts background, like herself, use the term to mean hiking, living close to nature, listening to great music, having smart friends and all the other things that make this such a great place to live. (I confess to being skeptical -- I can't say I've ever heard "exponential" used in that way.)

Henderson also stressed the research and effort that had already gone into choosing the current slogan, and dismissed the idea that spending another $50,000 on top of the $55k already spent would be "throwing money after bad." She added that showing the community some images to go with the slogan might change people's minds.

David Izraelevitz admitted that being an engineer, he initially didn't like "Live Exponentially". But he compared it to Apple's "Think Different": though some might think it ungrammatical, it turned out to be a highly successful brand because it was coupled with pictures of Gandhi and Einstein. (Hmm, maybe that slogan should be "Live Exponential".)

Izraelevitz described how he convinced a local business owner by showing him the ad agency's full presentation, with pictures as well as the slogan, and said that we wouldn't know how effective the slogan was until we'd spent the $50k for logo design and an implementation plan. If the council didn't like the results they could choose not to go forward with the remaining $175,000 for "brand implementation". (Councilor Fran Berting had previously gotten clarification that those two parts of the proposal were separate.)

Rick Reiss said that what really mattered was getting business owners to approve the new branding -- "the people who would have to use it." It wasn't so important what people in the community thought, since they didn't have logos or ads that might incorporate the new branding.

Pete Sheehey spoke up as the sole dissenter. He pointed out that most of the community input on the slogan has been negative, and that should be taken into account. The proposed slogan might have a positive impact on some people but it would have a negative impact on others, and he couldn't support the proposal.

Fran Berting said she was "not all that taken" with the slogan, but agreed with Izraelevitz that we wouldn't know if it was any good without spending the $50k. She echoed the "so much work has already gone into it" argument. Reiss also echoed "so much work", and said he liked the slogan because he saw it in print with a picture.

But further discussion was cut off. It was 1:30, the fixed end time for the meeting, and chairman Geoff Rodgers (who had pretty much stayed out of the discussion to this point) called for a vote. When the roll call got to Sheehey, he objected to the forced vote while they were still in the middle of a discussion. But after a brief consultation on Robert's Rules of Order, chairman Rodgers declared the discussion over and said the vote would continue. The motion was approved 5-1.

The Exponential Railroad

Quite a railroading. One could almost think it had been planned that way.

First, the item was listed as one of two in the "Consent Agenda" -- items which were expected to be approved all together in one vote with no discussion or public comment. It was moved at the last minute into "Business"; but that put it last on the agenda.

Normally that wouldn't have mattered. But although the council more often meets in the evenings and goes as long as it needs to, Friday's meeting had a fixed time of noon to 1:30. Even I could see that wasn't much time for all the items on the agenda.

And that mid-day timing meant that working folk weren't likely to be able to listen or comment. Further, the branding issue didn't come up until 1 pm, after some of the audience had already left to go back to work. As a result, there were only two public comments.

Logic deficit

I heard three main arguments repeated by every council member who spoke in favor:

  1. the slogan makes much more sense when viewed with pictures -- they all voted for it because they'd seen it presented with visuals;
  2. a lot of time, effort and money has already gone into this slogan, so it didn't make sense to drop it now; and
  3. if they didn't like the logo after spending the first $50k, they didn't have to approve the other $175k.

The first argument doesn't make any sense. If the pictures the council saw were so convincing, why weren't they showing those images to the public? Why spend an additional $50,000 for different pictures? I guess $50k is just pocket change, and anyone who thinks it's a lot of money is just being silly.

As for the second and third, they contradict each other. If most of the board thinks now that the initial $50k contract was so much work that we have to go forward with the next $50k, what are the chances that they'll decide not to continue after they've already invested $100k?

Exponentially low, I'd say.

I was glad of one thing, though. As a newcomer to the area faced with a ballot next month, it was good to see the council members in action, seeing their attitudes toward spending and how much they care about community input. That will be helpful come ballot time.

If you're in the same boat but couldn't make the meeting, catch the October 10, 2014 County Council Meeting video.

And now for some hardware (Onda v975w)

Prodded by Adam Williamson's fedlet work, and by my inability to get an Android phone to display anything, I bought an x86 tablet.

At first, I was more interested in buying a brand-name one, such as the Dell Venue 8 Pro Adam has, or the Lenovo Miix 2 that Benjamin Tissoires doesn't seem to get enough time to hack on. But those tablets are all around 300€ at most local retailers, and have smaller 7- or 8-inch screens.

So I bought a "not exported out of China" tablet, the 10" Onda v975w. The prospect of getting a no-name tablet scared me a little. Would it be as "good" (read: bad) as a PadMini or an Action Pad?


Vrrrroooom.


Well, the hardware's pretty decent, and feels rather solid. There's a small amount of light leakage on the side of the touchscreen, but not something too noticeable. I wish it had a button on the bezel to mimic the Windows button on some other tablets, but the edge gestures should replace it nicely.

The screen is pretty gorgeous and its high DPI triggers the eponymous mode in GNOME.

With help of various folks (Larry Finger, and the aforementioned Benjamin and Adam), I got the tablet to a state where I could use it to replace my force-obsoleted iPad 1 to read comic books.

I've put up a wiki page with the status of hardware/kernel support. It doesn't contain all my notes just yet (sound is working, touchscreen will work very very soon, and various "basic" features are being worked on).

I'll be putting up the fixed-up Wi-Fi driver and more instructions about installation on the Wiki page.

And if you want to make the jump, the tablets are available at $150 plus postage from Aliexpress.

Update: On Google+ and in comments of this blog, it was pointed out that the seller on Aliexpress was trying to scam people. All my apologies, I just selected the cheapest from this website. I personally bought it on Amazon.fr using NewTec24 FR as the vendor.

October 09, 2014

A bit about taking pictures

Though I like going out and taking pictures at the places I visit, I haven’t actually blogged about taking pictures before. I thought I should share some tips and experiences.

This is not a “What’s in my bag” kind of post. I won’t, and can’t, tell you what the best cameras or lenses are. I simply don’t know. These are some things I’ve learnt and that have worked for me and my style of taking pictures, and wish I knew earlier on.

Pack

Keep gear light and compact, and focus on what you have. You will often bring more than you need. If you get the basics sorted out, you don’t need much to take a good picture. Identify a couple of lenses you like using and get to know their qualities and limits.

Your big lenses aren’t going to do you any good if you’re reluctant to take them with you. Accept that your stuff is going to take a beating. I used to obsess over scratches on my gear, I don’t anymore.

I don’t keep a special bag. I wrap my camera in a hat or hoody and lenses in thick socks and toss them into my rucksack. (Actually, this is one tip you might want to ignore.)

Watch out for gear creep. It’s tempting to wait until that new lens comes out and get it. Ask yourself: will this make me go out and shoot more? The answer usually is probably not, and the money is often better spent on that trip to take those nice shots with the stuff you already have.

Learn

Try some old manual lenses to learn with. Not only are these cheap and able to produce excellent image quality, they’re also a great way to learn how aperture, shutter speed, and sensitivity affect exposure. That’s essential for getting the results you want.

I only started understanding this after having inherited some old lenses and playing around with them. The fact that they’re all manual makes you realise more quickly how things physically change inside the camera when you modify a setting, compared to looking at abstract numbers on the back of the screen. I find them much more engaging and fun to use than fully automatic lenses.

You can get M42 lens adapters for almost any camera type, but they work especially well with mirrorless cameras. Here’s a list of the Asahi Takumar (old Pentax) series of lenses, which has some gems. You can pick them up off eBay for just a few tenners.

My favourites are the SMC 55mm f/1.8 and SMC 50mm f/1.4. They produce lovely creamy bokeh and great in-focus sharpness at the same time.

See

A nice side effect of having a camera on you is that you look at the world differently. Crouch. Climb on things. Lean against walls. Get unique points of view (but be careful!). Annoy your friends because you need to take a bit more time photographing that beetle.

Some shots you take might be considered dumb luck. However, it’s up to you to increase your chances of “being lucky”. You might get lucky wandering around through that park, but you know you certainly won’t be when you just sit at home reading the web about camera performance.

Don’t worry about the execution too much. The important bit is that your picture conveys a feeling. Some things can be fixed in post-production. You can’t fix things like focus or motion blur afterwards, but even these are details and not getting them exactly right won’t mean your picture will be bad.

Don’t compare

Even professional photographers take bad pictures. You never see the shots that didn’t make it. Being a good photographer is as much about being a good editor. The very best still take crappy shots sometimes, and alright shots most of the time. You just don’t see the bad ones.

Ask people you think are great photographers to point out something they’re unhappy about in that amazing picture they took. Chances are they will point out several flaws that you weren’t even aware of.

Share

Don’t forget to have a place to actually post your images. Flickr or Instagram are fine for this. We want to see your work! Even if it’s not perfect in your eyes. Do your own thing. You have your own style.

Go

I hope that was helpful. Now stop reading and don’t worry too much. Get out there and have fun. Shoot!

What's nesting in our truck's engine?

We park the Rav4 outside, under an overhang. A few weeks ago, we raised the hood to check the oil before heading out on an adventure, and discovered a nest of sticks and grass wedged in above the valve cover. (Sorry, no photos -- we were in a hurry to be off and I didn't think to grab the camera.)

Pack rats were the obvious culprits, of course. There are lots of them around, and we've caught quite a few pack rats in our live traps. Knowing that rodents can be a problem since they like to chew through hoses and wiring, we decided we'd better keep an eye on the Rav and maybe investigate some sort of rodent-repelling technology.

Sunday, we got back from another adventure, parked the Rav in its usual place, went inside to unload before heading out for an evening walk, and when we came back out, there was a small flock of birds hanging around under the Rav. Towhees! Not only hanging around under the still-warm engine, but several times we actually saw one fly between the tires and disappear.

Could towhees really be our engine nest builders? And why would they be nesting in fall, with the days getting shorter and colder?

I'm keeping an eye on that engine compartment now, checking every few days. There are still a few sticks and juniper sprigs in there, but no real nest has reappeared so far. If it does, I'll post a photo.

October 08, 2014

Wed 2014/Oct/08

  • Growstuff's Crowdfunding Campaign for an API for Open Food Data

    During GUADEC 2012, Alex Skud Bailey gave a keynote titled What's Next? From Open Source to Open Everything. It was about how principles like de-centralization, piecemeal growth, and shared knowledge are being applied in many areas, not just software development. I was delighted to listen to such a keynote, which validated my own talk from that year, GNOME and the Systems of Free Infrastructure.

    During the hallway track I had the chance to talk to Skud. She is an avid knitter and was telling me about Ravelry, a web site for people who knit/crochet. They have an excellent database of knitting patterns, a yarn database, and all sorts of deep knowledge on the craft gathered over the years.

    At that time I was starting my vegetable garden at home. It turned out that Skud is also an avid gardener. We ended up talking about how it would be nice to have a site like Ravelry, but for small-scale food gardeners. You would be able to track your own crops, but also consult about the best times to plant and harvest certain species. You would be able to say how well a certain variety did in your location and climate. Over time, by aggregating people's data, we would be able to compile a free database of crop data, local varieties, and climate information.

    Growstuff begins

    [Growstuff]

    Skud started coding Growstuff from scratch. I had never seen a project start from zero-lines-of-code, and be run in an agile fashion, for absolutely everything, and I must say: I am very impressed!

    Every single feature runs through the same process: definition of a story, pair programming, integration. Newbies are encouraged to participate. They pair up with a more experienced developer, and they get mentored.

    They did that even for the very basic skeleton of the web site: in the beginning there were stories for "the web site should display a footer with links to About and the FAQ", and "the web site should have a login form". I used to think that in order to have a collaboratively-developed project, one had to start with at least a basic skeleton, or a working prototype — Growstuff proved me wrong. By having a friendly, mentoring environment with a well-defined process, you can start from zero-lines-of-code and get excellent results quickly. The site has been fully operational for a couple of years now, and it is a great place to be.

    Growstuff is about the friendliest project I have seen.

    Local crop data

    [Tomato heirloom varieties]

    I learned the basics of gardening from a couple of "classic" books: the 1970s books by John Seymour which my mom had kept around, and How to Grow More Vegetables, by John Jeavons. These are nominally excellent — they teach you how to double-dig to loosen the soil and keep the topsoil, how to transplant fragile seedlings so you don't damage them, how to do crop rotation.

    However, their recommendations on garden layouts or crop rotations are biased towards the author's location. John Seymour's books are beautifully illustrated, but are about the United Kingdom, where apples and rhubarb may do well, but would be scorched where I live in Mexico. Jeavons's book is biased towards California, which is somewhat closer climate-wise to where I live, but some of the species/varieties he mentions are practically impossible to get here — and, of course, species which are everyday fare here are completely missing in his book. Pity the people outside the tropics, for whom mangoes are a legend from faraway lands.

    The problem is that the books lack knowledge of good crops for wherever you may live. This is the kind of thing that is easily crowdsourced, where "easily" means a Simple Matter Of Programming.

    An API for Open Food Data

    Growstuff has been gathering crop data from people's use of the site. Someone plants spinach. Someone harvests tomatoes. Someone puts out seeds for trade. The next steps are to populate the site with fine-grained varieties of major crops (e.g. the zillions of varieties of peppers or tomatoes), and to provide an API to access planting information in a convenient way for analysis.

    Right now, Growstuff is running a fundraising campaign to implement this API — allowing developers to work on this full-time, instead of scraping the time out of their "free time" otherwise.

    I encourage you to give money to Growstuff's campaign. These are good people.

    To give you a taste of the non-trivialness of implementing this, I invite you to read Skud's post on interop and unique IDs for food data. This campaign is not just about adding some features to Growstuff; it is about making it possible for open food projects to interoperate. Right now there are various free-culture projects around food production, but little communication between them. This fundraising campaign attempts to solve part of that problem.

    I hope you can contribute to Growstuff's campaign. If you are into local food production, local economies, crowdsourced databases, and that sort of thing — these are your people; help them out.

    Resources for more in-depth awesomeness

October 07, 2014

Synfig Studio 0.64.2 - Release Candidate

The Release Candidate of new bugfix release is available for download!...

October 03, 2014

Thu 2014/Oct/02

  • Announcing the safety-list

    I'm happy to announce that we now have a safety-list mailing list. This is for discussions around safety, privacy, and security.

    This is some introductory material which you may have already read:

    Everyone is welcome to join! The list's web page is here: https://mail.gnome.org/mailman/listinfo/safety-list

    Thanks to the sysadmin team for their quick response in creating this list!

  • Talleres Libres gets a Sewing workshop

    Since a month ago, when I broke my collarbone after flying over the handlebars, I've been incapacitated in the bicycling and woodworking departments. So, I've been learning to sew. Oralia introduced me to her sewing machine, and I've been looking at leatherworking videos.

    The project: bicycle luggage — bike panniers, which are hard to get in my town.

    [First prototype of bike panniers]

    Those are a work-in-progress of a pair of small panniers for Luciana's small bike. I still have to add strips of reinforcing leather on all seams, flaps to close the bags, and belts for mounting on the bike's luggage rack.

    I'm still at the "I have no idea what I'm doing" stage. When I get to the point of knowing what I'm doing, I'll post patterns/instructions.

October 02, 2014

Photographing a double rainbow

[double rainbow]

The wonderful summer thunderstorm season here seems to have died down. But while it lasted, we had some spectacular double rainbows. And I kept feeling frustrated when I took the SLR outside only to find that my 18-55mm kit lens was nowhere near wide enough to capture it. I could try stitching it together as a panorama, but panoramas of rainbows turn out to be quite difficult -- there are no clean edges in the photo to tell you where to join one image to the next, and automated programs like Hugin won't even try.

There are plenty of other beautiful vistas here too -- cloudscapes, mesas, stars. Clearly, it was time to invest in a wide-angle lens. But how wide would it need to be to capture a double rainbow?

All over the web you can find out that a rainbow has a radius of 42 degrees, so you need a lens that covers 84 degrees to get the whole thing.

But what about a double rainbow? My web searches came to naught. Lots of pages talk about double rainbows, but Google wasn't finding anything that would tell me the angle.

I eventually gave up on the web and went to my physical bookshelf, where Color and Light in Nature gave me a nice table of primary and secondary rainbow angles for various wavelengths of light. It turns out that the 42 degrees everybody quotes is for light of 600 nm wavelength, a blue-green or cyan color. At that wavelength, the primary angle is 42.0° and the secondary angle is 51.0°.

Armed with that information, I went back to Google and searched for double rainbow 51 OR 102 angle and found a nice Slate article on a Double rainbow and lightning photo. The photo in the article, while lovely (lightning and a double rainbow in the South Dakota badlands), only shows a tiny piece of the rainbow, not the whole one I'm hoping to capture; but the article does mention the 51-degree angle.

Okay, so 51°×2 captures both bows in cyan light. But what about other wavelengths? A typical eye can see from about 400 nm (deep purple) to about 760 nm (deep red). From the table in the book:

Wavelength (nm)   Primary   Secondary
400               40.5°     53.7°
600               42.0°     51.0°
700               42.4°     50.3°

Notice that while the primary angles get smaller with shorter wavelengths, the secondary angles go the other way. That makes sense if you remember that the outer rainbow has its colors reversed from the inner one: red is on the outside of the primary bow, but the inside of the secondary one.

So if I want to photograph a complete double rainbow in one shot, I need a lens that can cover at least 108 degrees.

What focal length lens does that translate to? Howard's Astronomical Adventures has a nice focal length calculator. If I look up my Rebel XSi on Wikipedia to find out that other countries call it a 450D, and plug that into the calculator, then try various focal lengths (the calculator offers a chart but it didn't work for me), it turns out that I need an 8mm lens, which will give me a 108° 26′ 46″ field of view -- just about right.
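If you'd rather check the math than trust the calculator, the formula is simple; here's a quick Python sketch (assuming the calculator's rectilinear/pinhole lens model and the 450D's 22.2 mm sensor width):

import math

SENSOR_WIDTH_MM = 22.2  # Canon 450D / Rebel XSi APS-C sensor

def horizontal_fov_degrees(focal_length_mm):
    # Rectilinear lens: fov = 2 * atan(sensor_width / (2 * focal_length))
    return math.degrees(2 * math.atan(SENSOR_WIDTH_MM / (2.0 * focal_length_mm)))

for f in (18, 10, 8):
    print("%2d mm -> %5.1f degrees" % (f, horizontal_fov_degrees(f)))
# 18 mm ->  63.3 degrees  (the kit lens at its widest)
# 10 mm ->  96.0 degrees
#  8 mm -> 108.4 degrees  (just over the 108 degrees needed)

Note that fisheye lenses don't follow the rectilinear model, which, as it turns out below, is why the real lens ended up far wider.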

[Double rainbow with the Rokinon 8mm fisheye] So that's what I ordered -- a Rokinon 8mm fisheye. And it turns out to be far wider than I need -- apparently the actual field of view in fisheyes varies widely from lens to lens, and this one claims to have a 180° field. So the focal length calculator isn't all that useful. At any rate, this lens is plenty wide enough to capture those double rainbows, as you can see.

About those books

By the way, that book I linked to earlier is apparently out of print and has become ridiculously expensive. Another excellent book on atmospheric phenomena is Light and Color in the Outdoors by Marcel Minnaert (I actually have his earlier version, titled The Nature of Light and Color in the Open Air). Minnaert doesn't give the useful table of frequencies and angles, but he has lots of other fun and useful information on rainbows and related phenomena, including detailed instructions for making rainbows indoors if you want to measure angles or other quantities yourself.

Bullet used in NASA Tensegrity Robotics Toolkit, book Multithreading for Visual Effects


NASA is using Bullet in their new open source Tensegrity Robotics Toolkit. You can find more information and a video link here: http://bulletphysics.org/Bullet/phpBB3/viewtopic.php?f=17&t=9978

The new book Multithreading for Visual Effects includes a chapter on the OpenCL optimizations for upcoming Bullet 3.x. Other chapters include multithreading development experiences from OpenSubDiv, Houdini, Pixar Presto and Dreamworks Fluids and LibEE. You can get it at the publisher AK Peters/CRC Press or at Amazon.

Development on upcoming Bullet 2.83 and Bullet 3.x is making good progress, hopefully an update follows soon.

Chinese version of Synfig Training Package

Synfig Training Package is available in Chinese language now!...

October 01, 2014

Ivan Mahonin is back

We are happy to announce that our full time developer Ivan Mahonin is back to Synfig development!...


September 30, 2014

GTK+ widget templates now in Javascript

Let's get the features in early!

If you're working on a Javascript application for GNOME, you'll be interested to know that you can now write GTK+ widget templates in gjs.

Many thanks to Giovanni for writing the original patches. And now to a small example:

const Lang = imports.lang;
const Gtk = imports.gi.Gtk;

const MyComplexGtkSubclass = new Lang.Class({
    Name: 'MyComplexGtkSubclass',
    Extends: Gtk.Grid,
    Template: 'resource:///org/gnome/myapp/widget.xml',
    Children: ['label-child'],

    _init: function(params) {
        this.parent(params);

        this._internalLabel = this.get_template_child(MyComplexGtkSubclass,
                                                      'label-child');
    }
});

And now you just need to create your widget:

let content = new MyComplexGtkSubclass();
content._internalLabel.set_label("My updated label");

You'll need gjs from git master to use this feature. And if you see anything that breaks, don't hesitate to file a bug against gjs in the GNOME Bugzilla.

September 29, 2014

New video encoding features

Synfig Studio just got improvements for video encoding. Check out the demonstration video and development snapshots inside....

Shipping larger application icons in Fedora 22

In GNOME 3.14 we show any valid application in the software center as long as it ships an application icon of 32×32 or larger. Currently a 32×32 icon has to be padded with 16 pixels of whitespace on all 4 edges, and it also has to be scaled 2× to match other UI elements on HiDPI screens. This looks very fuzzy and out of place and lowers the quality of an otherwise beautiful installation experience.

For GNOME 3.16 (Fedora 22) we are planning to increase the minimum icon size to 48×48, with recommended installed sizes of 16×16, 24×24, 32×32, 48×48 and 256×256 (or SVG in some cases). Modern desktop applications typically ship multiple sizes of icons in known locations, and it’s very much the minority of applications that only ship one small icon.

Soon I’m going to start nagging upstream maintainers to install larger icons than 32×32. If you’re re-doing the icon, please generate a 256×256 or 64×64 icon with alpha channel, as the latter will probably be the minimum size for F23 and beyond.
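If your icon exists as one large bitmap, generating the recommended installed sizes is easy to script. Here's a hedged sketch using Python and Pillow (the "myapp" names and paths are made up for the example, and a designer's hand-tuned small sizes will always beat automatic downscaling):

from PIL import Image  # Pillow

SIZES = (16, 24, 32, 48, 256)

# Hypothetical paths; start from your largest artwork, keeping the alpha channel
src = Image.open("myapp-512.png").convert("RGBA")

for size in SIZES:
    icon = src.resize((size, size), Image.ANTIALIAS)
    icon.save("icons/hicolor/%dx%d/apps/myapp.png" % (size, size))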

At the end of November I’ll change the minimum icon size in the AppStream generator used for Fedora so that applications not fixed will be dropped from the metadata. You can of course install the applications manually on the command line, but they won’t be visible in the software center until they are installed.

If you’re unclear on what needs to be done in order to be listed in the AppStream metadata, refer to the guidelines or send me email.

September 28, 2014

Switching jobs

Today was my first day at Red Hat! This has been a public service announcement.

London Zoo photos

Visited the London Zoo for the first time and took a few photos.

Trip to Nuremberg and Munich

This month I visited my friend and colleague Garrett in Germany. We visited the Christmas markets there. Lots of fun. Here are some pictures.

Petroglyphs, ancient and modern

In the canyons below White Rock there are many wonderful petroglyphs, some dating back many centuries, like this jaguar: [jaguar petroglyph in White Rock Canyon]

as well as collections like these:
[pictographs] [petroglyph collection]

Of course, to see them you have to negotiate a trail down the basalt cliff face. [Red Dot trail]

Up the hill in Los Alamos there are petroglyphs too, on trails that are a bit more accessible ... but I suspect they're not nearly so old. [petroglyph face]

Getting Around in GIMP - Luminosity Masks Revisited


Brorfelde landscape by Stig Nygaard (cb)
After adding an aggressive curve along with a mid-tone luminosity mask.

I had previously written about adapting Tony Kuyper’s Luminosity Masks for GIMP. I won’t re-hash all of the details and theory here (just head back over to that post and brush up on them there); rather, I’d like to re-visit them using channels, and specifically to have another look at using the mid-tones mask to give a little pop to images.

The rest of my GIMP tutorials can be found here:
Getting Around in GIMP
Original tutorial on Luminosity Masks:
Getting Around in GIMP - Luminosity Masks





Let’s Build Some Luminosity Masks!

The way I had approached building the luminosity masks previously was to create them as a function of layer blending modes. In this re-visit, I’d like to build them from selection sets in the Channels tab of GIMP.

For the Impatient:
I’ve also written a Script-Fu that automates the creation of these channels, mimicking the steps below.

Download from: Google Drive

Download from: GIMP Registry (registry.gimp.org)

Once installed, you’ll find it under:
Filters → Generic → Luminosity Masks (patdavid)
[Update]
Yet another reason to love open-source - Saul Goode over at this post on GimpChat updated my script to run faster and cleaner.
You can get a copy of his version at the same Registry link above.
(Saul’s a bit of a Script-Fu guru, so it’s always worth seeing what he’s up to!)


We’ll start off in a similar way as we did previously.

Duplicate your base image

Either through the menus, or by Right-Clicking on the layer in the Layer Dialog:
Layer → Duplicate Layer
Pat David GIMP Luminosity Mask Tutorial Duplicate Layer

Desaturate the Duplicated Layer

Now desaturate the duplicated layer. I use Luminosity to desaturate:
Colors → Desaturate…

Pat David GIMP Luminosity Mask Tutorial Desaturate Layer

This desaturated copy of your color image represents the “Lights” channel. What we want to do is to create a new channel based on this layer.

Create a New Channel “Lights”

The easiest way to do this is to go to your Channels Dialog.

If you don’t see it, you can open it by going to:
Windows → Dockable Dialogs → Channels

Pat David GIMP Luminosity Mask Tutorial Channels Dialog
The Channels dialog

On the top half of this window you’ll see an entry for each channel in your image (Red, Green, Blue, and Alpha). On the bottom will be a list of any channels you have previously defined.

To create a new channel that will become your “Lights” channel, drag any one of the RGB channels down to the lower window (it doesn’t matter which - they all have the same data due to the desaturation operation).

Now rename this channel to something meaningful (like “L” for instance!), by double-clicking on its name (in my case it’s called “Blue Channel Copy”) and entering a new one.

This now gives us our “Lights” channel, L :

Pat David GIMP Luminosity Mask Tutorial L Channel

Now that we have the “Lights” channel created, we can use it to create its inverse, the “Darks” channel...

Create a New Channel “Darks”

To create the “Darks” channel, it helps to realize that it should be the inverse of the “Lights” channel. We can get this selection through a few simple operations.

We are going to basically select the entire image, then subtract the “Lights” channel from it. What is left should be our new “Darks” channel.

Select the Entire Image

First, have the entire image selected:
Select → All

Remember, you should be seeing the “marching ants” around your selection - in this case the entire image.

Subtract the “Lights” Channel

With the entire image selected, now we just have to subtract the “Lights” channel. In the Channels dialog, just Right-Click on the “Lights” channel, and choose “Subtract from Selection”:

Pat David GIMP Luminosity Mask Tutorial L Channel Subtract

You’ll now see a new selection on your image. This selection represents the inverse of the “Lights” channel...

Create a New “Darks” Channel from the Selection

Now we just need to save the current selection to a new channel (which we’ll call... Darks!). To save the current selection to a channel, we can just use:
Select → Save to Channel

This will create a new channel in the Channel dialog (probably named “Selection Mask copy”). To give it a better name, just Double-Click on the name to rename it. Let’s choose something exciting, like “D”!

More Darker!

At this point, you’ll have a “Lights” and a “Darks” channel. If you wanted to create some channels that target darker and darker regions of the image, you can subtract the “Lights” channel again (this time from the current selection, “Darks”, as opposed to the entire image).

Once you’ve subtracted the “Lights” channel again, don’t forget to save the selection to a new channel (and name it appropriately - I like to name subsequent masks things like, “DD”, in this case - if I subtracted again, I’d call the next one “DDD” and so on…).

I’ll usually make 3 levels of “Darks” channels, D, DD, and DDD:

Pat David GIMP Luminosity Mask Tutorial Darks Channels
Three levels of Dark masks created.

Here’s what the final three different channels of darks look like:

Pat David GIMP Luminosity Mask Tutorial All Darks Channels
The D, DD, and DDD channels

Lighter Lights

At this point we have one “Lights” channel, and three “Darks” channels. Now we can go ahead and create two more “Lights” channels, to target lighter and lighter tones.

The process is identical to creating the darker channels, just in reverse.

Lights Channel to Selection

To get started, activate the “Lights” channel as a selection:

Pat David GIMP Luminosity Mask Tutorial L Channel Activate

With the “Lights” channel as a selection, now all we have to do is subtract the “Darks” channel from it, then save that selection as a new channel (which will become our “LL” channel, and so on…).

Pat David GIMP Luminosity Mask Tutorial Subtract D Channel
Subtracting the D channel from the L selection

To get an even lighter channel, you can subtract D one more time from the selection so far as well.

Here are what the three channels look like, starting with L up to LLL:

Pat David GIMP Luminosity Mask Tutorial All Lights Channels
The L, LL, and LLL channels

Mid Tones Channels

By this point, we’ve got 6 new channels now, three each for light and dark tones:

Pat David GIMP Luminosity Mask Tutorial L+D Channels

Now we can generate our mid-tone channels from these.

The concept of generating the mid-tones is relatively simple - we’re just going to intersect the dark and light channels; what’s left are the midtones.

Intersecting Channels for Midtones

To get started, first select the “L” channel, and set it to the current selection (just like above). Right-Click → Channel to Selection.

Then, Right-Click on the “D” channel, and choose “Intersect with Selection”.

You likely won’t see any selection active on your image, but it’s there, I promise. Now as before, just save the selection to a channel:
Select → Save to Channel

Give it a neat name. Sayyy, “M”? :)

You can repeat for each of the other levels, creating an MM and MMM if you’d like.

Now remember, the mid-tones channels are intended to isolate mid values as a mask, so they can look a little strange at first glance. Here’s what the basic mid-tones mask looks like:

Pat David GIMP Luminosity Mask Tutorial Mid Channel
Basic Mid-tones channel

Remember, black tones in this mask represent full transparency to the layer below, while white represents full opacity of the associated layer.
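If you’d rather script all of the above than drag channels around by hand, here is a minimal Python-Fu sketch of the same select/subtract/intersect logic, written against the GIMP 2.8 PDB (the Script-Fu linked earlier is the maintained implementation; this just builds the basic L, D, and M channels):

#!/usr/bin/env python
# A sketch of the channel-building steps above, using the GIMP 2.8 PDB.
from gimpfu import *

def build_basic_luminosity_channels(image, drawable):
    pdb.gimp_image_undo_group_start(image)

    # Duplicate the base layer and desaturate it by luminosity
    lum = pdb.gimp_layer_copy(image.active_layer, False)
    pdb.gimp_image_insert_layer(image, lum, None, -1)
    pdb.gimp_desaturate_full(lum, DESATURATE_LUMINOSITY)

    # "Lights": copy any RGB component into a new channel -- after
    # desaturation they all hold the same data
    L = pdb.gimp_channel_new_from_component(image, RED_CHANNEL, "L")
    pdb.gimp_image_insert_channel(image, L, None, 0)

    # "Darks": select all, subtract L, save the selection as a channel
    pdb.gimp_selection_all(image)
    pdb.gimp_image_select_item(image, CHANNEL_OP_SUBTRACT, L)
    D = pdb.gimp_selection_save(image)
    D.name = "D"

    # "Mids": intersect the L and D selections, save as a channel
    pdb.gimp_image_select_item(image, CHANNEL_OP_REPLACE, L)
    pdb.gimp_image_select_item(image, CHANNEL_OP_INTERSECT, D)
    M = pdb.gimp_selection_save(image)
    M.name = "M"

    # Clean up the helper layer and the active selection
    pdb.gimp_image_remove_layer(image, lum)
    pdb.gimp_selection_none(image)
    pdb.gimp_image_undo_group_end(image)

register("python-fu-luminosity-channels-sketch",
         "Luminosity mask channels (sketch)",
         "Create basic L, D and M luminosity channels",
         "patdavid (sketch)", "CC-BY-SA", "2014",
         "<Image>/Filters/Generic/Luminosity Channels (sketch)...",
         "RGB*", [], [],
         build_basic_luminosity_channels)

main()

Subtracting or intersecting again for the DD/LL/MM variants follows the exact same pattern.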


Using the Masks

The basic idea behind creating these channels is that you can now mask particular tonal ranges in your images, and the mask will be self-feathering (due to how we created them). So we can now isolate specific tones in the image for manipulation.

Previously, I had shown how this could be used to do some simple split-toning of an image. In that case I worked on a B&W image and tinted it. Here I’ll do the same with the image we’ve been working on so far...

Split Toning

Using the image I’ve been working through so far, we have the base layer to start with:

Pat David GIMP Luminosity Mask Tutorial Split Tone Base

Create Duplicates

We are going to want two duplicates of this base layer. One to tone the lighter values, and another to tone the darker ones. We’ll start by considering the dark tones first. Duplicate the base layer:
Layer → Duplicate Layer

Then rename the copy something descriptive. In my example, I’ll call this layer “Dark” (original, I know):

Pat David GIMP Luminosity Mask Tutorial Split Tone Darks

Add a Mask

Now we can add a layer mask to this layer. You can either Right-Click the layer, and choose “Add Layer Mask”, or you can go through the menus:
Layer → Mask → Add Layer Mask

You’ll then be presented with options about how to initialize the mask. You’ll want to Initialize Layer Mask to: “Channel”, then choose one of your luminosity masks from the drop-down. In my case, I’ll use the DD mask we previously made:

Pat David GIMP Luminosity Mask Tutorial Add Layer Mask Split Tone

Adjust the Layer

Pat David GIMP Luminosity Mask Tutorial Split Tone Activate DD Mask
Now you’ll have a Dark layer with a DD mask that will restrict any modification you do to this layer to only apply to the darker tones.

Make sure you select the layer, and not its mask, by clicking on it (you’ll see a white outline around the active layer). Otherwise any operations you do may accidentally get applied to the mask instead of the layer.


At this point, we now want to modify the colors of this layer in some way. There are literally endless ways to approach this, bounded only by your creativity and imagination. For this example, we are going to tone the image with a cool teal/blue color (just like before), which combined with the DD layer mask, will restrict it to modifying only the darker tones.

So I’ll use the Colorize option to tone the entire layer a new color:
Colors → Colorize

To get a Teal-ish color, I’ll pull the Hue slider over to about 200:

Pat David GIMP Luminosity Mask Tutorial Split Tone Colorize

Now, pay attention to what’s happening on your image canvas at this point. Drag the Hue slider around and see how it changes the colors in your image. Especially note that the color shifts will be restricted to the darker tones thanks to the DD mask being used!

To illustrate, mouseover the different hue values in the caption of the image below to change the Hue, and see how it affects the image with the DD mask active:


Mouseover to change Hue to: 0 - 90 - 180 - 270

So after I choose a new Hue of 200 for my layer, I should be seeing this:

Pat David GIMP Luminosity Mask Tutorial Split Tone Dark Tinted

Repeat for Light Tones

Now just repeat the above steps, but this time for the light tones. So duplicate the base layer again, and add a layer mask, but this time try using the LL channel as a mask.

For the lighter tones, I chose a Hue of around 25 instead (more orange-ish than blue):

Pat David GIMP Luminosity Mask Tutorial Split Tone Light Tinted

In the end, here are the results that I achieved:

Pat David GIMP Luminosity Mask Tutorial Split Tone Result
After a quick split-tone (mouseover to compare to original)

The real power here comes from experimentation. I encourage you to try using a different mask to restrict the changes to different areas (try the LLL for instance). You can also adjust the opacity of the layers to modify how strongly the color tones will affect those areas. Play!
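The split-tone can be scripted, too. Here is a minimal Python-Fu sketch of the duplicate/mask/colorize steps above (GIMP 2.8 PDB again; tone_through_channel is just a hypothetical helper name, and it assumes you’ve already saved the DD and LL channels):

from gimpfu import *

def tone_through_channel(image, base_layer, channel, hue, name):
    # Duplicate the base layer and name it
    layer = pdb.gimp_layer_copy(base_layer, False)
    layer.name = name
    pdb.gimp_image_insert_layer(image, layer, None, -1)

    # Initialize a layer mask from the channel
    # (same as Layer -> Mask -> Add Layer Mask, initialized to "Channel")
    pdb.gimp_image_set_active_channel(image, channel)
    mask = pdb.gimp_layer_create_mask(layer, ADD_CHANNEL_MASK)
    pdb.gimp_layer_add_mask(layer, mask)
    pdb.gimp_image_unset_active_channel(image)

    # Colorize the layer; the mask restricts the tint to the channel's tones
    pdb.gimp_colorize(layer, hue, 50, 0)

# Teal (hue 200) into the darks, orange (hue 25) into the lights, e.g.:
#   tone_through_channel(image, base, dd_channel, 200, "Dark")
#   tone_through_channel(image, base, ll_channel, 25, "Light")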

Mid-Tones Masks

The mid-tone masks were very interesting to me. In Tony’s original article, he mentioned how much he loved using them to provide a nice boost to contrast and saturation in the image. Well, he’s right. It certainly does do that! (He also feels that it’s similar to shooting the image on Velvia).

Pat David GIMP Luminosity Mask Tutorial Mid Tones Mask
Let’s have a look.

I’ve deleted the layers from my split-toning exercise above, and am back to just the base image layer again.

To try out the mid-tones mask, we only need to duplicate the base layer, and apply a layer mask to it.

This time I’ll choose the basic mid-tones mask M.


What’s interesting about using this mask is that you can use pretty aggressive curve modifications to it, and still keep the image from blowing up. We are only targeting the mid-tones.

To illustrate, I’m going to apply a fairly aggressive compression to the curves by using Adjust Color Curves:
Colors → Curves

When I say aggressive, here is what I’m referring to:

Pat David GIMP Luminosity Mask Tutorial Aggresive Curve Mid Tone Mask

Here is the effect it has on the image when using the M mid-tones mask:


Aggressive curve with Mid-Tone layer mask
(mouseover to compare to original)

As you can see, there is an increase in contrast across the image, as well as a nice little boost to saturation. You don’t need to worry about blowing out highlights or losing shadow detail, because the mask will not allow you to modify those values.
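If you’re scripting along, the aggressive curve boils down to a single PDB call on the masked layer (again a GIMP 2.8 sketch; these control points are just an example of a steep S-curve, not the exact one pictured):

from gimpfu import *

# Steep S-curve on the value channel; with the M mask in place, the boost
# lands on the mid-tones while the shadow and highlight extremes stay put.
def aggressive_midtone_curve(layer):
    pdb.gimp_curves_spline(layer, HISTOGRAM_VALUE, 8,
                           [0, 0,        # hold the black point
                            64, 30,      # push shadows down
                            192, 225,    # push highlights up
                            255, 255])   # hold the white point
    pdb.gimp_displays_flush()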

More Samples of the Mid-Tone Mask in Use

Pat David GIMP Luminosity Mask Tutorial
Pat David GIMP Luminosity Mask Tutorial
The lede image again, with another aggressive curve applied to a mid-tone masked layer
(mouseover to compare to original)


Pat David GIMP Luminosity Mask Tutorial
Red Tailed Black Cockatoo at f/4 by Debi Dalio on Flickr (used with permission)
(mouseover to compare to original)


Pat David GIMP Luminosity Mask Tutorial
Landscape Ballon by Lennart Tange on Flickr (cb)
(mouseover to compare to original)


Pat David GIMP Luminosity Mask Tutorial
Landscapes by Tom Hannigan on Flickr (cb)
(mouseover to compare to original)



Mixing Films

This is something that I’ve found myself doing quite often. It’s a very powerful method for combining color toning that you may like from different film emulations. Consider what we just walked through.

These masks allow you to target modifications of layers to specific tones of an image. So if you like the saturation of, say, Fuji Velvia in the shadows, but like the upper tones to look similar to Polaroid Polachrome, then these luminosity masks are just what you’re looking for!

Just a little food for thought and experimentation... :)

Stay tuned later in the week where I’ll investigate this idea in a little more depth.

In Conclusion

This is just another tool in our mental toolbox of image manipulation, but it’s a very powerful tool indeed. When considering your images, you can now look at them as a function of luminosity - with a neat and powerful way to isolate and target specific tones for modification.

As always, I encourage you to experiment and play. I’m willing to bet this method finds its way into at least a few people’s workflows in some fashion.

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t, it’s not going to affect my writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.


September 27, 2014

Attending the Vienna GNOME/.NET hackfest

Today I arrived in the always wonderful city of Vienna for the GNOME/.NET Hackfest. Met up and had dinner with the other GNOME and .NET fans.

SparkleShare has been stuck on GTK+2 for a while. Now that the C# bindings for GTK+3 are starting to get ready, and Bindinator is handling any other dependencies that need updating (like WebKit), it is finally time to take the plunge.

My goal this week is to make some good progress on the following things:

  1. Port SparkleShare's user interface to GTK+3.
  2. Integrate SparkleShare seamlessly with the GNOME 3 experience.

SparkleShare 1.2

Yesterday I made a new release of SparkleShare. It addresses several issues that may have been bugging you, so it's worth upgrading. Depending on how well things go this week, it may be the last release based on GNOME 2 technologies. Yay for the future!

SparkleShare 1.0

I’m delighted to announce the availability of SparkleShare 1.0!

What is SparkleShare?

SparkleShare is an Open Source (self-hosted) file synchronisation and collaboration tool, and is available for Linux distributions, Mac, and Windows.

SparkleShare creates a special folder on your computer in which projects are kept. All projects are automatically synced to their respective hosts (you can have multiple projects connected to different hosts) and to your team’s SparkleShare folders when someone adds, removes or edits a file.

The idea for SparkleShare sprouted about three years ago at the GNOME Usability Hackfest in London (for more background on this read The one where the designers ask for a pony).

SparkleShare uses the version control system Git under the hood, so people collaborating on projects can make use of existing infrastructure, and setting up a host yourself will be easy enough. Using your own host gives you more privacy and control, as well as lots of cheap storage space and higher transfer speeds.

Like every piece of software it’s not bug free, even though it has hit 1.0. But it’s been tested for a long time now, and all reproducible and known major issues have been fixed. It works reliably and the issue tracker is mostly filled with feature requests now.

The biggest sign that it was time for a 1.0 release was the fact that Lapo hasn’t reported brokenness for a while now. This can either mean that SparkleShare has been blessed by a unicorn or that the world will end soon. I think it’s the first.

Features

For those of you that are not (that) familiar with SparkleShare, I’ll sum up its most important features:

The SparkleShare folder

This is where all of your projects are kept. Everything in this folder will be automatically synced to the remote host(s), as well as to your other computers and everyone else connected to the same projects. Are you done with a project? Simply delete it from your SparkleShare folder.

The status icon

The status icon gives you quick access to all of your projects and shows you what’s going on regarding the synchronisation process. From here you can connect to existing remote projects and open the recent changes window.

The setup dialog

Here you can link to a remote project. SparkleShare ships with a couple of presets. You can have multiple projects syncing to different hosts at the same time. For example, I use this to sync some public projects with Github, some personal documents with my own private VPS, and work stuff with a host on the intranet.

Recent changes window

The recent changes window shows you everything that has recently changed and by whom.

History

The history view lets you see who has edited a particular file before, and allows you to restore deleted files or revert to a previous version.

Conflict handling

When a file has been changed by two people at the same time and causes a conflict, SparkleShare will create a copy of the conflicting file and add a timestamp. This way changes won’t get accidentally lost, and you can either choose to keep one of the files or cherry-pick the wanted changes.

Notifications

If someone makes a change to a file a notification will pop up saying what changed and by whom.

Client side encryption

Optionally you can protect a project with a password. When you do, all files in it will be encrypted locally using AES-256-CBC before being transferred to the host. The password is only stored locally, so if someone cracked their way into your server it will be very hard (if not impossible) to get the files’ contents. This on top of the file transfer mechanism, which is already encrypted and secure. You can set up an encrypted project easily with Dazzle.
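To illustrate the scheme, here is a Python sketch of password-based AES-256-CBC (this is not SparkleShare's actual C# implementation, and the key-derivation parameters are made up for the example):

import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes, padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def encrypt_blob(password, plaintext):
    # Derive a 256-bit key from the locally stored password
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(hashes.SHA256(), 32, salt, 100000, default_backend())
    key = kdf.derive(password)

    # Pad to the AES block size and encrypt in CBC mode with a random IV
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv),
                       default_backend()).encryptor()

    # Store salt + IV with the ciphertext so the file can be decrypted later
    return salt + iv + encryptor.update(padded) + encryptor.finalize()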

Dazzle, the host setup script

I’ve created a script called Dazzle that helps you set up a Linux host to which you have SSH access. It installs Git, adds a user account and configures the right permissions. With it, you should be able to get up and running by executing just three simple commands.

Plans for the future

Something that comes up a lot is the fact that Git doesn’t handle large (binary) files well. Git also stores a database of all the files, including history, on every client, causing it to use a lot of space pretty quickly. This may or may not be a problem depending on your use case. Nevertheless, I want SparkleShare to be better at the “large backups of bulks of data” use case.

I’ve stumbled upon a nice little project called git-bin in some obscure corner of Github. It seems like a perfect match for SparkleShare. Some work needs to be done to integrate it and to make sure it works over SSH. This will be the goal for SparkleShare 2.0, which can follow pretty soon (hopefully in months, rather than years).

I really hope contributors can help me out in this area. The Github network graph is feeling a bit lonely. Your help can make a big difference!

Some other fun things to work on may be:

  1. Saving the modification times of files (see the sketch after this list)
  2. Creating a binary Linux bundle
  3. SparkleShare folder location selection
  4. GNOME 3 integration
  5. …other things that you may find useful.
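
For item 1, the gist would be to record each file's modification time in a small manifest before a sync and put it back afterwards, since Git itself doesn't preserve mtimes. A rough sketch of the approach, not committed code:

    import json
    import os

    def save_mtimes(root, manifest):
        times = {}
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                times[os.path.relpath(path, root)] = os.stat(path).st_mtime
        with open(manifest, "w") as f:
            json.dump(times, f)

    def restore_mtimes(root, manifest):
        with open(manifest) as f:
            times = json.load(f)
        for rel, mtime in times.items():
            path = os.path.join(root, rel)
            if os.path.exists(path):
                os.utime(path, (mtime, mtime))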

If you want to get started on contributing, feel free to visit the IRC channel (#sparkleshare on irc.gnome.org), where I can answer any questions you may have and offer support.

Finally…

I’d like to thank everyone who has helped with testing and submitted patches so far. SparkleShare wouldn’t be nearly as far along as it is now without you. Cheers!

Vienna GNOME/.NET hackfest report

I had a great time attending the GNOME/.NET hackfest last month in Vienna. My goal for the week was to port SparkleShare's user interface to GTK+3 and integrate with GNOME 3.

A lot of work got done. Many thanks to David and Stefan for enabling this by the smooth organisation of the space, food, and internet. Bertrand, Stephan, and Mirco helped me get set up to build a GTK+3-enabled SparkleShare pretty quickly. The porting work itself was done shortly after that, and I had time left to do a lot of visual polish and behavioural tweaks to the interface. Details matter!

Last week I released SparkleShare 1.3, a Linux-only release that includes all the work done at the hackfest. We're still waiting for the dependencies to be included in the distributions, so the only way you can use it is to build from source yourself for now. Hopefully this will change soon.

One thing that's left to do is to create a gnome-shell extension to integrate SparkleShare into GNOME 3 more seamlessly. Right now it still has to use the message tray area, which is far from optimal. So if you're interested in helping out with that, please let me know.

Tomboy Notes

The rest of the time I helped others with design work. I helped Mirco with the Smuxi preference dialogues, putting my love for the Human Interface Guidelines to use, and started a redesign of Tomboy Notes. Today I sent the new design to their mailing list with the work done so far.

Sadly there wasn't enough time for me to help out with all of the other applications… I guess that's something for next year.

Sponsors

I had a fun week in Vienna (which is always lovely no matter the time of year) and met many great new people. Special thanks to the many sponsors that helped make this event possible: Norkart, Collabora, Novacoast IT, University of Vienna and The GNOME Foundation.

September 26, 2014

Fanart by Anastasia Majzhegisheva – 16

Young Marya Morevna. Artwork by Anastasia Majzhegisheva.

September 25, 2014

AppStream Progress in September

Last time I blogged about AppStream I announced that over 25% of applications in Fedora 21 were shipping the AppData files we needed. I’m pleased to say in the last two months we’ve gone up to 45% of applications in Fedora 22. This is thanks to a lot of work from Ryan and his friends, writing descriptions, taking screenshots and then including them in the fedora-appstream staging repo.

So fedora-appstream doesn’t sound very upstream or awesome. This week I’ve sent another 39 emails, and opened another 42 bugs (requiring 17 new bugzilla/trac/random-forum accounts to be opened). Every single file in the fedora-appstream staging repo has been sent upstream in one form or another, and I’ve been adding an XML comment to each one as a rough audit log of what happened where.

Some have already been accepted upstream and we’re waiting for a new tarball release; when that happens we’ll delete the file from fedora-appstream. Some upstreams are really dead, and have no upstream maintainer, so they’ll probably languish in fedora-appstream until for some reason the package FTBFS and gets removed from the distribution. If the package gets removed, the AppData file will also be deleted from fedora-appstream.

Also, in the process I’ve found lots of applications which ship AppData files upstream, but for one reason or another don’t install them in the binary rpm file. If you had to tell me I was talking nonsense in an email this week, I apologize. For my sins I’ve updated over a dozen packages to the latest versions so the AppData file is included, and fixed quite a few more.

Fedora 22 is on track to be the first release that mandates AppData files for applications. If upstream doesn’t ship one, we can either add it in the Fedora package, or in fedora-appstream.

September 24, 2014

DevAssistant Heuristic Review Part 2: Inventory of Issues

This is Part 2 of a 3-part blog series; this post builds on materials featured in an earlier post called DevAssistant Heuristic Review Part 1: Use Case Walkthroughs.

In this part of the DevAssistant heuristic review, we’ll walk through an itemized list of the issues uncovered by the use case-based walkthrough we did in part 1.

Since this is essentially just a list of issues, let me preface it by explaining how I came up with it. Basically, I combed through the walkthrough and noted any issues, large and small, that were encountered and mentioned in it. The result was a flat list of issues. Next, I went through the list and tried to determine a set of categories to organize them under by grouping together issues that seemed related. (You could do this in a group setting via a technique called “affinity mapping” – another fancy UX term that in essence means writing everything out on post-its and sticking related post-it notes together. Fancy name for playing with sticky pieces of paper :) )
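
(If you prefer code to sticky notes, the grouping step is just bucketing items by a label. A trivial Python illustration, with made-up issue titles:)

    from collections import defaultdict

    issues = [  # (issue, category) pairs pulled from walkthrough notes
        ("Tab names unclear", "Clarity / Context"),
        ("Window jumps around", "Human Factors"),
        ("Vague progress indication", "Status / Transactional"),
    ]

    groups = defaultdict(list)
    for title, category in issues:
        groups[category].append(title)

    for category in sorted(groups):
        print(category)
        for title in groups[category]:
            print("  -", title)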

Breaking the issues into categories serves a few purposes:

  • It makes the list easier to read through and understand, since related issues are adjacent to each other.
  • It breaks up the list so you can review it in chunks – if you’re only interested in a certain area of the UI, for example, you can home in on just the items relevant to you.
  • It tends to expose weak areas in great need of attention – e.g., if you have 5 categories and one of them is attached to a particular screen and the other 4 are more generic, you know that one screen needs attention (and should probably be a high priority for redesign.)

All right, so here is the list!

Base UI Design

These issues apply not to specific dialogs or screens, but more to the basic mechanics of the UI and the mental model it creates for users.

  1. Takes time to get situated: When I first look at this UI, I’m not sure where to get started. Because the first tab says “Create Project” and the second says “Modify Project,” I get the impression that you might progress through the tabs in order from left to right. From poking around, though, this doesn’t appear to be the case. So at first glance, it’s hard for me to understand the overall flow or the direction I’m meant to go through the interface.
  2. Projects created/imported feel very disconnected from DevAssistant: It feels like there is no follow-up once you create or import a project via DevAssistant. I expected to be able to browse a list of the projects I’d imported/created using DevAssistant after doing so, but there doesn’t appear to be any link to them. DevAssistant seems to forget about them or not recognize them afterwards. Sure, they live on the file system, but they may live in all different places, and I may need some reminders / instruction about how to work with them – the filesystem doesn’t provide any context on that front.
  3. Little user guidance after project creation / import: After the user creates a project, all they really get is a green “Done” notification on the setup wizard window. I think there’s a lost opportunity here to guide the user through everything you set up for them so they know how to take advantage of it. Maybe have a little guide (easily dismissed or optional) that walks the user through the options they selected? For example, if they chose the vim option, have a section that activates on the post-project creation screen that talks about how DevAssistant customizes vim and how they can make use of it in their workflow. Basically, nudge the users towards what they need to do next! Offer to open up Eclipse or vim for them? Offer to open up the project in their file manager? Etc. etc.

Clarity / Context

These are issues where the UI isn’t clear about what information it needs or what is happening or why the user would pick a specific option. The cop-out fix for these types of issues is to write a lot of documentation; the right way to fix them is to redesign! If an option is confusing to explain and benefits all users, just turn it on by default if it’s not harmful instead of putting the burden of selecting it on the user. If the pros/cons of a config option aren’t clear, explain them – add some contextual documentation right into the app via tool tips or more clear text in the UI.

  1. Tab names unclear: The names of the tabs across the top don’t give me a good idea of what I can actually do. What does “prepare environment” mean? Looking at the interface, I think that going through one of the wizards under “Create Project,” would prepare an environment for the selected type of project, so why would I click on “Prepare Environment?”
  2. Prepare Environment options confusing: When I look at the options under “Prepare Environment,” I see “Custom Project” (what does this mean vs. the “Custom Task” tab?) and “DevAssistant.” These options don’t help me understand what “Prepare Environment” means. :-/
  3. DevAssistant button under Prepare Environment confusing: Why would I want to set up the environment for DevAssistant and checkout sources? Is the “Dev Assistant” button under “Prepare Environment” meant specifically for developers who work on DevAssistant itself?
  4. Some options could use more explanation / optimization: Some of the options in the dialogs could use more explanation, but they don’t have any tooltips or anything. For example, why would I choose Python 2 vs. Python 3 when creating a new project? What are the pros/cons? How do I take advantage of the customizations offered for vim so I can determine if they’re worth setting up? (Or why wouldn’t I want to set them up? If it doesn’t take up much disk space and it’s a good value-add, why not just do it if I have a vimrc?)
  5. Not sure what “deps only” means: This could be my ignorance not being a real developer, but most if not all of the config dialogs I ran into had a ‘Deps-Only’ option and it’s still unclear to me what that actually means. I get dependencies in the context of yum/RPM, and I get them in the context of specific stacks, but I’m not sure how DevAssistant would generically determine them. Also, what happens if I don’t check off ‘Only install dependencies’ and check off everything else? Do the dependencies get installed or not? If I check off ‘Only install dependencies’ and also check off everything else, does that mean none of the other things I checked off happen because I checked off ‘Only install dependencies?’ The grammar of the string makes it ambiguous and the option itself could use some wordsmithing to be a bit clearer.
  6. What happens if you choose an option that requires something not installed? It’s not clear to me what happens if you pick vim or Eclipse, for example, in one of the options dialogs on a system that does not have them installed. Does the project setup fail, or does it pull in those apps? Maybe DevAssistant could check which development environments it supports that you already have installed and gray out the ones you don’t have installed, with a way to select them while explicitly choosing to install the development environment as well?
  7. Users need appropriate context for the information you’re asking them: There were a few cases, particularly related to connecting to github accounts, where the user is asked for their name and email address. It isn’t clear why this information is being asked for, and how it’s going to be used. For example, when you ask for my name, are you looking for my full name (Máirín Duffy,) just my first name (Máirín,) or my nick (mizmo?) (And can you support fadas? Or should I type “Mairin” instead of “Máirín”?)

Human Factors

This is a bucket for issues that put unnecessary burden or inconvenience on the user. A good example of this in web application design is a three-level-deep JavaScript dropdown menu that disappears if you mouse off of it. :) It makes the user physically have to take action or go through more steps than necessary to complete a task.

  1. Hover help text not easily discovered / pain to access: After a while of poking around, I notice that there are nice explanations of what each button under each tab means in a hover. My initial thought – putting this valuable information under a hover makes it more challenging to access, especially since you can only read the description for one item at a time. (This makes it hard to compare items when you’re trying to figure out which one is the right one for you.)
    Hover tips for create project buttons.

  2. Window jumps: This happens when clicking on buttons in the main setup wizard window and new windows are launched. For example, go to the “Modify Project” tab. Click on Eclipse Import or Vim Setup. It moves the DevAssistant window up and to the right. Click back. The window remains up and to the right. Why did it move the window? I think it should remember where the user placed the window and stay there.
  3. Project directory creation defaults to user’s homedir: I think a lot of users try to keep their home directory neat and orderly – defaulting to creating / importing projects to users’ home directories seems the wrong approach. One thing to try would be to make a devassistant directory during the first run of the application, and defaulting to creating and importing projects to there. Another option, which could be done in conjunction with defaulting to a ~/devassistant directory, could be to ask the user during first run or have a configuration item somewhere so that the user can set their preferred repository directory in one place, rather than every time they create/import a project.
  4. No way to create new project directory in file chooser for project directory selection: In a lot of the specific project creation dialogs, there’s an option to specify a project directory other than the user’s home. However, the file chooser that pops up when you click on “Browse” doesn’t allow you to create a new directory. This makes it more of a hassle (you have to go outside the DevAssistant application) to create a fresh directory to store your projects in.
  5. Holy modal dialogs, Batman! I encountered a few modal dialogs during the process that made interactions with the application a bit awkward. Some examples:
    • There was a very large Java error dialog that was hidden under another window on my desktop, and it made buttons on the main progress/setup window unclickable so I couldn’t dismiss the main window without dismissing the Java error window. (And the Java error window didn’t have any button, not even an ‘X’ in the upper right corner, to dismiss it.) (See Use Case 2 for more details on this specific scenario.)
      This was too long to display fully on my 2560×1440 monitor… the button to close it wasn’t accessible. Luckily I know how to Alt+F4.

    • During the Django setup process (Use Case 1) and during the C project use case (Use Case 2), there was a small modal dialog that popped up a little after the setup process began, asking for permission to install 20 RPM packages. It halted the progress being made – similar to how old Anaconda would pop up little dialogs during the install process. It’s better to ask questions like this up-front.
    • Another time during the Django setup process (Use Case 1), there was a tiny modal dialog that aligned to the upper left corner of the screen. I completely missed it, and this halted the project creation process. (It was a dialog asking for my name, related to Github account connection.)

      (Screenshot: http://blog.linuxgrrl.com/wp-content/uploads/2014/08/Screenshot-from-2014-08-30-215454-1024×576.png)

  6. If the user fills something out incorrectly, it’s not possible to recover in some cases: This is just another vote for asking users for needed information up front and avoiding modal dialogs – I filled out the wrong email address in one of the pop-up modals during the project creation process and realized it too late.
  7. Git name / email address very sticky and not easily fixed: During Use Case 1, I was prompted for my name and didn’t realize it was the name that would be attached to my git commits. Once it’s input, though, there’s no way to update it. It’s unclear where it’s stored – I blew away the project I was creating when I input it, thinking it was stored in the .git/config file, but DevAssistant still remembered it. Configuration items like this that apply to all projects should be editable within the UI if they can be input in the UI (see the sketch after this list).
  8. Text fields for inputting long paths are unnecessarily short: This is pointed out specifically in Use Case 2, but I think all the setup dialogs were affected by it. The text field for putting the path to your project in, for example, was only long enough in my case for “/home/duffy/Repo” to be visible. The field should be longer than that by default.
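
On issue 7 above: my guess is that the sticky identity landed in the global Git configuration rather than in the project's .git/config, in which case it can be fixed from a terminal until the UI exposes it. A workaround sketch; the name and email here are placeholders:

    import subprocess

    # Assumption: DevAssistant wrote the identity to the global git config.
    # Overwrite it for all future commits, then read it back to confirm.
    subprocess.call(["git", "config", "--global", "user.name", "Máirín Duffy"])
    subprocess.call(["git", "config", "--global", "user.email", "duffy@example.com"])
    subprocess.call(["git", "config", "--global", "--get", "user.name"])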

Status / Transactional

These are issues that revolve around the current status of the task the user is trying to complete. An example of a common issue related to this in UIs in general is lack of progress bars for long-running tasks.

  1. When the main window is waiting on user input, it should be made clear: During the Django setup process in Use Case 1, I had opted to use Github for the new project I was creating. After filling out the config screen and pressing ‘Run,’ it looked like the project creation process was going to take a while, so I multi-tasked and worked on something else. I checked on the main window a few times – at a certain point, it said “In progress” but it wasn’t actually doing anything – a tiny little window had popped up in the upper-left corner, halting the whole process. It would have been better to ask me for that information up front, so as not to halt the process. But it also would have been good, if the main window is waiting on something, for it to let the user know it’s waiting and isn’t “In progress.” (Maybe it could say, “Paused, waiting on user input”?)

  2. Ask users all of the information you need up front, so they can walk away from the computer while you set things up: Speaking of that last point – this was an issue we had in the old Anaconda, too. During the installation process, sometimes error messages or dialogs would pop up during the install process and they would halt install progress. So the user may have gone to get a coffee, come back, and everything wasn’t done because anaconda sat there a lot of the time asking if it was okay to import some GPG key. When you have a long-running process (a git repo sync, for example,) I think it’s better to ask the user for everything you need up front rather than as the application needs it. It’s akin to someone coming up to your desk to ask a question, going away for a couple of minutes, then tapping you on the shoulder and asking you another question, then coming back 3 minutes later to ask another one – people like that are annoying, right?! (Well, small children get away with this though, being as cute as they are. :) )
  3. Transaction status unclear after errors or even after completion: When I canceled the Django project creation in Use Case 1 because I input the wrong email address, I wasn’t sure of the state of the project and how to proceed cleanly. What directories had been created on my system? What packages had been installed? Etc. I would have liked to clean up if possible, but it wasn’t clear to me what had happened and it didn’t seem like there was an easy way to undo it. Ideally, after hitting cancel, the application would have explained to me something about the state of the system and recommended a forward course of action (is it better to blow everything away and start over? Or re-run the command using the same parameters?)
  4. Little/vague progress indication: There’s a yellow “in progress” string on the wizard screen after you hit run, and the cursor goes into spinner mode if you focus on that dialog, but there could be better progress indication. A spinner built into the window (here’s an example) is a good option if it’s not possible to do a progress bar (see also the sketch after this list). Progress bars are a little better in that they give the user an indication of how much time it might take to complete.
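
As a quick illustration of the built-in spinner idea from item 4, here is a minimal PyGObject sketch (not DevAssistant code):

    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk

    win = Gtk.Window(title="DevAssistant (sketch)")
    box = Gtk.Box(orientation=Gtk.Orientation.VERTICAL, spacing=6)

    spinner = Gtk.Spinner()
    spinner.start()  # animates for as long as the long-running task works

    box.pack_start(spinner, True, True, 6)
    box.pack_start(Gtk.Label(label="Installing packages…"), False, False, 6)

    win.add(box)
    win.connect("destroy", Gtk.main_quit)
    win.show_all()
    Gtk.main()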

Layout / Aesthetics

These are issues around the actual layout, typography, padding, arrangement of widgets, widget choices in a given screen or dialog. They tend to be surface issues, and usually it’s a better use of time to rethink and redesign a dialog completely rather than fix these kinds of issues only on the surface (which could be akin to putting a different shade of lipstick on.)

  1. Package install confirmation dialog layout issues: So this is pretty surface-level critique – but there are a few issues with the layout of the dialog asking the user if it’s okay to install packages via Yum. Here’s what it looks like initially:

    Usually the button that moves you forward is the farthest to the right, and the cancel is to the left. Here, the ‘Show packages’ button is on the right. I think maybe ‘Show packages’ should not be on the same line as ‘Ok’ and ‘Cancel,’ but should instead be a link above the buttons that expands into the full list (limited to a specific max vertical height, of course, so as not to push the buttons out of reach and make them unclickable!). The list itself has numbers, ‘1’ and ‘2,’ before some of the package names – it’s unclear to me why they are there. Also, the list is very long but there’s no way to filter or search through it, so that might be a nice thing to offer. What are people concerned about when evaluating a list of packages to be installed on their system? The number of packages and total size might be useful – the size isn’t listed, so that could be a good piece of information to add. I could go on and on about this dialog, but hopefully this demonstrates that it could use some more iteration.
  2. Layout of options via linear checkbox list can cause confusion about relationship between options: In several cases during the walkthrough, I became unsure of how to fill out the setup wizard for various project types because I wasn’t sure how the different checkbox options would work. In one case, it seemed as if particular selections of checkboxes could never result in success (e.g., in use case 4 when I tried to create a custom project with only the ‘deps only’ checkbox selected.) In other cases, some of the options seemed to be vaguely related to each other (Eclipse or vim) and others seemed less related (github vs python 3 or 2.) I think probably each screen needs to be reviewed and potentially rearranged a bit to better represent the dependencies between the checkboxes, base requirements, etc. For example, categorizing the options – put Eclipse and VIM under a “Development Environment” category, put git / github / etc. options under a “Version Control” category, etc.

Feature Set

These issues are around the features that are available (whether or not they are appropriate / useful or accessible / placed correctly) as well as features that might be needed that are missing.

  1. No way to import pre-existing project that wasn’t created with DevAssistant? This one seems like a bit of a show stopper. I tried to import a project that wasn’t created using DevAssistant (which is the majority of upstream projects at this point,) and it didn’t work. It bailed out after detecting there’s no .devassistant in the repo. If there is a way to do this, it’s not clear to me having gone through the UI. It would be nice if it could import an existing project and create a .devassistant for it and help manage it.
  2. The button to create a github project is under the ‘Modify Project’ tab, not the ‘Create Project’ tab: This is a bit of an oddity – the create project tab is more focused on languages / frameworks… creating a new project in GitHub is definitely creating a new project though, so it doesn’t make sense for it to be under “Modify Project.”

Bugs

These are just outright bugs that don’t have so much to do with the UI specifically.

  1. Screen went black during root authentication during Django setup: I think this was some kind of bug, and wasn’t necessarily DevAssistant’s fault.
  2. Could not create a Django project on development version of Fedora: My Django project creation failed. The error message I got was “Package python-django-bash-completion-1.6.6-1.fc21.noarch.rpm is not signed. Failed to install dependencies, exiting.” Now, while this is a little unfair to point out since I was using F21 Alpha TC4 – DevAssistant should probably be able to fail more gracefully when the repos are messed up. When errors happen, the best error messages explain what happened and what the user could try to do to move forward. There’s no suggestion here for what to do. I tried both the “Back” and “Main window” buttons. Probably, at the least, it should offer to report the bug for me, and give me explicit suggestions, e.g., “You can click ‘Back’ to try again in case this is a temporary issue, or you may click ‘Main Window’ to work on a different project.” It probably could offer a link to the documentation or maybe some other help places (ask.fedoraproject.org questions tagged DevAssistant maybe?)
  3. Unable to pull from or push to github: During the Use Case 1 walkthrough, I was left with a repo that I couldn’t pull from or push to github. It looks like DevAssistant created a new RSA key, successfully hooked it up to my Github account, but for some reason the system was left in a state that couldn’t interact with github.
  4. C project / Eclipse project creation didn’t work: There was a Java/Eclipse error message pop up and an error with simple_threads.c. Seems like a bug? (Full error log)
  5. Tooltip for Eclipse import talks about running it in the projects directory: This seems like a bug – the string is written specifically for the command line interface. It should be more generic.

Up Next

Next, we’ll talk about some ways to address some of these issues, and hopefully walk through some sketchy mockups. This one might take a bit longer because I haven’t sketched anything out yet. If I’m not able to post Part 3 this week, expect it sometime next week.

DevAssistant Heuristic Review Part 1: Use Case Walkthroughs

You might be asking yourself, “What the heck is a heuristic review?”

It’s just a fancy term; I learned it from reading Jakob Nielsen’s writings. It’s a simple process of walking through a user interface (or product, or whatever,) and comparing how it works to a set of general principles of good design, AKA ‘heuristics.’

To be honest, the way I do these generally is to walk through the interface and document the experience, giving particular attention to things that jump out to me as ‘not quite right’ (comparing them to the heuristics in my head. :) ) This is maybe more accurately termed an ‘expert evaluation,’ then, but I find that term kind of pompous (I don’t think UX folks are any better than the folks whose software they test,) so ‘heuristic review’ it shall be!

Anyway, Sheldon from the DevAssistant team was interested in what UX issues might pop out to me as I kicked the tires on it. So here’s what we’re going to do:

  • Here in Part 1, I’ll first map out all the various pieces of the UI so we can get a feel for everything that is available. Then, I’ll walk through four use cases for working with the tool, detailing all the issues I run into and various thoughts around the experience.
  • In Part 2, I’ll analyze the walkthrough results and create a categorized master list of all the issues I encountered.
  • In Part 3, I’ll suggest some fixes / redesigns to address the issues catalogued in Part 2.

Okay – ready for this? :) Let’s go!

Setup Wizard Mapping

(This is the initial dialog that appears when you start DevAssistant.)

Screenshot of the initial DevAssistant screen

I’m starting this review of DevAssistant’s GUI by walking through each tab and mapping out a hierarchy of how it is arranged at a high level. This helps me get an overall feel for how the application is laid out.

  • Create Project
    • C
    • C++
    • Java
      • Simple Apache Maven Project
      • Simple Java ServerFaces Projects
    • Node.js
      • Express.JS Web Framework
      • Node.JS application
    • Perl
      • Basic class
      • Dancer
    • PHP
      • Apache, MySQL, and PHP Helper
    • Python
      • Django
      • Flask
      • Lib
      • Python GTK+ 3
    • Ruby
      • Ruby on Rails
  • Modify Project
    • C/C++ projects
      • Adding header
      • Adding library
    • Docker
      • develop
    • Eclipse Import
    • Github
      • Create github repository
    • Vim Setup
  • Prepare Environment
    • Custom Project
    • DevAssistant
  • Custom Task
    • Make Coffee

Use Case Testing

I’m going to come up with some use cases based on what I know about DevAssistant and try to complete them using the UI.

Use Cases for Testing

  1. Create a new website using Django.
  2. Create a new C project, using Eclipse as a code editor.
  3. Import a project I already have on my system that I have cloned from Github, and import it into Eclipse.
  4. Begin working on an upstream project by locally cloning that project and creating a development environment around it.

Use Case 1: Create a new website using Django

I’m not much of a Django expert, so this may end up being hilarious. So I know Django is Python-based, and this is a new project, so I click on the “Create Project” tab, then I click on “Python.” I select “Django” from the little grey menu that pops up. The little grey menu looks a little bit weird and isn’t the type of widget I was expecting, but it works I guess, and I successfully click on “Django.” Note: the items in the submenu under Python are organized alphabetically.

An example of the little grey menu – this one appears when you click on the Python button. Not all of the buttons have a grey menu.

A new screen pops up, and the old one disappears. I had the old screen (the main DevAssistant window) placed in the lower right of my screen. The new screen that appears (Project: Create Project -> Python -> Django) jumps up and to the right – it’s centered perfectly on my left monitor. It looks like I’m meant to feel that this is a second page of the same window (for example, the way the subitems of GNOME control center work.) Instead, though, it feels like a separate window because it’s a little bit larger than the first window and it jumped across the screen so dramatically.

Django project setup window

This new window is a bit overwhelming for me. First it asks for a project name. I like ponies, and Django does too, so I call my project “Ponies.”

Next, it wants to know where to create the project. It suggests /home/duffy, but man is my home pretty messy. I click on “Browse” to pick somewhere else, thinking I might create a “Projects” subdirectory under /home/duffy to keep things nice and clean. There isn’t a way to create a directory in this file chooser, so I drop down to a terminal and create the folder I want, then fill out the field to say, “/home/duffy/Projects” and move on.

Now, it’s time to look through the available options. Hm. This is definitely the most overwhelming part of the screen. Looking through the options… two seem to be related to coding environments – there’s a checkbox for Eclipse, and there’s a checkbox for vim. There’s an option to use Python 3 instead of Python 2. There’s an option to add a Dockerfile and create a Docker image. There’s a virtualenv option, and a deps-only option. I think I understand all of these options except for “Deps-only,” which is labeled “Only install dependencies.” If I don’t only install dependencies, then what happens? What is the alternative to checking that box? I’m not sure.

Anyway, back to the editors. I like vim, but this is a fresh desktop spin installation and I know that doesn’t come with vim preinstalled. I wonder what will happen if I pick vim. I decide to do it.

Oh, and there’s a Github option. It will create a GitHub repo and push the sources there. That is pretty slick; I click that checkbox too and provide my github username. Then I click “Run” in the lower right corner. (Note that a lot of new GNOME 3 apps have the button to progress forward in the upper right.)

Next, a screen pops up with a log that spits out some log-style spew; it looks like it’s installing some RPMs. Quickly, a modal dialog pops up that says:


Installing 20 RPM packages by Yum. Is this ok?

[ No ] [ Yes ] [ Show packages ]

The modal dialog has the same problem of being centered to the whole desktop rather than centered along where the parent window was. I like that it offers to show me which packages it’s going to install. I click on “Show packages.” I get a very nice scrollable display in the same window, neat and clean. I click “hide packages” to hide the list. Then I click “Yes” to move forward.

Now things got a bit weird. My whole screen went black. A gnome-shell style black dialog is in the center of this black screen and it is asking for my root password. I don’t think the screen behind the dialog should be black. It feels a little weird. (Turns out this was an F21 TC4 issue only.) I type in my root password and click to continue.

And it seems the process failed. (To be fair, I am doing this on an alpha test candidate – F21 TC4 – so the issue may be with the repos and not DevAssistant’s fault.) It says:


Resolving RPM dependencies ...
Failed to install dependencies, exiting.

I like the option to copy the error message to the clipboard, and to view and copy to clipboard the debug logs. It errored out because of a packaging issue, it looks like:


Package python-django-bash-completion-1.6.6-1.fc21.noarch.rpm is not signed
1
Failed to install dependencies, exiting.
Failed to install dependencies, exiting.

There is also a “Back” button and a “Main window” button; I’m not sure which to click. I try “Back” first. That brings me back to the screen where I filled in all the details for my project; however, I know it won’t work now when I click run.

So at this point, I emailed Sheldon to let him know that I ran into some breakage, and he told me that it wasn’t necessary to test on F21 TC4 – what’s in F20 at this point is reasonably recent and worth doing a heuristic review on. So let’s continue from this point, using F20. :)

This time, I pick the same options on the create new Django project screen, and when I press forward, it says it’s installing 21 packages. Okay. It seems to be going, and I realize after wasting precious minutes of life reading crap on Twitter that it has been quite some time. I check back on the DevAssistant window – it looks like it’s still working, but it’s kind of not clear what it’s really doing.

Then I notice the tiny little dialog peering down at me from the extreme upper left corner of my laptop screen (it is easy to find in this screenshot; harder when other windows are open):

Hey, little guy! Whatcha doing up there?

So this is another window positioning issue. I drag that little guy (who has some padding and alignment issues himself, but nothing earth-shattering) closer to the center of the screen so I can fill him out. The problem is, I’m not really sure of the context – why does it want my name? Does it just want my nickname, my first name, my full name, my IRC handle…? I end up typing ‘mairin’ and hit enter.

This little dialog is centered with the main window, thankfully.

And then, something clicks. “I bet it wants my name and email address for the git config.” Well, crap. I already typed in “mairin,” and that’s not the name I want on my commits. I hit “Cancel” on the email dialog shown above, and try to “start over” by going back to the main window and creating the “Ponies” project again. But… ugh:

I changed the path from ~/Projects to ~/Code just because.

So there are a few problems here:

  • The form field for my name lacked enough context for me to understand what information the software really wanted from me.
  • I figured out what the software wanted from me too late – and there isn’t any way for me to go back and fix it via the user interface, as far as I can tell.
  • There’s a transactional issue: in order to completely finish creating the project as I requested, DevAssistant needed some additional information. I bailed out of providing that information, leaving the project in an unknown state. (Will it work, and just miss the features that required information I didn’t provide? Since I bailed out early, which features will be missing? Is there a way to fix it by filling them in afterwards? Should I just delete from the filesystem and start over again?)

The latter is what I did – I went into nautilus, nuked my ~/Code/Ponies directory, and ran through the Django project creation process (same options) from the main DevAssistant window one more time.

Unfortunately, it remembered the name I had given it. Normally this is a wonderful thing – interfaces that ask the same question of a user over and over again are annoying. In this instance, however, the politeness of remembering my name was a bit unforgiving – how could I correct my name now? Will all projects I create in the future using DevAssistant have “mairin” as my name instead of the “Máirín Duffy” of my vain desires??

Well, a rose by any other name would smell as sweet – whatever. Let’s carry on. So I am asked my GitHub password after my email address, which I provide, and soon afterwards I am greeted with a completed progress screen, a link to the project I created on GitHub, and a perusable log of everything DevAssistant just did:

That was definitely an easy way to create a project on GitHub.

So I think I’m done at this point? Maybe? I’m not 100% clear where to go from here. Some potential issues I’ll note at this point:

  • The project I created on GitHub through this process is completely empty. I was expecting some Django-specific boilerplate content to be present in the repo by default, and maybe some of the files suggested by GitHub (README, LICENSE, .gitignore.) But maybe that part happens later?
  • There’s an ssh issue in the logs. Ah. Now we see why my repo on GitHub is empty:

    Problem pushing source code: ssh_askpass: exec(/usr/libexec/openssh/ssh-askpass): No such file or directory
    Host key verification failed.
    fatal: Could not read from remote repository.

I didn’t see a seahorse dialog pop up asking me to unlock my ssh key. I open up seahorse – it looks like DevAssistant made an RSA key for me. I’m not sure what’s going on here, then. It never asked me for a passphrase to create a new key.

I have an interesting test case in that I have a new laptop that I didn’t copy my real ssh key over to yet. I wonder how this would have gone if I did have my real ssh key on this system…

Then, I get an email from GitHub:


The following SSH key was added to your account:

DevAssistant
98:a0:c9:9e:aa:44:08:ae:c4:96:a0:f9:1c:96:34:04

If you believe this key was added in error, you can remove the key and disable
access at the following location:

https://github.com/settings/ssh

If the new ssh key was added to my account, then why didn’t this work? :-/

My big question now is: what do I do next? Here is what I have:

  • A new boilerplate Django project in my home directory.
  • An empty GitHub project.
  • Some stuff that got added to vim (how do I use it?)

What I don’t have that I was expecting:

  • Some kind of button or link or something to the boilerplate code that was created locally with some tips / hints / tricks for how to work with it. (Links to tutorials? Open up some of the key files you start working with in that environment in tabs in Geany or Eclipse or some IDE? Okay so I selected vim – tell me how to open this up in vim?)
  • Some acknowledgement of the ‘Ponies’ project I just created in the DevAssistant UI. I feel that my ponies have been forgotten. There isn’t any tab or space in the interface where I can view a list of projects I created using DevAssistant and manage them (e.g., like changing the ssh key or changing my name / email address associated with the project.)

I’m feeling a bit lost. Like when the lady in my GPS is telling me how to get to Manhattan and she stops talking to me somewhere in the Bronx.

Use Case 2: Create a new C project, using Eclipse as a code editor.

Back to the main window in DevAssistant! I click on the “C” button and right away am greeted with the “Create Project -> C” screen, which I dutifully fill out to indicate a desire to use Eclipse and to upload to GitHub:

(Screenshot of the filled-out “Create Project -> C” screen.)

A modal alert dialog asks me if it’s okay to install 139 packages (and again, helpfully offers a list of them if I want it.)

The dialog that asks for permission to install required dependencies. The alignment within the dialog is a bit off; there’s a lot of extra padding on the bottom and the buttons are a bit high up. The OK button should probably be the right-most one, and a different widget used for ‘show packages’ (like a disclosure triangle, maybe).

First, I click to show the list of packages. Now I see why there is so much padding on the bottom of the dialog. :) But it’s not enough space to comfortably skim the list of dependencies:

(Screenshot of the expanded package list, cramped at the bottom of the dialog.)

I drag the window out to make it a bit bigger so I can view the list more comfortably. Some package names have a “1:” in front of them, some have “2:,” and some have nothing in front. I’m not sure why.


Anyway, enough playing around. I agree it’s okay to install the dependencies.

I watch the dialog. 139 packages is a lot of packages. While they are downloading, there’s no progress bar or animation or anything to let me know that it’s still actively working and not crashed or otherwise unstable. The only indications I have are the cursor getting set to spinner mode when I go to the DevAssistant window, and the text, “Downloading Packages:” at the bottom of the visible log in the DevAssistant window:

(Screenshot of the DevAssistant window during package download.)

After a little while, unfortunately, things didn’t go so well:

(Screenshot of the failed setup wizard window.)

Here’s the full error log.

So now I’m not sure what state I’m left in. The “DevAssistant setup wizard” window has grayed out “Main window” and “Debug logs” buttons – the only live button is “Copy to clipboard.” I click on “x” in the upper right corner and it tries to quit but it doesn’t seem to do anything. Then I notice the large Java error popup window hidden behind my browser window:

This was too long to display fully on my 2560×1440 monitor… the button to close it wasn’t accessible. Luckily I know how to Alt+F4.

Once I closed that window, the main DevAssistant wizard window changed, and I was able to get access to the main window and back buttons.

On to the next use case!

Use Case 3: Import a project I already have on my system that I have cloned from Github, and import it into Eclipse

All right, so what project should I import? I’m going to import my selinux-coloring-book repo. :) This is a git repo I created on github and have synced locally. Let’s see if I can import it and open it in eclipse.

So I go back to the main DevAssistant setup wizard window and I click on the ‘Modify Project’ tab along the top (is this the right one to use? I’m not sure):

The "Modify Project" tab

The “Modify Project” tab

I’m not sure whether I should do “Eclipse Import” or “Github.” If I hover over the Github button, it says:

Subassistants of Github assistant provide various ways to work with Github repos. Available subassistants: Create Github repository.

While the first sentence of the description makes this seem like the right choice, the last sentence gives me the sense that the only thing this button can do is create a new github repo since that seems to be the only available subassistant (whatever a subassistant is.)

The Eclipse Import hover message is:

This assistant can import already created project into Eclipse. Just run it in the projects directory.

This seems like what I want, except the last line has me confused. I’m running a UI, so why is it telling me to run something in a directory? (I’m assuming this is maybe a shared help text with a command-line client, so it wasn’t written with the GUI in mind?) Anyway, I’m going to go with the “Eclipse Import” button.

Again, the main DevAssistant window disappears and a new window pops up, ignoring my window placement of the first window and centering itself on top of my windows in the middle of my active screen. Here’s what that new window looks like:

Window shown after the Eclipse Import process is started.

So I notice a few issues on this screen (although note some of them may be because I’m not a real developer and I haven’t used Eclipse in years):

  • There are two text fields where the user can specify a path on the file system. While such paths are usually pretty long, the fields aren’t wide enough to show much beyond the portion of my path that points to my home directory – /home/duffy. So these fields should probably be wider, given the length of these kinds of paths.
  • I’m not sure what Deps-Only is going to do – what kind of project is it assuming I have? Is it going to somehow detect the dependencies (from make files?) and install them without importing the project? Why would I want to do that?
  • The options are listed out with checkboxes – and you can click them all at once. Does that make sense to do? I guess it does – I could specify the Eclipse workspace directory (although is that in ~/workspace or $PROJECT-PATH/eclipse?), and the path to the project, and that I only want deps-only. It seems like maybe ‘deps-only’ is an option that is subject to path though – if I don’t specify a path, how is it going to detect deps?
  • In fact, the process fails completely if I only select the “Deps-Only” checkbox and nothing else. So this selection shouldn’t be possible.

I end up just specifying my path (which points to /home/duffy/Repositories/selinux-coloring-book,) checking nothing else off, and clicking “run.” This doesn’t work – I get a blank “Log from current process” window that says “Failed” on the bottom:

(Screenshot of the blank log window that says “Failed” at the bottom.)

On a whim, I check to make sure Eclipse is installed – yeah, it is. That’s not the issue. I go back and check off the “Eclipse” checkbox in addition to the “Path” checkbox – it looks like the failed C project creation made a ~/workspace directory, so I use that one. I hit run again…

(The same blank “Failed” log window again.)

I’m not sure where to go from here, so I’ll move on to the next use case.

Use Case 4: Begin working on an upstream project by locally cloning that project and creating a development environment around it

What upstream project should I work on? Let’s try something I don’t already have synced locally that isn’t too huge. I’ll choose fedora-tagger.

Okay back to the main DevAssistant window:

Screenshot of the initial DevAssistant screen

Where do I start? Not “Create Project,” because that’s for a new project. I look under “Prepare Environment:”

Prepare environment tab

I’m not going to be working on DevAssistant or OpenStack, so I examine the hover text for “Custom Project:”

Only use this with projects whose upstream you trust, since the project can specify arbitrary custom commands that will run on your machine. The custom assistant sets up environment for developing custom project previously created with DevAssistant.

Hm. So two things here:

  • This won’t work, because fedora-tagger wasn’t created with DevAssistant.
  • I don’t think I would trust any project with this option, unless I could at least examine the commands it specified to run before running them. It would be nice to have a way to do that. Looking at the dialog linked to this button, it doesn’t look like there is.

Okay, so now what will I do? It doesn’t seem like there is a way to complete this use case under “Prepare Environment” unless my upstream is OpenStack or DevAssistant. The “Custom Task” tab won’t work, because the only option there is “Make Coffee.” So I’ll try the “Modify Project” tab:

The "Modify Project" tab

The “Modify Project” tab

Even though I know from an earlier use case that the hover text for the “Github” button seems to indicate that it can only create new Github projects, I try it. Nope, it just lets you create a new project. Hm. Well, fedora-tagger is a Python project. There’s a C/C++ projects button – maybe I should pick a C project instead.

So I’ll try a C project. What is written in C? I know a lot of GNOME stuff is written in C; I’m sure I could find something there. So I look at the tooltip on the C/C++ projects button:

This assistant will help you ito [sic] modify your C/C++ projects already created by devassistant. Avalaible [sic] subassistants: Adding header, Adding library

Well, I don’t know any GNOME C projects that were created with DevAssistant. :-/ At this point, I’m not sure how to move forward. I click on the “Get help…” link in the upper right:

(Screenshot of Firefox’s “Problem loading page” error.)

It looks like doc.devassistant.org doesn’t work at all. Nor does devassistant.org… it must be down. It’s still up on readthedocs.org though… okay, let’s see:

Preparing: Custom – checkout a custom previously created project from SCM (git only so far) and install needed dependencies

This seems to be what I need? But the button for “Custom Project” said that the project should have been created with DevAssistant. Let’s try it anyway. :) I’ll use gnome-hello, which is a small C demo project.

First screen for "Custom Project"

First screen for “Custom Project”

A couple things on this screen:

  • All of the items are checkboxes, except for URL which has a red asterisk (‘*’) – why? Is that one required? Then it should probably have a checked and grayed checkbox?
  • Again, the text fields for typically long path strings are pretty narrow.

I paste in the gnome-hello git url (git://git.gnome.org/gnome-hello) and hit “Run.”

Custom Project completion dialog.

Hmm, okay. So it didn’t find a .devassistant and bailed out. What did it do on my filesystem?

Project directory created by Custom Project wizard

I would have preferred it not dump the git repo in my home dir – but I suppose the path was an option I could have specified on the previous screen. It would be better if I could specify once that I keep my git repos in ~/Repositories, so that DevAssistant uses that by default and I don’t have to input it every time.

I don’t think there’s anything else I can do here, so I’ll finish here.

On to analysis!

In Part 2, we’ll go through the walkthrough and pull out all of the issues encountered, then sort them into different categories. Look for that post soon. :)

Fanart by Anastasia Majzhegisheva – 15

Maya Morevna by Anastasia Majzhegisheva

Winter is coming here in Siberia. So Anastasia brings a paper-and-pencil artwork of Maya Morevna, with a winter spirit. And yes – there is no typo in the character name here.

September 22, 2014

Pi Crittercam vs. Bushnell Trophycam

I had the opportunity to borrow a commercial crittercam for a week from the local wildlife center. [Bushnell Trophycam vs. Raspberry Pi Crittercam] Having grown frustrated with the high number of false positives on my Raspberry Pi based crittercam, I was looking forward to seeing how a commercial camera compared.

The Bushnell Trophycam I borrowed is a nicely compact, waterproof unit, meant to strap to a tree or similar object. It has an 8-megapixel camera that records photos to the SD card -- no wi-fi. (I believe there are more expensive models that offer wi-fi.) The camera captures IR as well as visible light, like the PiCam NoIR, and there's an IR LED illuminator (quite a bit stronger than the cheap one I bought for my crittercam) as well as what looks like a passive IR sensor.

I know the TrophyCam isn't immune to false positives; I've heard complaints along those lines from a student who's using them to do wildlife monitoring for LANL. But how would it compare with my homebuilt crittercam?

I put out the TrophyCam the first night, with bait (sunflower seeds) in front of the camera. In the morning I had ... nothing. No false positives, but no critters either. I did have some shots of myself, walking away from it after setting it up, walking up to it to adjust it after it got dark, and some sideways shots while I fiddled with the latches trying to turn it off in the morning, so I know it was working. But no woodrats -- and I always catch a woodrat or two in PiCritterCam runs. Besides, the seeds I'd put out were gone, so somebody had definitely been by during the night. Obviously I needed a more sensitive setting.

I fiddled with the options, changed the sensitivity from automatic to the most sensitive setting, and set it out for a second night, side by side with my Pi Crittercam. This time it did a little better, though not by much: one nighttime shot with something in it, plus one shot of someone's furry back and two shots of a mourning dove after sunrise.

[blown-out image from Bushnell Trophycam] The few nighttime shots it did take were mostly so blown out that you couldn't make out any detail. Doesn't this camera know how to adjust its exposure? The shot here has a creature in it. See it? I didn't either, at first. It's just to the right of the bush. You can just see the curve of its back and the beginning of a tail.

Meanwhile, the Pi cam sitting next to it caught eight reasonably exposed nocturnal woodrat shots and two dove shots after dawn. And 369 false positives where a leaf had moved in the wind or a dawn shadow was marching across the ground. The TrophyCam only shot 47 photos total: 24 were of me, fiddling with the camera setup to get them both pointing in the right direction, leaving 20 false positives.

So the Bushnell, clearly, gives you fewer false positives to hunt through -- but you're also a lot less likely to catch an actual critter. It also doesn't deal well with exposures in small areas and close distances: its IR light source seems to be too bright for the camera to cope with. I'm guessing, based on the name, that it's designed for shooting deer walking by fifty feet away, not woodrats at a two-foot distance.

Okay, so let's see what the camera can do in a larger space. The next two nights I set it up in large open areas to see what walked by. The first night it caught four rabbit shots, with only five false positives. The quality wasn't great, though: all long exposures of blurred bunnies. The second night it caught nothing at all overnight, but three rabbit shots the next morning. No false positives.

[coyote caught on the TrophyCam] The final night, I strapped it to a piñon tree facing a little clearing in the woods. Only two morning rabbits, but during the night it caught a coyote. And only 5 false positives. I've never caught a coyote (or anything else larger than a rabbit) with the PiCam.

So I'm not sure what to think. It's certainly a lot more relaxing to go through the minimal output of the TrophyCam to see what I caught. And it's certainly a lot easier to set up, and more waterproof, than my jury-rigged milk carton setup with its two AC cords, one for the Pi and one for the IR sensor. Being self-contained and battery operated makes it easy to set up anywhere, not just near a power plug.

But it's made me rethink my pessimistic notion that I should give up on this homemade PiCam setup and buy a commercial camera. Even on its most sensitive setting, I can't make the TrophyCam sensitive enough to catch small animals. And the PiCam gets better picture quality than the Bushnell, not to mention the option of hooking up a separate camera with flash.

So I guess I can't give up on the Pi setup yet. I just have to come up with a sensible way of taming the false positives. I've been doing a lot of experimenting with SimpleCV image processing, but alas, it's no better at detecting actual critters than my simple pixel-counting script was. But maybe I'll find the answer, one of these days. Meanwhile, I may look into battery power.
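
(For the curious, here's roughly what I mean by pixel-counting: compare successive frames and count how many pixels changed by more than some threshold. This is just a minimal sketch using the Python Imaging Library -- the threshold and the half-percent cutoff are placeholders that need tuning per camera, not the values from my real script.)

from PIL import Image, ImageChops

def count_changed_pixels(file1, file2, threshold=30):
    # Compare two frames in grayscale, pixel by pixel.
    img1 = Image.open(file1).convert("L")
    img2 = Image.open(file2).convert("L")
    diff = ImageChops.difference(img1, img2)
    # Map pixels that changed more than the threshold to 255:
    mask = diff.point(lambda p: 255 if p > threshold else 0)
    return sum(1 for p in mask.getdata() if p)

if __name__ == "__main__":
    import sys
    changed = count_changed_pixels(sys.argv[1], sys.argv[2])
    width, height = Image.open(sys.argv[1]).size
    # Call it motion if more than half a percent of the frame changed:
    if changed > 0.005 * width * height:
        print("Motion detected: %d pixels changed" % changed)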

We'll celebrate the GNOME 3.14 release Tuesday evening in Lyon

In French, for a change :)

On Tuesday evening, September 23, a few of us will get together around 6:30pm at the Smoking Dog for a few drinks, and carry on with an Indian dinner near the St-Jean metro station.

Feel free to sign up on the Wiki, whether you're a GNOME user, a developer, or simply a friend of free software.

See you Tuesday!

September 21, 2014

Fresh software from the 3.14 menu

Here is a small recap of the GNOME 3.14 features I worked on. Some are already well publicised through blogs:
And obviously loads of bug fixes, and patch reviews. And I do mean loads :)

To look forward to

If all goes according to plan, I'll be able to merge the aforementioned automatic rotation support into systemd/udev. The kernel API is pretty bad, which makes the user-space code look bad...

The first parts of ebooks support in gnome-documents have already been written, scheduled for 3.16.

And my favourites

Note: With links that will open up like a Christmas present when GNOME 3.14 is released.

There are a lot of big, new features in GNOME 3.14. The Adwaita rewrite made it possible to polish the theme greatly. The captive portals support is very useful; those of you who travel will enjoy it (I certainly have!).

But my favourite new feature has to be the gestures support in gnome-shell. I'll make good use of that :)

September 20, 2014

Concept of Baba Yaga Character

Here is a concept for another character – Baba Yaga. You can find a lot of references to this name in Russian folklore, but in our story this old lady is an outstanding cybernetics scientist. This strong, uncompromising person bears a dark past, directly related to the birth of the main antagonist of the story – Koshchei The Deathless. Many thanks to Anastasia Majzhegisheva for the artwork.

Baba Yaga concept by Anastasia Majzhegisheva.

Fri 2014/Sep/19

  • I finally got off my ass and posted my presentation from GUADEC: GPG, SSH, and Identity for Beginners (PDF) (ODP). Enjoy!

  • Growstuff.org, the awesome gardening website that Alex Bailey started after GUADEC 2012, is running a fundraising campaign for the development of an API for open food data. If you are a free-culture minded gardener, or if you care about local food production, please support the campaign!

September 19, 2014

Fanart by Anastasia Majzhegisheva – 14

Marya Morevna. Pencil artwork by Anastasia Majzhegisheva.

Mirror, mirror

A female hummingbird -- probably a black-chinned -- hanging out at our window feeder on a cool cloudy morning.

[female hummingbird at the window feeder]

September 18, 2014

Woodcut/Hedcut(ish) Effect



Rolf as a woodcut/hedcut

I was working on the About page over on PIXLS.US the other night. I was including some headshots of myself and one of Rolf Steinort when I got pulled off onto yet another tangent (this happens often to me).

This time I was thinking of those awesome hand-painted(!) portraits by the artist Randy Glass that the Wall Street Journal uses.





Of course, the problem was that I had neither the time nor the skill to hand-paint a portrait in this style.

What I did have was a rudimentary understanding of a general effect that I thought would look neat. So I started playing around. I finally got to something I liked (see lede image), but I didn't take very good notes while I was playing.

This meant that I had to go back and re-trace my steps and settings a couple of times before I could describe exactly what it was I did.

So after some trial and error, here is what I did to create the effect you see.

Desaturate


Starting with your base image, desaturate using a method you like. I'm going to use an old favorite of mine, Mairi:


The base image, desaturated.

Duplicate this layer, and on the duplicate run the G'MIC filter, “Graphic Novel” by Photocomix.

Filters → G'MIC
Artistic → Graphic Novel

Check the box to "Skip this step" for "Apply Local Normalization", and adjust the "Pencil amplitude" to taste (I ended up at about 66). This gives me this result:


After running G'MIC/Graphic Novel

I then adjusted the opacity of the G'MIC layer to taste, reducing it to about 75%. Then I created a new layer from visible (Right-click the layer, “New from visible”).

Here is what I have so far:


On this new layer (it should be called “Visible” by default), run the GIMP filter:

Filters → Artistic → Engrave

If you don't have the filter, you can find the .scm at the registry here.

The only settings I change are the “Line width”, which I set to about 1/100 of the image height, and make sure the “Line type” is set to “Black on bottom”. Oh, and I set the “Blur radius” to 1.

This leaves me with a top layer looking like this:


After running Engrave

(If you want to see something cool, step back a few feet from your monitor and look at this image - the Engrave plugin is neat).

Now on this layer, I will run the G'MIC deformation filter “random” to give some variety to the lines:

G'MIC → Deformations → Random

I used an amplitude of about 2.35 in my image. We are looking to just add some random waviness to the engrave lines. Adjust to taste.

I ended up with:


Results after applying G'MIC/Random deformation to the engrave layer.

At this point I will apply a layer mask to the layer. I will then copy the starting desaturated layer and paste it into the layer mask.


I added a layer mask to the engraved layer (Right-click the layer, “Add layer mask...” - initialize it to white). I then selected the lowest layer, copied it (Ctrl/Cmd + C), selected the layer mask and pasted (Ctrl/Cmd + V). Once pasted, anchor the selection to apply it to the mask.

This is what it looks like with the layer mask applied:


The engrave layer with the mask applied

At this point I will use a brush and paint over the background with black to mask more of the effect, particularly from the background and edges of her face and hair. Once I'm done, I'm left with this:


After cleaning up the edges of the mask with black

I'll now set the layer blending mode to “Darken Only”, and create a new layer from visible again.

Add a layer mask to the new visible layer (should be the top layer), copy the layer mask from the layer below it (the engrave layer), and paste it into the top layer mask:


Now adjust the levels of the top layer (not the mask!) by selecting it and opening the levels dialog:

Colors → Levels...

Adjust to taste. In my image I pulled the white point down to about 175.

At this point, my image looks like this:


After adjusting levels to brighten up the face a bit

At this point, create a new layer from visible again.

Now make sure that your background color is white.

On this new layer, I'll run a strange filter that I've never used before:

Filters → Distorts → Erase Every Other Row...

In the dialog, I'll set it to use “Columns”, and “Fill with BG”. Once it's done running, set the layer mode to “Overlay”. This leaves me with this:


After running “Erase Every Other Row...”

At this point, all that's left is to do any touchups you may want to do. I like to paint with white and a low opacity in a similar way to dodging an image. That is, I'll paint white with a soft brush on areas of highlights to accentuate them.

Here is my final result after doing this:



I'd recommend playing with each of the steps to suit your images. On some images, it helps to modify the parameters of the “Graphic Novel” filter to get a good effect. After you've tried it a couple of times through you should get a good feel for how the different steps change the final outcome.

As always, have fun and share your results! :)

Summary

There seem to be many steps, but it's not so bad once you've done it. In a nutshell (a rough Python-Fu sketch of the scriptable steps follows the list):

  1. Desaturate the image, and create a duplicate of the layer.
  2. Run G'MIC/Graphic Novel filter, skip local normalization. Set layer opacity to about 40-60% (experiment).
  3. Create new layer from visible.
    1. Run Filters → Artistic → Engrave (not Filters → Distorts → Engrave!).
      • Set the “Line width” to ~1/100 of image height, “Line type” to “Black on bottom”
    2. On the same engrave layer, run G'MIC → Deformation → Random
      • Set amplitude to taste
    3. Change layer mode to “Darken only”
    4. Add a layer mask, use the original desaturated layer for the mask
  4. Create new layer from visible
    1. Add a layer mask, using the original desaturated layer for mask again (or the mask from previous layer)
    2. Adjust levels of layer to brighten it up a bit
  5. Create (another) new layer from visible
    1. Set background color to white
    2. Run Filters → Distorts → Erase Every Other Row...
      • Set to columns, and fill with BG color
    3. Set layer blend mode to “Overlay”
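
If you end up making this effect a lot, the GIMP-native steps can be scripted from the Python-Fu console. Here's a rough sketch of steps 3.3 through 4.2 -- a sketch only: the G'MIC filters still have to be run by hand, and the function name and layer handling here are my own, so adapt it to your workflow.

from gimpfu import *

def woodcut_finish(image, desat_layer):
    # Steps 3.3 and 3.4: darken-only mode, plus a mask made from
    # the desaturated base layer (assumes the engrave layer is active).
    engrave = image.active_layer
    pdb.gimp_layer_set_mode(engrave, DARKEN_ONLY_MODE)
    mask = pdb.gimp_layer_create_mask(engrave, ADD_WHITE_MASK)
    pdb.gimp_layer_add_mask(engrave, mask)
    pdb.gimp_edit_copy(desat_layer)
    pasted = pdb.gimp_edit_paste(mask, True)
    pdb.gimp_floating_sel_anchor(pasted)
    # Steps 4 and 4.2: new layer from visible, brightened with levels.
    vis = pdb.gimp_layer_new_from_visible(image, image, "Visible")
    image.add_layer(vis, -1)
    pdb.gimp_levels(vis, HISTOGRAM_VALUE, 0, 175, 1.0, 0, 255)
    pdb.gimp_displays_flush()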

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, or make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t it’s not going to affect me writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.


September 17, 2014

What’s in a job title?

Over on Google+, Aaron Seigo, in his inimitable way, launched a discussion about people who call themselves community managers. In his words: “the “community manager” role that is increasingly common in the free software world is a fraud and a farce”. As you would expect when casting aspersions on people whose job is to talk to people in public, the post generated a great, and mostly constructive, discussion in the comments – I encourage you to go over there and read some of the highlights, including comments from Richard Esplin, my colleague Jan Wildeboer, Mark Shuttleworth, Michael Hall, Lenz Grimmer and other community luminaries. Well worth the read.

My humble observation here is that the community manager title is useful, but does not affect the person’s relationships with other community members.

First: what about alternative titles? Community liaison, evangelist, gardener, concierge, “cat herder”, ombudsman, Chief Community Officer, community engagement… all have been used as job titles to describe what is essentially the same role. And while I like the metaphors used for some of the titles like the gardener, I don’t think we do ourselves a service by using them. By using some terrible made-up titles, we deprive ourselves of the opportunity to let people know what we can do.

Job titles serve a number of roles in the industry: communicating your authority on a subject to people who have not worked with you (for example, in a panel or a job interview), and letting people know what you did in your job in short-hand. Now, tell me, does a “community ombudsman” rank higher than a “chief cat-herder”? Should I trust the opinion of a “Chief Community Officer” more than a “community gardener”? I can’t tell.

For better or worse, “Community manager” is widely used, and more or less understood. A community manager is someone who tries to keep existing community members happy and engaged, and grows the community by recruiting new members. The second-order consequences of that can be varied: we can make our community happy by having better products, so some community managers focus a lot on technology (roadmaps, bug tracking, QA, documentation). Or we can make them happier by better communicating the technology which is already there – so other community managers concentrate on communication, blogging, Twitter, putting a public face on the development process. We can grow the community by recruiting new users and developers through promotion and outreach, or through business development.

While the role of a community manager is pretty well understood, it is a broad enough title to cover evangelist, product manager, marketing director, developer, release engineer and more.

Second: The job title will not matter inside your community. People in your company will give respect and authority according to who your boss is, perhaps, but people in the community will very quickly pigeon-hole you – are you doing good work and removing roadblocks, or are you a corporate mouthpiece, there to explain why unpopular decisions over which you had no control are actually good for the community? Sometimes you need to be both, but whatever you are predominantly, your community will see through it and categorize you appropriately.

What matters to me is that I am working with and in a community, working toward a vision I believe in, and enabling that community to be a nice place to work in where great things happen. Once I’m checking all those boxes, I really don’t care what my job title is, and I don’t think fellow community members and colleagues do either. My vision of community managers is that they are people who make the lives of community members (regardless of employers) a little better every day, often in ways that are invisible, and as long as you’re doing that, I don’t care what’s on your business card.

 

A follow-up to yesterday's “Videos 3.14 features”

The more astute (or Wayland-testing) amongst you will recognise mutter running a nested Wayland compositor. Yes, it means that Videos will work natively under Wayland.

Got to love indie films

It's not perfect, as I'm still seeing hangs within the Intel driver for a number of operations, but basic playback works, and the playback is actually within the same window and correctly hidden when in the overview ;)

Making of GNOME 3.14

The release of GNOME 3.14 is slowly approaching, so I stole some time from actual design work and created this little promo to show what goes into a release that probably isn’t immediately obvious (and a large portion of it doesn’t even make it in).

Watch on Youtube

I’d like to thank all the usual suspects who keep the wheels spinning, Matthias, Benjamin and Allan in particular. The crown goes to Lapo Calamandrei though, because the amount of work he’s done on Adwaita this cycle will really benefit us in the next couple of releases. Thanks everyone, 3.14 will be a great release*!

* I keep saying that every release, but you simply feel it when you’re forced to log in to your “old” GNOME session rather than jhbuild.

September 16, 2014

Videos 3.14 features

We've added a few, but nonetheless interesting, features to Videos in GNOME 3.14.

Auto-rotation of videos

If you capture videos in portrait orientation on your phone, we are now able to rotate them automatically in the movie player, as well as in the thumbnails.

Better streaming

You can now seek anywhere inside streamed videos, even if we didn't download all the way to that point. That's particularly useful for long videos, or slow servers (or a combination of both).

Thumbnail generation

Finally, videos without thumbnails in your videos directory will have thumbnails automatically generated, without your having to browse them in Files first. This makes the first experience of Videos more pleasing to the eye.

What's next?

We'll work on integrating Victor Toso's work on grilo plugins, to show information about the film or TV series on your computer, such as grouping episodes of a series together, showing genres, covers and synopsis for films.

With a bit of luck, we should also be able to provide you with more video content as well, through partners.

Back from Akademy 2014

So last week-end I came back from Akademy 2014. It was a loooong road, but really worth it of course!
Great to meet so many nice people, old friends and new ones. Lots of interesting discussions.

I won’t repeat everything that happened, as it’s already been well covered on the dot and in several blog posts on planet.kde, with lots of great photos in this gallery.

For my part, I’m especially happy to have met Jens Reuterberg and other people from the new Visual Design Group. We got to discuss the tools we have and how we could try to improve/resurrect the Karbon and Krita vector tools, and to share ideas about some redesigns, like for the network manager…

Then another important point was the BoF we had with all the other French people, about our local communication on the web and about planning for Akademy-Fr, which will be co-hosted again with Le Capitole du Libre in Toulouse in November.

Thanks again to everyone who helped organize it, and to KDE e.V. for the travel support that allowed me to be there.

PS: And thanks a lot, Adriaan, for the story, that was very fun.. Héhé, sure I’ll think about drawing it when I have time.. ;)

September 14, 2014

Global key bindings in Emacs

Global key bindings in emacs. What's hard about that, right? Just something simple like

(global-set-key "\C-m" 'newline-and-indent)
and you're all set.

Well, no. global-set-key gives you a nice key binding that works ... until the next time you load a mode that wants to redefine that key binding out from under you.

For many years I've had a huge collection of mode hooks that run when specific modes load. For instance, python-mode redefines \C-c\C-r -- my binding that normally runs revert-buffer -- to do something called run-python. I never need to run python inside emacs -- I do that in a shell window. But I fairly frequently want to revert a python file back to the last version I saved. So I had a hook that ran whenever python-mode loaded to override that key binding and set it back to what I'd already set it to:

(defun reset-revert-buffer ()
  (define-key python-mode-map "\C-c\C-r" 'revert-buffer) )
(setq python-mode-hook 'reset-revert-buffer)

That worked fine -- but you have to do it for every mode that overrides key bindings and every binding that gets overridden. It's a constant chase, where you keep needing to stop editing whatever you wanted to edit and go add yet another mode-hook to .emacs after chasing down which mode is causing the problem. There must be a better solution.

A web search quickly led me to the StackOverflow discussion Globally override key bindings. I tried the techniques there; but they didn't work.

It took a lot of help from the kind folks on #emacs, but after an hour or so they finally found the key: emulation-mode-map-alists. It's only barely documented -- the key there is "The “active” keymaps in each alist are used before minor-mode-map-alist and minor-mode-overriding-map-alist" -- and there seem to be no examples anywhere on the web for how to use it. It's a list of alists mapping names to keymaps. Oh, clears it right up! Right?

Okay, here's what it means. First you define a new keymap and add your bindings to it:

(defvar global-keys-minor-mode-map (make-sparse-keymap)
  "global-keys-minor-mode keymap.")

(define-key global-keys-minor-mode-map "\C-c\C-r" 'revert-buffer)
(define-key global-keys-minor-mode-map (kbd "C-;") 'insert-date)

Now define a minor mode that will use that keymap. You'll use that minor mode for basically everything.

(define-minor-mode global-keys-minor-mode
  "A minor mode so that global key settings override annoying major modes."
  t "global-keys" 'global-keys-minor-mode-map)

(global-keys-minor-mode 1)

Now build an alist containing a single dotted pair: the name of the minor mode and the keymap. (The extra list wrapper is there because emulation-mode-map-alists wants a list of alists.)

;; A keymap that's supposed to be consulted before the first
;; minor-mode-map-alist.
(defconst global-minor-mode-alist (list (cons 'global-keys-minor-mode
                                              global-keys-minor-mode-map)))

Finally, set emulation-mode-map-alists to a list containing only the global-minor-mode-alist.

(setf emulation-mode-map-alists '(global-minor-mode-alist))

There's one final step. Even though you want these bindings to be global and work everywhere, there is one place where you might not want them: the minibuffer. To be honest, I'm not sure if this part is necessary, but it sounds like a good idea so I've kept it.

(defun my-minibuffer-setup-hook ()
  (global-keys-minor-mode 0))
(add-hook 'minibuffer-setup-hook 'my-minibuffer-setup-hook)

Whew! It's a lot of work, but it'll let me clean up my .emacs file and save me from endlessly adding new mode-hooks.

September 13, 2014

About panels and blocks - new elements for FreeCAD

I've been working more or less recently on two new features for the Architecture workbench of FreeCAD: panels and furniture. Neither is in what we could call a finished state, but I thought it would be interesting to share some of the process here. Panels are a new type of object that inherits all...

September 12, 2014

Thu 2014/Sep/11

September 11, 2014

Making emailed LinkedIn discussion thread links actually work

I don't use web forums, the kind you have to read online, because they don't scale. If you're only interested in one subject, then they work fine: you can keep a browser tab for your one or two web forums perennially open and hit reload every few hours to see what's new. If you're interested in twelve subjects, each of which has several different web forums devoted to it -- how could you possibly keep up with that? So I don't bother with forums unless they offer an email gateway, so they'll notify me by email when new discussions get started, without my needing to check all those web pages several times per day.

LinkedIn discussions mostly work like a web forum. But for a while, they had a reasonably usable email gateway. You could set a preference to be notified of each new conversation. You still had to click on the web link to read the conversation so far, but if you posted something, you'd get the rest of the discussion emailed to you as each message was posted. Not quite as good as a regular mailing list, but it worked pretty well. I used it for several years to keep up with the very active Toastmasters group discussions.

About a year ago, something broke in their software, and they lost the ability to send email for new conversations. I filed a trouble ticket, and got a note saying they were aware of the problem and working on it. I followed up three months later (by filing another ticket -- there's no way to add to an existing one) and got a response saying be patient, they were still working on it. 11 months later, I'm still being patient, but it's pretty clear they have no intention of ever fixing the problem.

Just recently I fiddled with something in my LinkedIn prefs, and started getting "Popular Discussions" emails every day or so. The featured "popular discussion" is always something stupid that I have no interest in, but it's followed by a section headed "Other Popular Discussions" that at least gives me some idea what's been posted in the last few days. Seemed like it might be worth clicking on the links even though it means I'd always be a few days late responding to any conversations.

Except -- none of the links work. They all go to a generic page with a red header saying "Sorry it seems there was a problem with the link you followed."

I'm reading the plaintext version of the mail they send out. I tried viewing the HTML part of the mail in a browser, and sure enough, those links worked. So I tried comparing the text links with the HTML:

Text version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&amp;t=gde&amp;midToken=AQEqep2nxSZJIg&amp;ek=b2_anet_digest&amp;li=82&amp;m=group_discussions&amp;ts=textdisc-6&amp;itemID=5914453683503906819&amp;itemType=member&amp;anetID=98449
HTML version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&t=gde&midToken=AQEqep2nxSZJIg&ek=b2_anet_digest&li=17&m=group_discussions&ts=grouppost-disc-6&itemID=5914453683503906819&itemType=member&anetID=98449

Well, that's clear as mud, isn't it?

HTML entity substitution

I pasted both links one on top of each other, to make it easier to compare them one at a time. That made it fairly easy to find the first difference:

Text version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&amp;t=gde&amp;midToken= ...
HTML version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&t=gde&midToken= ...

Time to die laughing: they're doing HTML entity substitution on the plaintext part of their email notifications, changing & to &amp; everywhere in the link.

If you take the link from the text email and replace &amp; with &, the link works, and takes you to the specific discussion.

Pagination

Except you can't actually read the discussion. I went to a discussion that had been open for 2 days and had 35 responses, and LinkedIn only showed four of them. I don't even know which four they are -- are they the first four, the last four, or some Facebook-style "four responses we thought you'd like"? There's a button to click on to show the most recent entries, but then I only see a few of the most recent responses, still not the whole thread.

Hooray for the web -- of course, plenty of other people have had this problem too, and a little web searching unveiled a solution. Add a pagination token to the end of the URL that tells LinkedIn to show 1000 messages at once.

&count=1000&paginationToken=
It won't actually show 1000 (or all) responses -- but if you start at the beginning of the page and scroll down reading responses one by one, it will auto-load new batches. Yes, infinite scrolling pages can be annoying, but at least it's a way to read a LinkedIn conversation in order.

Making it automatic

Okay, now I know how to edit one of their URLs to make it work. Do I want to do that by hand any time I want to view a discussion? Noooo!

Time for a script! Since I'll be selecting the URLs from mutt, they'll be in the X PRIMARY clipboard. And unfortunately, mutt adds newlines so I might as well strip those as well as fixing the LinkedIn problems. (Firefox will strip newlines for me when I paste in a multi-line URL, but why rely on that?)

Here's the important part of the script:

import sys, subprocess, gtk

# Grab the URL from the X PRIMARY selection:
primary = gtk.clipboard_get(gtk.gdk.SELECTION_PRIMARY)
if not primary.wait_is_text_available() :
    sys.exit(0)
link = primary.wait_for_text()

# Strip mutt's newlines, undo the entity substitution, and
# add the pagination token:
link = link.replace("\n", "").replace("&amp;", "&") + \
       "&count=1000&paginationToken="
subprocess.call(["firefox", "-new-tab", link])

And here's the full script: linkedinify on GitHub. I also added it to pyclip, the script I call from Openbox to open a URL in Firefox when I middle-click on the desktop.

Now I can finally go back to participating in those discussions.

September 08, 2014

Dot Reminders

I read about cool computer tricks all the time. I think "Wow, that would be a real timesaver!" And then a week later, when it actually would save me time, I've long since forgotten all about it.

After yet another session where I wanted to open a frequently opened file in emacs and thought "I think I made a bookmark for that a while back", but then decided it's easier to type the whole long pathname rather than go re-learn how to use emacs bookmarks, I finally decided I needed a reminder system -- something that would poke me and remind me of a few things I want to learn.

I used to keep cheat sheets and quick reference cards on my desk; but that never worked for me. Quick reference cards tend to be 50 things I already know, 40 things I'll never care about and 4 really great things I should try to remember. And eventually they get buried in a pile of other papers on my desk and I never see them again.

My new system is working much better. I created a file in my home directory called .reminders, in which I put a few -- just a few -- things I want to learn and start using regularly. It started out at about 6 lines but now it's grown to 12.

Then I put this in my .zlogin (of course, you can do this for any shell, not just zsh, though the syntax may vary):

if [[ -f ~/.reminders ]]; then
  cat ~/.reminders
fi

Now, in every login shell (which for me is each new terminal window I create on my desktop), I see my reminders. Of course, I don't read them every time; but I look at them often enough that I can't forget the existence of great things like emacs bookmarks, or diff <(cmd1) <(cmd2).

And if I forget the exact keystroke or syntax, I can always cat ~/.reminders to remind myself. And after a few weeks of regular use, I finally have internalized some of these tricks, and can remove them from my .reminders file.

It's not just for tech tips, either; I've used a similar technique for reminding myself of hard-to-remember vocabulary words when I was studying Spanish. It could work for anything you want to teach yourself.

Although the details of my .reminders are specific to Linux/Unix and zsh, of course you could use a similar system on any computer. If you don't open new terminal windows, you can set a reminder to pop up when you first log in, or once a day, or whatever is right for you. The important part is to have a small set of tips that you see regularly.

September 05, 2014

SVG Working Group Meeting Report — London

The SVG Working Group had a four day Face-to-Face meeting just before The Graphical Web conference in Winchester (UK). The meetings were hosted by Mozilla in their London office.

Here are some highlights of the meeting:

Day 1

Minutes

  • Symbol and marker placement shorthands:

    Map makers use symbols quite extensively. We decided at a previous meeting to add the ‘refX’ and ‘refY’ attributes (from <marker>) to <symbol> so that symbols can be aligned to a particular point on a map without having to do manual position adjustments. We have since been asked to provide ‘shorthand’ values for ‘refX’ and ‘refY’. I proposed adding ‘left’, ‘center’, and ‘right’ to ‘refX’ (defined as 0%, 50%, and 100% of the view box) as well as ‘top’, ‘center’, and ‘bottom’ to ‘refY’. These values follow those used in the ‘transform-origin’ property. We debated the usefulness and decided to postpone the decision until we had feedback from those using SVG for maps (see Day 4).

    For example, to center a symbol at the moment, one has to subtract off half the width and height from the ‘x’ and ‘y’ attributes of the <use> element:

      <symbol id="MySquare" viewBox="0 0 20 20">
        <rect width="100%" height="100%"
    	  style="fill:none;stroke:black;stroke-width:2px"/>
      </symbol>
      <use x="100" y="100" width="100" height="100"
           xlink:href="#MySquare"/>
    

    By using ‘refX’ and ‘refY’ set to ‘center’, one no longer needs to perform the manual calculations:

      <symbol id="MySquare" viewBox="0 0 20 20"
                      refX="center" refY="center">
        <rect width="100%" height="100%"
    	  style="fill:none;stroke:black;stroke-width:2px"/>
      </symbol>
      <use x="150" y="150" width="100" height="100"
                 xlink:href="#MySquare"/>
    
    A square symbol centered in an SVG.

    An example of a square <symbol> centered inside an SVG.

  • Marker and symbol overflow:

    One common ‘gotcha’ in using hand-written markers and symbols is that by default anything drawn outside the marker or symbol viewport is hidden. People sometimes naively draw a marker or symbol around the origin. Since this is the upper-left corner of the viewport, only one quarter of the marker or symbol is shown. We decided to change the default to not hide the region outside the viewport; however, if this is shown to break too much existing content, the change might be reverted (it is possible that some markers/symbols have hidden content outside the viewport).

    Two triangle paths with markers on corners. Only one-fourth of each marker on the left path is shown.

    Example of markers drawn around an origin point. Left: overflow=’hidden’ (default); right: overflow=’visible’.

  • Variable-stroke width:

    Having the ability to vary stroke width along a path is one of the most requested features for SVG. Inkscape has the Live Path Effect ‘Power Stroke’ extension that does just that. However, getting this into a standard is not a simple process. We must deal with all kinds of special cases. The most difficult part will be to decide how to handle line joins. (See my post from the Tokyo meeting for more details.) As a step towards moving this along, we need to decide how to interpolate between points. One method is to use a centripetal Catmull-Rom function. Johan Engelen quickly added this function as an option to Inkscape’s Power Stroke implementation (which he wrote) for us to test.

Day 2

Minutes

  • Path animations:

    In the context of discussing the possibility of having a canonical path decomposition into Bezier curves (for speed optimization) we briefly discussed allowing animation between paths with different structures. Currently, SVG path animations require the start and end paths to have the same structure (i.e. same types of path segments).

  • Catmull-Rom path segments.

    We had a lengthy discussion on the merits of Catmull-Rom path segments. The main advantage of Catmull-Rom paths is that the path goes through all the specified points (unlike Bezier path segments where the path does not go through the handles). There are some disadvantages… adding a new segment changes the shape of the previous segment, the paths tend not to be particularly pretty, and if one is connecting data points, the curves have the tendency to over/under shoot the data. The majority of the working group supports adding these curves although there is some rather strong dissent. The SVG 2 specification already contains Catmull-Rom paths text.

    After discussing the merits of Catmull-Rom path segments we turned to some technical discussions: what exact form of Catmull-Rom should we use, how should start and end segments be specified, how should Catmull-Rom segments interact with other segment types, how should paths be closed?

    Here is a demo of Catmull-Rom curves.
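
    If you'd like to play with the math yourself, here is a little sketch (mine, not from the spec) of evaluating one centripetal Catmull-Rom segment using the Barry-Goldman recursive formulation:

import math

def centripetal_catmull_rom(p0, p1, p2, p3, u, alpha=0.5):
    # Evaluate the segment between p1 and p2 at u in [0, 1].
    # alpha = 0.5 gives the centripetal variant; assumes no two
    # consecutive control points coincide.
    def knot(prev, pa, pb):
        return prev + math.hypot(pb[0] - pa[0], pb[1] - pa[1]) ** alpha

    t0 = 0.0
    t1 = knot(t0, p0, p1)
    t2 = knot(t1, p1, p2)
    t3 = knot(t2, p2, p3)
    t = t1 + u * (t2 - t1)      # map u onto the [t1, t2] knot interval

    def lerp(pa, pb, ta, tb):
        w = (t - ta) / (tb - ta)
        return ((1 - w) * pa[0] + w * pb[0],
                (1 - w) * pa[1] + w * pb[1])

    # Barry-Goldman pyramid: three lerps, then two, then one.
    a1 = lerp(p0, p1, t0, t1)
    a2 = lerp(p1, p2, t1, t2)
    a3 = lerp(p2, p3, t2, t3)
    b1 = lerp(a1, a2, t0, t2)
    b2 = lerp(a2, a3, t1, t3)
    return lerp(b1, b2, t1, t2)

    Note how the curve passes exactly through p1 (at u=0) and p2 (at u=1) -- the property that makes these segments attractive for connecting data points.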

Day 3

Minutes

  • <tref> decision:

    One problem I see with the working group is that it is dominated by browser interests: Opera, Google (both Blink), Mozilla (Gecko), and Adobe (Blink, Webkit, Gecko). (Apple and Microsoft aren’t actively involved with the group although we did have a Microsoft rep at this meeting.) This leaves those using SVG for other purposes sometimes high and dry. Take the case of <tref>. This element is used in the air-traffic control industry to shadow text so it is visible on the screen over multi-color backgrounds. Admittedly, this is not the best way to do this (the new ‘paint-order’ property is a perfect fit for this) but the fact is that it is being used and flight-control software can’t be changed at a moment’s notice. Last year there was a discussion on the SVG email list about deprecating <tref> due to some security issues. From reading the thread, it appeared the conclusion was reached that <tref> should be kept around using the same security model that <use> has.

    Deprecating <tref> came up again a few weeks ago and it was decided to remove the feature altogether and not just deprecate it (unfortunately I missed the call). The specification was updated quickly and Blink removed the feature immediately (Firefox had never implemented it… probably due to an oversight). It has reached the point of no return. It seems that Blink in particular is eager to remove as much cruft as possible… but one person’s cruft is someone else’s essential tool. (<tref> had other uses too, such as allowing localization of Web pages through a server.)

  • Blending on ‘fill’ and ‘stroke’:

    We have already decided to allow multiple paint servers (color, gradient, pattern, hatch) on fills and strokes. It has been proposed that blending be allowed. This would follow the model of the ‘background-blend-mode’ property. (Blending is already allowed between various elements using the ‘mix-blend-mode’ property, available in Firefox (nightly), Chrome, and the trunk version of Inkscape.)

  • CSS Layout Properties:

    The SVG attributes: ‘x’, ‘y’, ‘cx’, ‘cy’, ‘r’, ‘rx’, ‘ry’ have been promoted to properties (see SVG Layout Properties). This allows them to be set via CSS. There is an experimental implementation in Webkit (nightly). It also allows them to be animated via CSS animations.

A pink square centered in SVG if attributes supported, nothing otherwise.

A test of support of ‘x’, ‘y’, ‘width’, and ‘height’ as properties. If supported, a pink square will be displayed on the center of the image.

Day 4

Minutes

  • Shared path segments (Superpaths):

    Sharing path segments between paths is quite useful. For example, the boundary between two countries could be given as one sub-path, shared between the paths of the two countries. Not only does this reduce the amount of data needed to describe a map but it also allows the renderer to optimize the aliasing between the regions. There is an example polyfill available.

    We discussed various syntax issues. One requirement is the ability to specify the direction of the inserted path. We settled for directly referencing the sub-path as d=”m 20,20 #subpath …” or d=”m 20,20 -#subpath…”, the latter for when the subpath should be reversed. We also decided that the subpath should be inserted into the path before any other operation takes place. This would nominally exclude having separate properties for each sub-path but it makes implementation easier.

  • Here, MySubpath is shared between two paths:

      <path id="MySubpath" d="m 150,80 c 20,20 -20,120 0,140"/>
      <path d="m 50,220 c -40,-30 -20,-120 10,-140 30,-20 80,-10
                       90,0 #MySubpath c 0,20 -60,30 -100,0 z"
    	style="fill:lightblue" />
      <path d="m 150,80 c 20,-14 30,-20 50,-20 20,0 50,40 50,90
                   0,50 -30,120 -100,70 -#MySubpath z"
    	style="fill:pink" />
    

    This SVG code would render as:

    Two closed paths sharing a common section.

    The two closed paths share a common section.

  • Stroke position:

    An often requested feature is to be able to position a stroke with some percentage inside or outside a path. We were going to punt this to a future edition of SVG but there seems to be quite a demand. The easiest way to implement this is to offset the path and then stroke that (remember, one has to be able to handle dashes, line joins, and end caps). If we can come up with a simple algorithm to offset a stroke we will add this to SVG 2. This is actually a challenging task as an offset of a Bezier curve is not a Bezier… thus some sort of approximation must be used. The Inkscape ‘Path->Linked Offset’ is one example of offsetting. So is the Inkscape Power Stroke Live Path Effect (available in trunk).

  • Symbol and marker placement shorthands, revisited:

    After feedback from mappers, we have decided to include the symbol and marker placement shorthands: ‘left’, ‘center’, ‘right’, ‘top’, and ‘bottom’.

  • Units in path data:

    Currently all path data is in User Units (pixels if untransformed). There is some desire to have the ability to specify a unit in the path data. Personally, I think this is mostly useless, especially as units (cm, mm, inch, etc.) are useless as there is no way to set a preferred pixel to inch ratio (and never will be). The one unit that could be useful is percent. In any case, we will be investigating this further.

Lots of other technical and administrative topics were discussed: improved DOM, embedding SVG in HTML, specification annotations, testing, etc.

September 04, 2014

Something Wicked This Way Comes...

I've been working on something, and I figured that I should share it with anyone who actually reads the stuff I publish here.

I originally started writing here as a small attempt at bringing tutorials for doing high-quality photography using F/OSS to everyone. So far, it's been amazing and I've really loved meeting and getting to know many like-minded folks.

I'm not leaving. Having just re-read that previous paragraph makes it sound like I am. I'm not.

I am, however, working on something new that I'd like to share with you, though. I've called it:


I've been writing to hopefully help fill in some gaps in high-quality photographic processes using all of the amazing F/OSS tools that so many great groups have built, and now I think it's time to move that effort into its own home.

F/OSS photography deserves its own site focused on demonstrating just how amazing these projects are and how fantastic the results can be when using them.

I'm hoping pixls.us can be that home. Pixel-editing for all of us!

I've been building the site in my spare time over the past couple of weeks (I'm building it from scratch, so it's going a little slower than just slapping up a wordpress/blogger/CMS site). I want the new site to focus on the content above all else, and to make it as accessible and attractive as possible for users. I also want to keep the quality of the content as high as possible.

If anyone would like to contribute anything to help out: expertise, artwork, images, tutorials and more, please feel free to contact me and let me know. I'm in the process of porting my old GIMP tutorials over to the new site (and probably updating/re-writing a bunch of it as well), so we can have at least some content to start out with.

If you want to follow along with my progress while I build out the site, I'm blogging about it on the site itself at http://pixls.us/blog. As mentioned in the comments, I actually do have an RSS feed for the blog posts, I just hadn't linked to it yet (working on it quickly). The location (should your feedreader not pick it up automatically now) is: http://pixls.us/blog/feed.xml.

If you happen to subscribe in a feedreader, please let me know if anything looks off or broken so I can fix it! :)

Things are in a constant state of flux at the moment (did I mention that I'm still building out the back end?), so please bear with me. Please don't hesitate to let me know if something looks strange, or to send along any suggestions!

When it's ready to go, I'm going to ask for everyone's help to get the word out: link to it, talk about it, etc. The sooner I can get it ready to go, the sooner we can help folks find out just how great these projects are and what they can do with them!

Excelsior!

September 03, 2014

(Locally) Testing ansible deployments

I’ve always felt my playbooks were undertested. I knew about a possible solution -- spinning up new OpenStack instances with the ansible nova module -- but felt it was too complex to be worth implementing. Now I’ve found a quicker way to test playbooks, by using Docker.

In principle, all my test does is:

  1. create a docker container
  2. create a copy of the current ansible playbook in a temporary directory and mount it as a volume
  3. inside the docker container, run the playbook

This is obviously not perfect, since:

  • running a playbook locally vs connecting via ssh can be a different beast to test
  • can become resource intensive if you want to test different scenarios represented as docker images.

There are possibly more caveats, but for my small-scale needs this is a workable solution so far.

Find the code on github if you’d like to have a look. Improvements welcome!
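
To give a flavor of the approach, here is a condensed sketch in Python; the image name and the site.yml entry point are placeholders (the real code on github is structured differently), and the docker image is assumed to have ansible preinstalled.

import os, shutil, subprocess, tempfile

def test_playbook(playbook_dir, image="ansible-test"):
    # Stage a copy of the playbook in a temporary directory,
    # so the test can't touch the original:
    workdir = tempfile.mkdtemp(prefix="ansible-test-")
    staged = os.path.join(workdir, "playbook")
    shutil.copytree(playbook_dir, staged)
    # Mount the copy as a volume and run the playbook locally
    # inside a throwaway container:
    subprocess.check_call([
        "docker", "run", "--rm",
        "-v", staged + ":/playbook",
        image,
        "ansible-playbook", "-i", "localhost,", "-c", "local",
        "/playbook/site.yml",
    ])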

 


September 02, 2014

Using strace to find configuration file locations

I was using strace to figure out how to set up a program, lftp, and a friend commented that he didn't know how to use it and would like to learn. I don't use strace often, but when I do, it's indispensable -- and it's easy to use. So here's a little tutorial.

My problem, in this case, was that I needed to find out what configuration file I needed to modify in order to set up an alias in lftp. The lftp man page tells you how to define an alias, but doesn't tell you how to save it for future sessions; apparently you have to edit the configuration file yourself.

But where? The man page suggested a couple of possible config file locations -- ~/.lftprc and ~/.config/lftp/rc -- but neither of those existed. I wanted to use the one that already existed. I had already set up bookmarks in lftp and it remembered them, so it must have a config file already, somewhere. I wanted to find that file and use it.

So the question was, what files does lftp read when it starts up? strace lets you snoop on a program and see what it's doing.

strace shows you all system calls being used by a program. What's a system call? Well, it's anything in section 2 of the Unix manual. You can get a complete list by typing: man 2 syscalls (you may have to install developer man pages first -- on Debian that's the manpages-dev package). But the important thing is that most file access calls -- open, read, chmod, rename, unlink (that's how you remove a file), and so on -- are system calls.

You can run a program under strace directly:

$ strace lftp sitename
Interrupt it with Ctrl-C when you've seen what you need to see.

Pruning the output

And of course, you'll see tons of crap you're not interested in, like rt_sigaction(SIGTTOU) and fcntl64(0, F_GETFL). So let's get rid of that first. The easiest way is to use grep. Let's say I want to know every file that lftp opens. I can do it like this:

$ strace lftp sitename |& grep open

I have to use |& instead of just | because strace prints its output on stderr instead of stdout.

That's pretty useful, but it's still too much. I really don't care to know about lftp opening a bazillion files in /usr/share/locale/en_US/LC_MESSAGES, or libraries like /usr/lib/i386-linux-gnu/libp11-kit.so.0.

In this case, I'm looking for config files, so I really only want to know which files it opens in my home directory. Like this:

$ strace lftp sitename |& grep 'open.*/home/akkana'

In other words, show me just the lines that have the word "open" followed later by the string "/home/akkana".

Digression: grep pipelines

Now, you might think that you could use a simpler pipeline with two greps:

$ strace lftp sitename |& grep open | grep /home/akkana

But that doesn't work -- nothing prints out. Why? Because grep buffers its output when it's writing to a pipe rather than a terminal, so when you pipe grep | grep, the second grep won't see anything until the first grep has accumulated quite a lot of output. (This comes up a lot with tail -f as well.) You can avoid that with

$ strace lftp sitename |& grep --line-buffered open | grep /home/akkana
but that's too much to type, if you ask me.

Back to that strace | grep

Okay, whichever way you grep for open and your home directory, it gives:

open("/home/akkana/.local/share/lftp/bookmarks", O_RDONLY|O_LARGEFILE) = 5
open("/home/akkana/.netrc", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/home/akkana/.local/share/lftp/rl_history", O_RDONLY|O_LARGEFILE) = 5
open("/home/akkana/.inputrc", O_RDONLY|O_LARGEFILE) = 5
Now we're getting somewhere! The file where it's getting its bookmarks is ~/.local/share/lftp/bookmarks -- and I probably can't use that to set my alias.

But wait, why doesn't it show lftp trying to open those other config files?

Using script to save the output

At this point, you might be sick of running those grep pipelines over and over. Most of the time, when I run strace, instead of piping it through grep I run it under script to save the whole output.

script is one of those poorly named, ungoogleable commands, but it's incredibly useful. It runs a subshell and saves everything that appears in that subshell, both what you type and all the output, in a file.

Start script, then run lftp inside it:

$ script /tmp/lftp.strace
Script started on Tue 26 Aug 2014 12:58:30 PM MDT
$ strace lftp sitename

After the flood of output stops, I type Ctrl-D or Ctrl-C to exit lftp, then another Ctrl-D to exit the subshell script is using. Now all the strace output was in /tmp/lftp.strace and I can grep in it, view it in an editor or anything I want.

So, what files is it looking for in my home directory, and why don't they show up as open attempts?

$ grep /home/akkana /tmp/lftp.strace

Ah, there it is! A bunch of lines like this:

access("/home/akkana/.lftprc", R_OK)    = -1 ENOENT (No such file or directory)
stat64("/home/akkana/.lftp", 0xbff821a0) = -1 ENOENT (No such file or directory)
mkdir("/home/akkana/.config", 0755)     = -1 EEXIST (File exists)
mkdir("/home/akkana/.config/lftp", 0755) = -1 EEXIST (File exists)
access("/home/akkana/.config/lftp/rc", R_OK) = 0

So I should have looked for access and stat as well as open. Now I have the list of files it's looking for. And, curiously, it creates ~/.config/lftp if it doesn't exist already, even though it's not going to write anything there.

So I created ~/.config/lftp/rc and put my alias there. Worked fine. And I was able to edit my bookmark in ~/.local/share/lftp/bookmarks later when I had a need for that. All thanks to strace.
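
If you end up doing this kind of grepping often, it's easy to script. Here's a quick Python filter of my own (the script name and the exact list of syscalls are just what happened to work for me) that pulls file-access calls mentioning a given path out of a saved strace log:

import re, sys

# The syscalls that take a filename argument:
FILE_CALLS = re.compile(r'^(open|openat|access|stat|stat64|lstat|lstat64|mkdir)\(')

def file_accesses(logfile, path_fragment):
    with open(logfile) as log:
        for line in log:
            if FILE_CALLS.match(line) and path_fragment in line:
                yield line.rstrip()

if __name__ == '__main__':
    # e.g. python stracefiles.py /tmp/lftp.strace /home/akkana
    for line in file_accesses(sys.argv[1], sys.argv[2]):
        print(line)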