April 21, 2015

Next Kickstarter Date and Layer Styles Update


While we continue to work on bugs for the next release (2.9.3), we have also been planning and working on the next Kickstarter!

We have been gathering your feedback across the forum, social media, and our chat room (IRC). We want to make the next feature release (3.1, planned for the end of this year) the best possible. We are planning on launching the next Kickstarter on May 4. Two weeks! We have two big projects in mind – as well as some exciting stretch goals. The first project is performance improvements. This includes speeding up the application and painting with seriously large brushes. Creating and working with large canvas sizes will also be much more responsive.

The second big goal is adding an animation system. This will help artists create sprite sheets for their game jams, animatics for story boarding, and potentially even produce an entire animated film! While this new system won’t be as feature rich as dedicated animation software, it will be substantially more powerful than Photoshop’s animation tools. It will include things like onion skinning and tweenable properties. We will provide more details in the coming weeks.

Our target goal for this Kickstarter is going to be €20,000 (about $21,000). With everyone’s help, we think this is attainable. Like the last Kickstarter, the money will cover a developer’s salary. For every €1,500 (about $1,600) we raise over the goal, we will add a stretch goal.

Some of the stretch goals will be further animation features, others will be workflow improvements, new brush features, and more. There are too many stretch goals to list here! Like the last Kickstarter, what gets included will be voted on by the Kickstarter backers.

We will let you know when the Kickstarter launches so you can get all of the details. You can always sign up for the mailing list (at the bottom of this post) to stay up to date with all the news.

Layer Styles Update

Adding layer styles to Krita is a really BIG task. It turned out to be much more work than we planned for, which is why it hasn’t made it into the Krita 2.9 release yet. There is still work to be done, but we really want to get this into your hands so you can start playing around with it. Starting with the next release (2.9.3), we will be including layer styles in Krita. While there are some features we are still working on, we want you to play around with what we have. We will keep working on it, so rest assured that the bugs and kinks will be ironed out in the future. Until then, check out this teaser video Wolthera made:

April 20, 2015

My latest ten months working on G’MIC

A few days ago, I released a new version of G’MIC (GREYC’s Magic for Image Computing, numbered 1.6.2.0). This is a free and generic image processing framework I’ve been developing since 2008. For this particular occasion, I thought it would be good (for me) to write a quick summary of the features I’ve been working on these last ten months (since the last release of G’MIC in the 1.5.x.x branch). Hopefully, you’ll be interested too! This project takes a large amount of my free time (basically every weekend and evening, except Wednesdays, table tennis time :) ), so it’s probably good for me to take a little break and analyze what has been done recently around the G’MIC project.

I will summarize the important features added to G’MIC recently, which are: various color-related filters, a foreground extraction algorithm, new artistic filters, and other (more technical) cool features. As you will see, this quick summary has ended up not so quick, so you have been warned. Just before I start, I’d like to thank Patrick David again a thousand times for his proofreading. Kudos Pat! And now, let’s start!

1. The G’MIC Project: Context and Presentation

The G’MIC project was born in August 2008, in the IMAGE Team of the GREYC laboratory (a public research unit located in Caen, France, affiliated with the CNRS, the largest public research institute in France), a team to which I belong. Since then, I’ve developed it apace. G’MIC is open-source software, distributed under the CeCILL license (a French GPL-compatible license).


Fig.1.1 Mascot and logo of the G’MIC project, an open-source framework for image processing.

What G’MIC proposes is basically several different user interfaces to manipulate generic image data (sequences or still 2D/3D images, with an arbitrary number of channels, float-valued, etc., so definitely not only 2D color images). To date, the most popular G’MIC interface is its plug-in for GIMP.


Fig.1.2. Overview of the G’MIC plug-in for GIMP.

But G’MIC also provides a command-line interface (similar to what ImageMagick has, but more flexible, perhaps?), and a web interface, G’MIC Online. Other ways of using G’MIC exist, such as ZArt, a Qt-based image stream processor, a nice plug-in for Krita (developed by Lukas Tvrdy), or some features implemented in the recent (and nice!) software Photoflow. These latter interfaces are still somewhat confidential, but not for long, I guess.

Actually, all of these interfaces are based on libgmic. This is the C++, portable, thread-safe and multi-threaded library (in particular through the use of OpenMP) which implements all of the G’MIC core image processing routines and structures. It also embeds its own script language interpreter, so that advanced users can easily write and integrate their own customized image processing functions into G’MIC. Today, G’MIC already understands more than 900 different commands, all configurable, for a libgmic library file that takes a bit less than 5 MB. The available commands cover a wide range of the image processing field, with algorithms for the manipulation of image geometry and colors, for image filtering (denoising, sharpening with spectral, variational or non-local methods, …), for image registration and motion estimation, for the drawing of graphical primitives, for image segmentation and contour extraction, for 3D object rendering, artistic filters, and so on…

The set of tools G’MIC provides is really useful, both for converting, visualizing and exploring image data, and for creating and applying complex image processing pipelines. In my opinion, this is one piece of free software that image processing savvy folks should have tried at least once. For me, at least, it is in everyday use. :)

2. New G’MIC features for color processing

So, here is some of the stuff I’ve added to G’MIC recently, related to the manipulation (creation or modification) of the colors in an image.

2.1. Color curves in an arbitrary color space

The concept of color curves is well known to any artist or photographer who wishes to modify the colors of their images. Basically, a color curve tool lets you define a continuous 1D function f:[0,255] -> [0,255] and apply it to each R,G,B component of your image. Each original red, green or blue value x of a pixel (assumed to be in the range 0..255) is then mapped to a new value f(x) (in 0..255 too). To define these transfer functions (one for each of the R,G,B components), the user classically places a set of key-points that are interpolated by spline functions.
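To make this concrete, here is a minimal Python sketch of such a transfer function (illustrative code only, not taken from G’MIC; plain piecewise-linear interpolation stands in for the splines):

```python
# Build a transfer function f:[0,255] -> [0,255] from user key-points.
# G'MIC interpolates key-points with splines; piecewise-linear
# interpolation is used here to keep the sketch short.

def make_curve(keypoints):
    """keypoints: list of (x, y) pairs, both coordinates in 0..255."""
    pts = sorted(keypoints)

    def f(x):
        if x <= pts[0][0]:
            return pts[0][1]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x <= x1:
                t = (x - x0) / (x1 - x0)          # position inside the segment
                return round(y0 + t * (y1 - y0))  # linear interpolation
        return pts[-1][1]

    return f

# An S-shaped contrast curve: darken shadows, brighten highlights.
curve = make_curve([(0, 0), (64, 40), (192, 215), (255, 255)])
print(curve(64), curve(128))  # a key-point is reproduced exactly; a midtone is interpolated
```

The same function is then applied to every pixel of the chosen channel.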

But what happens if you want to define and apply color curves in a color space other than RGB? Well… mostly nothing, as most image retouching tools (GIMP included) unfortunately only offer color curves for RGB.

So, one of the things I’ve done recently in G’MIC has been to implement an interactive color curves filter that works in color spaces other than RGB, namely CMY, CMYK, HSI, HSL, HSV, Lab, Lch and YCbCr. The new command -x_color_curves is in charge of doing that (using the CLI interface). Plug-in users may use it via the filter Colors / Curves [interactive]. Here, using the plug-in is even nicer, because you can easily save your favorite curves as new Favorite filters, and apply the same color curves again and again on hundreds of images afterwards. Here is how it looks:


Fig.2.1. Interactive definition of color curves in the Lab color space and application on a color image. On the left, the three user-defined curves for each of the components L (lightness), a and b (chrominances). On the right, the corresponding color transformation it creates on the color image.
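Applying a curve in another color space is just a round-trip conversion wrapped around the per-channel function. Here is a hedged Python sketch, using HSV from the standard library as a stand-in (a proper Lab round-trip needs a color-management library; this is illustrative code, not G’MIC’s):

```python
import colorsys

# Sketch: apply a tone curve to the V (value) channel only, leaving hue and
# saturation untouched -- the same convert / apply / convert-back round-trip
# G'MIC performs for its supported color spaces.

def apply_curve_in_hsv(rgb, curve):
    """rgb: (r, g, b) in 0..255; curve: function on 0..255."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    v = curve(round(v * 255)) / 255.0           # the curve acts on V only
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
    return tuple(round(c * 255) for c in (r2, g2, b2))

brighten = lambda x: min(255, x + 40)
print(apply_curve_in_hsv((100, 50, 50), brighten))  # -> (140, 70, 70)
```

Note how hue and saturation survive the brightening, which is exactly why working in a perceptual space gives nicer results than curving R, G and B independently.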

I’ve also recorded a quick video to show the action of this filter live, using it from the G’MIC plug-in for GIMP:

2.2. Comics colorization

I also had the chance to talk with David Revoy, a talented French illustrator who is the man behind the nice Pepper & Carrot webcomic (among other great things). He told me that the process of comics colorization is a really boring and tedious task, even using a computer, as he explains on his blog. He also told me that there were some helpers for colorizing comics, but all of them proprietary software. Nice! There was obviously something worth trying for open-source graphics here.

So I took my keyboard and started coding a color interpolation algorithm that I hoped would be smart enough for this task. The idea is very simple: instead of forcing the artist to do all the colorization work by himself, we just ask him to put some colored key-points here and there, inside the different image regions to fill in. Then, the algorithm tries to guess a probable colorization of the drawing, by analyzing the contours in the image and interpolating the given colored key-points with respect to these contours. Revoy and I also discussed with Timothée Giet (on IRC, channel #krita :)), another French artist, what a nice “minimal” interface could look like to make the process of placing these colored key-points comfortable enough for the artist. The result of these discussions is the new G’MIC command -x_colorize and the corresponding filter Black & White / Colorize [interactive] in the G’MIC plug-in. Apparently, this seemed to be good enough to do the job!
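As a toy model of the idea (the real -x_colorize algorithm is considerably smarter, e.g. it weights the interpolation by contour strength), here is a multi-source flood fill that propagates key-point colors but never crosses a line-art pixel:

```python
from collections import deque

# Toy contour-aware color interpolation: each colored key-point floods
# outward but never crosses an "ink" pixel ('#'), so every enclosed region
# ends up with the color of the nearest key-point inside it.

def colorize(art, keypoints):
    """art: list of strings ('#' = line, '.' = empty);
    keypoints: {(row, col): color}."""
    h, w = len(art), len(art[0])
    color = dict(keypoints)
    queue = deque(color)
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w
                    and art[nr][nc] != '#' and (nr, nc) not in color):
                color[(nr, nc)] = color[(r, c)]   # inherit the seed's color
                queue.append((nr, nc))
    return color

art = ["....#....",
       "....#....",
       "....#...."]
filled = colorize(art, {(1, 1): "red", (1, 7): "blue"})
print(filled[(0, 0)], filled[(2, 8)])  # the '#' wall separates the two colors
```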

From what I’ve seen, this has raised quite a bit of interest among the users of Krita (not so surprising, as there are many comics creators among them!). Maybe I’m wrong, but I feel this is one of the reasons the Krita team decided to put some additional effort into their own G’MIC plug-in (which is on its way to becoming an awesome piece of code IMHO, probably even better than the current G’MIC plug-in for GIMP I’ve done).

Here is a quick illustration of how this works for real. The images below are borrowed from David Revoy’s web site. He has already written very nice articles about this particular feature, so consider reading them for more details.


Fig.2.2. Using G’MIC for comics colorization. Step 1: Open your lineart image (image borrowed from David Revoy’s web site).


Fig.2.3. Using G’MIC for comics colorization. Step 2: Place some colored key-points and let the colorization algorithm interpolate the colors for you (image borrowed from David Revoy’s web site).

I believe Revoy uses this colorization algorithm regularly for his creations (in particular for Pepper & Carrot), as illustrated in Fig.2.2-2.3. One can clearly see the different colored key-points put on the image and the outcome of the algorithm, all done inside the G’MIC plug-in for Krita. After this first colorization step, the artist may add shadows and lights on the flat color regions generated by the algorithm. The nice thing is that he can easily work on each color region separately, because it is easy to select exactly one single flat color in the output color layer. And for those who prefer to work with more layers, the algorithm is able to output multiple layers too, one for each reconstructed color region.

So, again, thanks a lot to David Revoy and Timothée Giet for the fruitful discussions and their feedback. We now have a very cool comics colorization tool in free software. This is the kind of academic / artistic collaboration I’m fond of.

Below you can see a small video I’ve made that shows how this filter runs inside the G’MIC plug-in for GIMP:

2.3. B&W picture colorization

But could this also work for old B&W photographs? The answer is yes! By slightly modifying the colorization algorithm, I’ve been able to propose a way to reconstruct the color chrominance of a B&W image in exactly the same way, from user-defined colored key-points. This is illustrated in the figure below, with the colorization of an old B&W portrait. The pixel-by-pixel content of a photograph being obviously more detailed than a comic, we often need to put more key-points to get an acceptable result. So, I’m not sure this speeds up the colorization process that much in this case, but still, this is cool stuff. These two colorization techniques are available from the same filter, namely Black & White / Colorize [interactive].
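A one-pixel sketch of the underlying idea (illustrative Python with an HLS decomposition for simplicity, not the actual algorithm): the gray value of the photograph fixes the lightness, and the user's key-point contributes only hue and saturation:

```python
import colorsys

# Chrominance reconstruction in miniature: the B&W photo provides L,
# the user's key-point color provides only H and S. G'MIC propagates
# the chrominance across the whole image first; one pixel is enough here.

def recolor(gray, key_rgb):
    """gray: 0..255 luminance; key_rgb: user-chosen color (r, g, b) in 0..255."""
    h, _l, s = colorsys.rgb_to_hls(*(c / 255.0 for c in key_rgb))
    r, g, b = colorsys.hls_to_rgb(h, gray / 255.0, s)  # lightness from the photo
    return tuple(round(c * 255) for c in (r, g, b))

print(recolor(128, (200, 30, 30)))  # a mid-gray pixel takes on a red hue
```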


Fig.2.4. Colorization of an old B&W photograph with the G’MIC colorization algorithm.
(Ugh, old man, sorry for the improbable color of your hair, which makes you look like Kim Kardashian…)

2.4. Color transfer

Here, what I call « color transfer » is the process of modifying the colors of an image A by replacing them with the colors of another image B (the reference image), so that the modified image A’ gets the same “color ambiance” as the reference image B, all in a fully automatic way. This is an ill-posed problem, quite complex to solve, and many scientific papers have already been published on this topic (like this one, for instance). The main difficulty consists in generating a new image A’ that keeps a “natural” aspect, without creating synthetic-looking flat color areas or, on the contrary, strong color discontinuities that are not present in the original image A. In short, this is far from trivial to solve.
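For intuition, here is a classic, much cruder baseline than our algorithm: per-channel quantile matching, where the k-th brightest value of A is replaced by the k-th brightest value of B. It shows what "imposing B's color distribution on A" means, and also why naive approaches create the artifacts mentioned above:

```python
# Per-channel quantile matching: A keeps its pixel ordering but takes
# its values from B's sorted distribution. Only a baseline sketch; the
# actual G'MIC algorithm is far more careful about natural-looking output.

def match_channel(a, b):
    """a, b: flat lists of values for one channel (same length here)."""
    order = sorted(range(len(a)), key=lambda i: a[i])  # rank of each pixel of A
    b_sorted = sorted(b)
    out = [0] * len(a)
    for rank, i in enumerate(order):
        out[i] = b_sorted[rank]                        # k-th value of A <- k-th of B
    return out

a = [10, 200, 50, 120]
b = [0, 64, 128, 255]
print(match_channel(a, b))  # -> [0, 255, 64, 128]
```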

I’ve worked on this topic recently (with one of my colleagues, Julien Rabin), and we’ve designed a pretty cool algorithm for transferring colors from one image to another. It’s not perfect of course, but it is really a good start. I’ve implemented the algorithm and put it in G’MIC, available from the plug-in filter Colors / Transfer colors [advanced]. This filter requires several input layers, one being the reference image B containing the colors to transfer to the other layers. Here is how it looks when run from the G’MIC plug-in for GIMP, with the original color image (on the left in the preview window), the reference image (bottom-left) and the filter outcome (on the right).


Fig.2.5. Overview of the new Color transfer filter in the G’MIC plug-in for GIMP.

Sometimes, this filter gives awesome results! Two color transfer examples are illustrated below. Of course, don’t expect it to work nicely in pathological cases (such as transferring colors from a colorful image to a monochrome one). But it is no lie to say it already works quite well. And best of all, it is quite fast to render.



Fig.2.6. Two different color transfer examples from a reference picture (middle) to a color image (left). The images on the right show the outcome of the G’MIC color transfer algorithm.

I’ve also recorded a quick video tutorial that shows how these kinds of results can be obtained easily from the G’MIC plug-in for GIMP. We can also imagine other uses for this interesting new algorithm, like homogenizing the colors between successive frames of a video, or ensuring the color stability of a stereoscopic pair, for instance. Here is the video:

2.5. Website for analog film emulation

I have the pleasure of being friends with Pat David, an American photographer who is very active in the libre graphics community, sharing his experience and tutorials on his must-read blog. I met him at LGM’2014 in Leipzig, and honestly, many useful G’MIC filters have been suggested or improved by him. In particular, all the filters in the Film emulation category could not have been done without his work on the design of color transfer functions (a.k.a. CLUTs). Below is a figure that shows a small subset of the 300+ color transformations concocted by Pat that have been made available in G’MIC.


Fig.2.7. Overview of some analog film emulation outcomes, available in G’MIC.
(This image comes from Pat David’s blog)

Once added (two years ago), these filters raised a lot of interest in the photo retouching community, and have been re-implemented in RawTherapee. To make these filters even more accessible, we have recently set up a web page dedicated to Film Emulation, where you can see and compare all the available presets and download the corresponding CLUT files. Note that you can also find the G’MIC Film Emulation filters on the G’MIC Online web site, and try them live directly in your web browser. Here again, analog film emulation was somehow a missing feature in the libre software world, and the G’MIC infrastructure made it easy to fill this gap (who needs DxO FilmPack anymore? ;)).
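Under the hood, a film-emulation preset boils down to applying a color look-up table. Real CLUTs are 3D (RGB to RGB); the hypothetical 1D per-channel version below keeps the sketch short (illustrative Python, with a made-up 5-entry LUT):

```python
# Apply a 1D LUT sampled at equal steps over 0..255, with linear
# interpolation between the samples -- the per-channel analogue of the
# 3D CLUTs used by the film-emulation presets.

def apply_lut(value, lut):
    """value in 0..255; lut: list of output values sampled at equal steps."""
    n = len(lut) - 1
    pos = value / 255 * n          # fractional position in the table
    i = min(int(pos), n - 1)
    t = pos - i
    return round(lut[i] * (1 - t) + lut[i + 1] * t)

# A hypothetical 5-entry "faded" look: lifted blacks, compressed whites.
lut = [30, 80, 128, 180, 230]
print([apply_lut(v, lut) for v in (0, 128, 255)])  # -> [30, 128, 230]
```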

3. An algorithm for foreground/background extraction.

In photo retouching, it is not uncommon to differentiate the processing of foreground and background objects. So one often needs to select the foreground object first (using the “lasso” tool for instance) before doing anything else. For objects with a complex geometry, this is a tedious task. G’MIC now integrates a user-guided segmentation algorithm to speed up this foreground extraction work. The G’MIC command -x_segment (using the CLI interface) and the plug-in filter Contours / Extract foreground [interactive] let the user play with this algorithm.

This works exactly the same way as the comics colorization filter, except that the colored key-points are replaced by labelled key-points, with labels being either “foreground” or “background”. Then, the extraction algorithm interpolates those labels over the whole image, taking care of the contours, and deduces a foreground/background binary map. Thus, the filter is able to decompose an image into two layers, one with the foreground pixels only, and the other with the remaining (background) pixels. The figure below shows a typical use of this filter: starting from a color image of a flower (top-left), the user puts a few key-points on it (top-right), and the extraction algorithm generates the two background / foreground layers on the last row, from this very sparse set of points.

Doing this takes only a few seconds, while the same operation done manually would take much, much longer (the flower’s contour is not simple to cut out).


Fig.3.1. Foreground / background user-guided decomposition of an image, with the filter « Contours / Extract foreground » of G’MIC.
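A toy version of the label-interpolation step (deliberately ignoring contours, which is precisely the part -x_segment adds on top): a multi-source BFS that gives every pixel the label of its nearest key-point:

```python
from collections import deque

# Multi-source BFS label propagation: every pixel receives the label
# ("fg" or "bg") of its nearest labelled key-point. The real algorithm
# additionally steers the propagation along image contours.

def segment(w, h, seeds):
    """seeds: {(x, y): 'fg' or 'bg'}; returns a label for each pixel."""
    label = dict(seeds)
    queue = deque(seeds)
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in label:
                label[(nx, ny)] = label[(x, y)]   # nearest seed wins
                queue.append((nx, ny))
    return label

# A 6x1 "image" with a background seed on the left, foreground on the right.
labels = segment(6, 1, {(0, 0): "bg", (5, 0): "fg"})
print([labels[(x, 0)] for x in range(6)])
```

Splitting the image into the two output layers is then just a matter of masking by this label map.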

After that, it is quite easy to process the background and foreground separately. Here, for instance, I’ve just changed the hue and saturation of the foreground layer a little bit, in order to get a more purple flower without changing the colors of the background. Selecting objects in images is such a common operation in photo retouching that I’ll let you imagine how many cases this filter can be useful in.


Fig.3.2. The same image, after hue and saturation changes applied to the extracted foreground only.

I’ve made another G’MIC video tutorial to illustrate the whole process in real-time, using the G’MIC plug-in for GIMP:

4. Some new artistic filters.

G’MIC has always been home to dozens of artistic filters. Below is an overview of some of the newest additions.

4.1. Engrave effect.

The filter Black & White / Engrave tries to transform an image into an etching. The high number of parameters lets you precisely tune the rendering done by the algorithm, and many different effects may be obtained with this single filter.

Fig.4.1. Overview of the « Engrave » filter, available in the G’MIC plug-in for GIMP.

What I find particularly interesting with this filter is the ability it has (with properly chosen parameters) to convert photographs into comics-like renderings. This is illustrated with the two images below.


Fig.4.2. Photo to Comics conversion, using the Engrave filter from G’MIC.


Fig.4.3. Another example of photo to Comics conversion, with the Engrave filter from G’MIC.

I’ve also recorded a video tutorial to show the different steps used to generate this kind of result. It takes only a few seconds to achieve (maybe minutes for the slowest of you!).

4.2. Delaunay triangulation.

A Delaunay triangulation algorithm has been added to G’MIC (through the -delaunay3d command). A new filter, namely Artistic / Polygonize [delaunay] in the plug-in for GIMP, uses it to transform color images into nice geometric abstractions. The color of each generated triangle can be random, constant, or derived from the pixels under the triangle. The triangulation tries to stick to the image contours (more or less, depending on the chosen parameters).
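The "color derived from the pixels under the triangle" option can be sketched as follows (illustrative Python; the Delaunay triangulation itself is omitted): each triangle is filled with the mean value of the pixels it covers, found with a cross-product point-in-triangle test:

```python
# Fill each triangle with the mean of the pixels it covers.
# inside() accepts points whose three edge cross products share a sign,
# so both vertex orderings (clockwise / counter-clockwise) work.

def inside(p, a, b, c):
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (d1 >= 0 and d2 >= 0 and d3 >= 0) or (d1 <= 0 and d2 <= 0 and d3 <= 0)

def triangle_mean(image, a, b, c):
    """image: dict {(x, y): gray value}; a, b, c: triangle vertices."""
    covered = [v for (x, y), v in image.items() if inside((x, y), a, b, c)]
    return sum(covered) / len(covered) if covered else None

# A 10x10 gradient image: brightness grows with x.
img = {(x, y): x * 10 for x in range(10) for y in range(10)}
print(triangle_mean(img, (0, 0), (4, 0), (0, 4)))
```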


Fig.4.4. Overview of the Polygonize [delaunay] filter, and its application on a color image to simulate a stained glass effect.

Applying this filter on image sequences also gives nice results. I wonder if importing these triangulated images into Blender could be of any interest (or even better, a full G’MIC plug-in for Blender, maybe? :p )


Fig.4.5. Applying the Delaunay triangulation on an image sequence.

4.3. Other artistic filters.

As you can see, G’MIC is a very active project, and the number of available artistic filters increases every day. I can’t describe in detail all the new filters added in the last ten months, and because a picture is worth a thousand words, you will find below a quick overview of some of them.


Fig.4.6. Overview of the filter Arrays & Tiles / Grid [hexagonal], maybe useful for wargame map makers?


Fig.4.7. Overview of the filter Patterns / Crystal, which converts your images into colorful crystals.


Fig.4.8. Overview of the filter Rendering / Lightning, which draws lightning bolts in your images.


Fig.4.9. Overview of the filter Arrays & Tiles / Ministeck, which transforms your images into Ministeck-like representations (Ministeck is a children’s game).


Fig.4.10. Overview of the filter Sequences / Spatial transition, which takes several layers as input and generates an image sequence corresponding to a custom transition between each pair of consecutive layers.

5. A quick view of the other improvements

Of course, the few new filters I’ve presented above are only a small part of the overall work that has been done on G’MIC (the most visible part). Here is a list of other notable (mostly technical) improvements:

5.1. Global improvements and the libgmic library.

  • A lot of work has been done to clean and optimize the entire source code. The size of the libgmic library has been drastically reduced (currently less than 5 MB), with an improved C++ API. Using some of the C++11 features (rvalue references) allows me to avoid temporary buffer copies, which is really nice in the context of image processing, as the allocated memory can quickly become huge. With the help of Lukas Tvrdy, one of the Krita developers, we have been able to improve the compilation of the project a little bit on Windows (with Visual Studio), and run various code analysis tools to detect and fix memory leaks and strange behavior. Many sessions of valgrind, gprof, g++ with the -fsanitize=address option, and PVS-Studio have led to effective code improvements. Not very visible or exciting work for me, but probably very satisfying for the final user :) As a consequence, I’ve decided the code was clean enough to regularly propose G’MIC pre-releases that are considered stable, while including the latest features developed. The release method of G’MIC is now closer to a rolling release scheme.
  • The compilation of the default G’MIC binaries on Windows has been improved. It now uses the latest g++-4.9.2 / MinGW as the default compiler, and new installers for Windows 32 bits / 64 bits are now available.
  • G’MIC has new commands to compress/uncompress arbitrary data on the fly (through the use of zlib). The embedded G’MIC commands are now stored in a compressed form, and the Internet updates are faster.
  • The G’MIC project got its own domain name, http://gmic.eu, independent of SourceForge. We have improved the web pages a lot, particularly thanks to the wonderful tutorials on the use of the G’MIC command-line tool written by Garry Osgood (again, thanks Garry!). This is clearly a page to bookmark for people who want to slowly learn the basics of G’MIC.
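The compression trick mentioned in the list above is a plain zlib round-trip; command scripts are redundant text, so the ratio is substantial. A Python sketch with made-up command text:

```python
import zlib

# Store redundant command text compressed, decompress it at start-up --
# the same idea G'MIC uses for its embedded command file and updates.
# The command string below is invented for illustration.

commands = b"-blur 2 -sharpen 100 -normalize 0,255\n" * 500
packed = zlib.compress(commands, 9)             # maximum compression level
print(len(commands), "->", len(packed), "bytes")
assert zlib.decompress(packed) == commands      # lossless round-trip
```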

5.2. New possibilities for image filtering.

A bunch of new commands dedicated to image filtering, and their associated filters in the G’MIC plug-in for GIMP, have been added: image sharpening (Mighty Details), deconvolution by an arbitrary kernel (Richardson-Lucy algorithm), guided filtering, fast non-local means, Perona-Malik anisotropic diffusion, box filtering, DCT transforms, and so on. Honestly, G’MIC now implements so many different image filtering techniques that it is a must-have for anyone doing image processing (if only for the denoising techniques it provides). For the users of the plug-in, this means potentially more diverse filters in the future. The figure below shows an application of the Mighty Details filter on a portrait, to simulate a kind of Dragan effect.


Fig.5.1 Application of the Mighty Details filter on a portrait to enhance details.
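Of the filtering commands listed above, box filtering is the simplest to illustrate. A minimal 1D Python version with borders clamped to the image (illustrative only; G’MIC’s implementation is of course multi-dimensional and optimized):

```python
# 1D box filter: each sample becomes the mean of a window of up to
# 2*radius + 1 neighbours, with the window clamped at the borders.

def box_filter(signal, radius):
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))  # mean over the window
    return out

# A single impulse is spread evenly over the window.
print(box_filter([0, 0, 9, 0, 0], 1))  # -> [0.0, 3.0, 3.0, 3.0, 0.0]
```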

5.3. Improvements of the G’MIC plug-in for GIMP.

The plug-in for GIMP being the most used interface of the G’MIC project, it was necessary to make a lot of improvements:

  • The plug-in now has an automatic update system which ensures you always get the latest new filters and bug fixes. As of the 1.6.2.0 release, the plug-in contains 430 different filters, plus 209 filters still considered in development (in the Testing/ category). For a plug-in taking 5.5 MB on disk, let us say this makes an awesome ratio of filters to disk space. A complete list of the filters available in the plug-in for GIMP can be seen here.
  • During the execution of a filter that takes a while, the plug-in now displays information about the elapsed time and the memory used in the progress bar.

Fig.5.2. Display of the resources used, shown in the progress bar when running a G’MIC filter in GIMP.

  • The plug-in GUI has been slightly improved: the preview window is more accurate when dealing with multi-layer filters. The filters of the plug-in can now be interactive: they can open their own display window and handle user events. A filter can also decide to modify the values of its own parameters. All of this allowed me to code the new interactive filters presented above (such as the ones for comics colorization, for color curves in an arbitrary color space, or for interactive foreground extraction). This is something that was not possible before, except when using the command-line interface of G’MIC.
  • G’MIC filters now have knowledge of all the input layers’ properties, such as their positions, opacities and blending modes. A filter can modify these properties for the output layers too.
  • A new filter About / User satisfaction survey has been added. You can give some anonymous information about your use of G’MIC. The result of this survey is visible here (and updated in almost real-time). Of course, this result image is generated by a G’MIC command itself :).

5.4. G’MIC without any limits : Using the command-line interface.

gmic is the command-line interface of G’MIC, usable from a shell. It is undoubtedly the most powerful G’MIC interface, without any of the limitations inherent to the input/output constraints of the other interfaces (like being limited to 2D images, 8 bits per channel and 4 channels max in GIMP, or a single input image on the G’MIC Online site, etc.). With the CLI interface, you can load/save/manipulate sequences of 3D volumetric images, with an arbitrary number of channels, float-valued, with no limits other than the available memory. The CLI interface of G’MIC has also gotten a lot of improvements:

  • It now uses OpenCV by default to load/save image sequences. We can then apply image processing algorithms frame by frame on video files (command -apply_video), on webcam streams (command -apply_camera), or generate video files from still images, for instance. New commands -video2files and -files2video have been added to easily decompose/recompose video files into/from several frames. Processing video files is almost child’s play with G’MIC now. The G’MIC manual (command -help) has also been improved, with colored output, proposed corrections in case of typos, links to tutorials when available, etc.

Fig.5.3. Overview of the help command with the command line interface gmic.

  • Invoking gmic on the command line without any arguments makes it enter a demo mode, where one can choose among a lot of different small interactive animations from a menu, to get an idea of what G’MIC is capable of. A good occasion to play Pacman, Tetris or Minesweeper under the pretext of testing a serious new image processing tool!

Fig.5.4. Overview of the demo mode of G’MIC.

5.5. Other G’MIC interfaces under development.

  • ZArt is another available G’MIC interface, initially developed as a demonstration platform for processing images from the webcam. We classically use it for public exhibitions, like the “Science Festival” held once a year in France. It is very convenient for illustrating to the general public what image processing is and what it can do. ZArt has been significantly improved. First, it is now able to import any kind of video file (instead of supporting only webcam streams). Second, most of the filters from the GIMP plug-in have been imported into ZArt. This video shows these new possibilities, for instance with real-time emulation of analog film looks on a video stream. It would be really interesting to get a G’MIC plug-in working in video editing software, such as the brand new Natron. I’m already in touch with some of the Natron developers, so maybe this is something doable in the future. Note also that the code of ZArt has been upgraded to be compatible with the newest Qt 5.

Fig.5.5. Overview of ZArt, a G’MIC-based video processing software.

  • It has been a long time now since Lukas Tvrdy, a member of the Krita team, started developing a G’MIC plug-in for Krita. This plug-in is definitely promising. It contains all of the elements of the current G’MIC plug-in for GIMP, and it already works quite well on GNU/Linux. We are discussing how to solve some remaining problems with compiling the plug-in on Windows. Using the Krita plug-in is really nice, because it is able to generate 16 bits/channel images, or even more (32-bit float-valued), whereas the GIMP plug-in is currently limited to 8 bits/channel by the plug-in API. This may change in the future with GEGL, but right now there is not enough documentation available to make supporting GEGL in the G’MIC plug-in easy. So for now, if you want to process 16 bits/channel images, you can only do so through the command-line tool gmic or the G’MIC plug-in for Krita.

Fig.5.6. Overview of the G’MIC plug-in running on Krita.
(This image has been borrowed from the Krita website : https://krita.org/wp-content/uploads/2014/11/gmic-preview.png)

  • The integration of some of the G’MIC image processing algorithms has also begun in PhotoFlow. This is quite recent and promising software focused on developing RAW images into JPEG, as well as on photo retouching. It is a project by Carmelo DrRaw; check it out, it is really nice.

Fig.5.7. Overview of the Dream Smoothing filter from G’MIC, included in PhotoFlow.
(This image has been borrowed from the PhotoFlow website : http://photoflowblog.blogspot.fr/2014/12/news-gmic-dream-smoothing-filter-added.html)

6. Perspectives and Conclusions.

Here we are! That was a fairly complete summary of what has happened around the G’MIC project over the last ten months. All of this has been made possible thanks to the many different contributors (coders, artists, and users in general), whose number is increasing every day. Thanks again to them!

In the future, I still dream of more G’MIC integration in other open-source software. A plug-in for Natron would be really great for exploring the possibilities of video processing with G’MIC. A plug-in for Blender would be really great too! A G’MIC-based GEGL node for the upcoming GIMP 3.0 is again a great idea, but we probably have a few decades to decide :). If you feel you can help with any of these topics, feel free to contact me. One thing I’m sure of: I won’t be bored with G’MIC in the next few years!

Having said that, I think it’s time to get back to working on G’MIC. See you next time, for more exciting news about the G’MIC project! (And if you appreciate what we are doing, or just the time spent, feel free to send us a nice postcard from your place, or give a few bucks towards the hot chocolate I need to keep my mind clear while working on G’MIC, through the donation page.) Thank you!

April 18, 2015

Arbitrary 3D contour convex partitioning heuristic

Hi all,

Some time ago I developed an automatic quad-fill routine to tessellate an arbitrary 3D contour into quads, as evenly as possible. That algorithm is quite good, but suboptimal for L-shapes, T-shapes, and complex concave contours in general.
So these days I have been quite busy trying to figure out an algorithm for spatially splitting the contour. After squeezing my brain, I finally found a very nice heuristic to split the contour at corner feature points. I’m excited, because it is very powerful and works on arbitrary 3D spatial shapes. This algorithm will serve beyond the QuadFill tool, and I’m already envisioning a few interesting new geometric tools built on it!
Here are some screenshots of the intermediate process with visual debugging.

[Screenshot 1]

[Screenshot 2]

[Getting better… Got it!]


The chore of tuning PIDs

Tuning PIDs is one of those things you really don’t want to do, but can’t avoid in the acrobatic quad space. Flying-camera operators don’t usually have to deal with this, but the power/weight ratio is so varied in the world of acro flying that you’ll have a hard time avoiding it there. Having a multirotor “locked in” for doing fast spins is a must. Milliseconds count.

FPV Weekend

So what is PID tuning? The flight controller’s job is to maintain a certain attitude of the craft. It has sensors to tell it how the craft is angled and how it’s accelerating, and there are external forces acting on the quad: gravity, wind. Then there’s a human giving it RC orders to change its state. All of this happens in a PID loop. The FC either wants to maintain its position or is given an updated position; that’s the target. The sensors report the actual current state. Magic happens here, as the controller tells the individual ESCs how fast to spin the motors so we get there. Then we look at what the sensors say again. Rinse and repeat.

A PID loop is actually a common process you can find in all sorts of computer controllers. Even something as simple as a thermostat does this: it has a temperature sensor and drives a heater or an air conditioner to reach and maintain a target state.

The trick to solid control is to apply just the right amount of action to get to the target state. If there is a difference between where we are and where we want to be, we need to apply some force. If the difference is small, only a small force is required; if it’s big, a powerful force is needed. That is essentially what the P means: proportional. As a controller, you are truly unhappy whenever you are anywhere other than where you were told to be. You want to correct the difference fast, so you set a high proportional value. However, in the case of a miniquad, momentum will keep pulling you along after you’ve reached your target point and stopped applying force. At that point the difference appears again, and the controller starts correcting the craft back in the opposite direction. This results in an unstable state, with the controller bouncing the quad back and forth, never reaching the target state of “not having to do anything”. The P is too big. So what you need is a value high enough to correct the difference fast, but not so high that momentum gets you oscillating around the target.

So if we’ve found our P value, why do we need to bother with anything else? Well, sadly, pushing air around with props is a complicated way to remain stationary. The difference between where you are and where you want to be isn’t determined by the aircraft alone. There are external forces in play, and those change: we can get a gust of wind. So what we do is correct that P value based on the changed conditions. Suddenly we don’t have a fixed-P controller; we have one with a variable P. Let’s move on to how P is dynamically corrected.

The integral part of the controller corrects the difference that suddenly appears when new external forces come into play. I would probably do a better job of explaining this if I enjoyed maths, but don’t hate me, I’m a graphics designer. Magic maths corrects this offset. Having just the proportional and integral parts of the corrective measure is enough to form a capable controller, perfectly able to provide a stable system.

However, for something as dynamic as an acrobatic flight controller, you want to improve the final stage of the correction, where you are close to reaching your target after a fast, dramatic correction. Typically, a PI controller would leave you with a bit of a wobble at the end. To correct it, we have the derivative part of the correction. It’s a sort of predictive measure that lowers the effect of P as you get close to the target state. D gives you the nice smooth “locked in” feeling, despite high P and I values, giving you really fast corrective ability.
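Putting the three terms together, the whole loop can be sketched in a few lines of Python. This is a toy one-dimensional simulation with made-up gains and a made-up craft model, not actual flight-controller code:

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One iteration of a PID loop: returns (output, new_state)."""
    integral, prev_error = state
    integral += error * dt                   # I: accumulates persistent offset
    derivative = (error - prev_error) / dt   # D: damps the approach to target
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Toy simulation: drive a 1-D "craft" toward a target angle.
# Gains are illustrative only; real quads need tuning, hence this article.
target, angle, velocity = 10.0, 0.0, 0.0
state = (0.0, target - angle)
for _ in range(5000):                 # 50 seconds at dt = 0.01
    error = target - angle
    force, state = pid_step(error, state, kp=4.0, ki=1.0, kd=3.0, dt=0.01)
    velocity += force * 0.01          # momentum: force changes velocity...
    angle += velocity * 0.01          # ...and velocity changes the angle
print(round(angle, 2))                # settles at the target
```

Dropping kd to zero in this sketch leaves the simulated craft oscillating around the target, which is exactly the wobble described above; cranking kp too high does the same.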

There are three major control motions of a quad that the FC needs to worry about. Pitch, for forward motion, is controlled by spinning the back motors faster than the front two, thus angling the quad forward. Roll is achieved exactly the same way, but with the two motors on one side spinning faster than the other two. The last motion is rotation around the Z axis: yaw. That one is achieved through torque and the fact that the propellers and motors spin in different directions. Typically the front-left and back-right motors spin clockwise while the front-right and back-left motors spin counter-clockwise. Spinning up (accelerating) the front-left and back-right motors will thus turn the whole craft counter-clockwise (counter motion).

I prepared a little cheat sheet on how to go about tuning PIDs on the NAZE32 board. Before you start, though, make sure you set the PID looptime as low as your ESCs allow. ESCs usually send pulses 400 times a second, which is equivalent to a looptime of 2500. The more expensive ESCs can do 600 Hz, and some, such as the minuscule KISS ESCs, can go as low as a looptime of 1200.

ESC refresh rate    NAZE32 looptime
286 Hz              3500
333 Hz              3000
400 Hz              2500
500 Hz              2000
600 Hz              1600

You do this in the CLI tab of baseflight:

set looptime=2500
save
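The looptime value is just the loop period in microseconds, so it is roughly 1,000,000 divided by the ESC refresh rate; the cheat-sheet values above appear to be these quotients rounded to convenient numbers. A quick Python check:

```python
# looptime (µs) ~= 1,000,000 / ESC refresh rate (Hz).
# The cheat sheet's entries are these quotients, rounded to convenient values.
for hz in (286, 333, 400, 500, 600):
    print(f"{hz} Hz -> ~{1_000_000 // hz} µs looptime")
```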

I hope this has been as helpful for some of you as it was for me :).

Quick Guide on PID tuning

April 16, 2015

I Love Small Town Papers

I've always loved small-town newspapers. Now I have one as a local paper (though more often, I read the online Los Alamos Daily Post). The front page of the Los Alamos Monitor yesterday particularly caught my eye:

[Los Alamos Monitor front page]

I'm not sure how they decide when to include national news along with the local news; often there are no national stories, but yesterday I guess this story was important enough to make the cut. And judging by font sizes, it was considered more important than the high school debate team's bake sale, but of the same importance as the Youth Leadership group's day for kids to meet fire and police reps and do arts and crafts. (Why this is called "Wild Day" is not explained in the article.)

Meanwhile, here are a few images from a hike at Bandelier National Monument: first, a view of the Tyuonyi Pueblo ruins from above (click for a larger version):

[View of Tyuonyi Pueblo ruins from above]

[Petroglyphs on the rim of Alamo Canyon] Some petroglyphs on the wall of Alamo Canyon. We initially called them spirals but they're actually all concentric circles, plus one handprint.

[Unusually artistic cairn in Lummis Canyon] And finally, a cairn guarding the bottom of Lummis Canyon. All the cairns along this trail were fairly elaborate and artistic, but this one was definitely the winner.

April 14, 2015

Minis and FPV

FPV

I’ve now got enough time in the hobby to share some experiences that could perhaps help someone who is just starting out.

Cheap parts

I like cheap parts just like the next guy, but in the case of electronics, avoid them. The frame is one thing: get the ZMR250. Yes, it won’t be nearly as tough as the original Blackout, but it will do the job just fine for a few crashes, and rebuilding aside, you can get about four for the price of the original before the plates give. Electronics, though, are a whole different category. If you buy cheap ESCs, they will work fine. Until they smoke mid-flight. They will claim to deal with 4S voltage fine. Until you actually attach a 4S pack and the blue smoke makes its appearance. Or you get a random motor/ESC sync issue. And with FPV, when a component dies mid-flight, it’s the end of the story if it’s the drive (motor/ESC), the VTX, or the board cam.

No need to go straight to T-Motor, which usually means paying twice as much as a comparable competitor. But avoid the really cheap sub-$10 motors like RCX, RCTimer (although they make some decent bigger motors), and generic Chinese eBay stuff. In the case of motors, paying $20 means the motor is going to be balanced and the pain of vibrations alleviated. Vibrations on minis don’t just ruin the footage due to rolling shutter; they actually mess up the IMU in the FC considerably. I like the SunnySky X2204S 2300KV for a 3S setup and the Cobra 2204 1960KV for a 4S. The rather cheap DYS 1806 also seem really well balanced.

Embrace the rate

Rate mode means giving up the flight controller’s auto-leveling and doing it yourself. I can’t imagine flying line of sight (LOS) in rate mode, but for first-person view (FPV) there is no other way. The NAZE32 has a cool mode called HORI that lets you do flips and rolls really easily, since it re-levels for you, but flying HORI will never get you the floaty smoothness that makes you feel like a bird. The footage will always have a jerky quality to it. In rate mode, a tiny little 220 quad will feel like a two-meter glider, yet still fit between those trees. I flew HORI when doing park proximity, but it was time wasted. Go rate straight from the ground; you will have way more fun.

Receiver woes

For the flying camera kites, it’s usually fine to keep stuff dangling. Not for minis. Anything that can get chopped off by the mighty blades, will. These things spin so fast that antennas don’t stand a chance, and if your VTX gets loose, it will get seriously messed up as well. You would not believe what a piece of plastic can do when it’s spinning 26 thousand times a minute. On the other hand, you can’t bury your receiver antenna in the frame: carbon fibre is super strong, but it also blocks RF. So you have to bring the antennas outside as much as possible. Those two constraints don’t quite go together, but the best practice I found was taping one of the antennas to the bottom of the craft and having the other stick out sideways on top. The cheapest and best way I found was using a zip tie to hold the angle and heat-shrinking the antenna onto it. It looks decent and holds way better than a straw or some such.

Next time we’ll dive into PID tuning, the most annoying part of the hobby (apart from looking for a crashed bird ;).

Lockee to the rescue

Using public computers can be a huge privacy and security risk. There’s no way you can tell who may be spying on you using key loggers or other evil software.

Some friends and family don’t see the problem at all, and use any computer to log in to personal accounts. I actually found myself unable to recommend an easy solution here. So I decided to build a service that I hope will help remove the need to sign in to sensitive services, in some cases at least.

Example

You want to use the printer at your local library to print an e-ticket. As you’re on a public computer, you really don’t want to log in to your personal email account to fetch the document for security reasons. You’re not too bothered about your personal information on the ticket, but typing in your login details on a public computer is a cause for concern.

This is a use case I run into every now and then, and I’m sure there are many other similar situations where you have to log in to a service to get some kind of file, but you don’t really want to.

Existing storage services

There are temporary file storage solutions on the internet, but most of them give out links that are long and hard to remember, ask for an email address to send the links to, are public, or suffer some combination of these problems. Also, you have no idea what will happen to your data.

USB drives can help sometimes, but you may not always have one handy, it might get infected, and it’s easy to forget once plugged in.

Lockee to the rescue

Lockee is a small service that temporarily hosts files for you. Seen those luggage lockers at the railway station? It’s like that, but for files.

A Lockee locker

It allows you to create temporary file lockers, with easy to remember URLs (you can name your locker anything you want). Lockers are protected using passphrases, so your file isn’t out in the open.

Files are encrypted and decrypted in the browser, there’s no record of their real content on the server side. There’s no tracking of anything either, and lockers are automatically emptied after 24 hours.

Give it a go

I’m hosting an instance of Lockee on lockee.me. The source is also available if you’d like to run your own instance or contribute.

Ways to improve download page flow

App stores on every platform are getting more popular, and take care of downloads in a consistent and predictable way. Sometimes stores aren’t an option, or you prefer not to use them, especially if you’re a Free and Open Source project and/or a Linux distribution.

Here are some tips to improve your project’s download page flow. They’re based on confusing things I frequently run into when trying to download a FOSS project, and that I think can be done a lot better.

This is in no way an exhaustive list, but is meant to help as a quick checklist to make sure people can try out your software without being confused or annoyed by the process. I hope it will be helpful.

Project name and purpose

The first thing people will (or should) see. Take advantage of this fact and pick a descriptive name. Avoid technical terms, jargon, and implementation details in the name. Common examples are “-gui”, “-qt”, “gtk-”, and “py-”; they just clutter up names with details that don’t matter.

Describe what your software does, what problem it solves, and why you should care. This sounds like stating the obvious, but this information is often buried under less important details, like which programming language and/or free software license is used. Make this section prominent on the website, and go easy on the buzzwords.

The fact that the project is Free and Open Source, whilst important, is secondary. Oh, and recursive acronyms are not funny.

Platforms

Try to autodetect as much as possible. Is the visitor running Linux, Windows, or Mac? Which architecture? Make suggestions more prominent, but keep other options open in case someone wants to download a version for a platform other than the one they’re currently using.

Architecture names can be confusing as well: “amd64” and “x86” are labels often used to distinguish between 64-bit and 32-bit systems, but they do a bad job of it. AMD is not the only company making 64-bit processors anymore, and “x86” doesn’t even mention “32-bit”.
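To illustrate the detection side, here is a hypothetical Python sketch using the standard platform module. A real download page would inspect the browser's User-Agent header instead, but the labeling problem is the same; the label map below is my own suggestion, not any project's convention:

```python
import platform

# Map raw machine names to labels that actually say "32-bit" or "64-bit".
ARCH_LABELS = {
    "x86_64": "64-bit (x86-64)",
    "amd64": "64-bit (x86-64)",
    "i386": "32-bit (x86)",
    "i686": "32-bit (x86)",
    "aarch64": "64-bit (ARM)",
}

machine = platform.machine().lower()
print(platform.system(), "-", ARCH_LABELS.get(machine, machine))
```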

Timestamps

Timestamps are a good way to find out whether a project is actively maintained; you can’t (usually) tell from a version number when the software was released. Use human-friendly date formatting that is unambiguous. For example, use “February 1, 2003” as opposed to “01-02-03”. If you keep a list of older versions, sort them by date and clearly mark which one is the latest.
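In Python, for instance, the two formats compare like this; note that the short form could plausibly be read as day-month-year or year-month-day:

```python
from datetime import date

d = date(2003, 2, 1)
unambiguous = f"{d:%B} {d.day}, {d.year}"   # "February 1, 2003"
ambiguous = d.strftime("%d-%m-%y")          # "01-02-03": which part is the year?
print(unambiguous, "vs", ambiguous)
```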

File sizes

Again, keep it human readable. I’ve seen instances where file sizes are reported in bytes (e.g. 209715200 bytes, instead of 200 MB). Round numbers, and use thousands separators when large numbers are unavoidable, to improve readability.

File sizes are mostly there to make rough guesses, and depending on context you don’t need to list them at all. Don’t spend too much time debating whether you should be using MB or MiB.
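As a sketch of such a formatter, here's one possible Python version. It uses 1024-based units but plain “MB”-style labels, in the spirit of not sweating the MB/MiB distinction:

```python
def human_size(num_bytes):
    """Format a byte count for display, e.g. 209715200 -> '200 MB'."""
    for unit in ("bytes", "KB", "MB", "GB", "TB"):
        if num_bytes < 1024:
            size = round(num_bytes, 1)
            # Drop the decimal when it is .0: print '200 MB', not '200.0 MB'.
            return f"{int(size) if size == int(size) else size} {unit}"
        num_bytes /= 1024
    return f"{num_bytes:.1f} PB"

print(human_size(209715200))   # the example from above
```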

Integrity verification

Download pages are often littered with checksums and GPG signatures. Not everybody is going to be familiar with these concepts. I do think checking (source) integrity is important, but I also think file and source integrity verification should be automated by the browser. There’s no reason for it to be done manually, but there doesn’t seem to be a common way to do this yet.

If you do offer ways to check file and source integrity, add explanations or links to documentation on how to perform these checks. Don’t just dump strange strings of random characters on the page. Educate, or get out of the way.

Keep in mind that search engines may link to the insecure version of your page. Not serving pages over HTTPS at all makes providing signature checks rather pointless, and could even give a false sense of security.

Compression formats

Again, something that should be handled by the browser. Compressing downloads can save a lot of time and bandwidth. Often, though, especially on Linux, we’re presented with a choice of compression formats that hardly differ in size (.tar.gz, .tar.bz2, .7z, .xz, .zip).

I’d say pick one. Every operating system supports the .zip format nowadays. The most important lesson here, though, is not to burden people with irrelevant choices and clutter up the page.

Mirrors

Detect the closest mirror if possible, instead of letting people pick from a long list. Don’t bother for small downloads, as the time spent picking a mirror will probably outweigh the benefit of the increased download speed.

Starting the download

Finally, don’t hide the link in paragraphs of text. Make it a big and obvious button.

San Francisco impressions

Had the opportunity to visit San Francisco for two weeks in March, it was great. Hope to be back there soon.

April 13, 2015

Discourse Forum on PIXLS.US

After a bunch of hard work by someone not me (darix), there's finally a neat solution for commenting on PIXLS.US. An awesome side effect is that we get a great forum along with it.

Discourse

On the advice of the same guy who convinced me to build PIXLS using a static site generator (see above), I ended up looking into, and finally going with, a Discourse forum.


The actual forum location will be at:



What is extra neat about the way this forum works, though, is the embedding. For every post on the pixls.us blog (or an article), the forum will pick it up and automatically create a matching topic. A small piece of embedding code on the website then lets the topic replies show up at the end of the post, similar to comments.

For instance, see the end of this blog post to see the embedding in action!

Come on by!

I personally really like this new forum software, both for the easy embedding and for the fact that we own the data ourselves and don’t have to farm it out to a third-party service. I have enabled third-party OAuth logins for anyone who is OK with using them (they are not required; normal registration with email through us is fine, of course).

I like the idea of being able to lower the barrier to participating in a community/forum, and the ability to auth against google or twitter for creating an account significantly lowers that friction I think.

Some Thanks are in Order

It's important to me to point out that being able to host PIXLS.US and now the forum is entirely due to the generosity of folks visiting my blog here. All those cool froods that take a minute to click an ad help offset the server costs, and the ridiculously generous folks that donate money (you know who you are) are amazing.

As such, their generosity means I can afford to bootstrap the site and forums for a little while (without having to dip into the wife goodwill fund...).

What does this mean to the average user? Thanks to the folks that follow ads here or donate, PIXLS.US and the forum is ad-free. Woohoo!

Paths: Stroking and Offsetting

Path stroking and offsetting are two intertwined topics; stroking is often implemented by path offsetting. This post explores some of the problems encountered with these path operations.

Stroking: It’s not as easy as it looks.

What could be easier than stroking a path? It’s a fundamental concept in all graphics libraries. You construct a path:

in PostScript:

newpath
100 100 moveto
150 100 lineto
10 setlinewidth
stroke

in SVG:

<path d="M 100,100 150,100" stroke-width="10"/>

and voila, you have a horizontal path, 50 pixels long, that is 10 pixels wide.

Hmm, if only it were that easy. It turns out that stroking an arbitrary path can be quite complicated. Different graphics libraries can give quite different results.

A simple Bezier path segment with high curvature at one end.

A Bezier path segment with high curvature at the end. Web browsers differ on the rendering. (SVG)

Firefox's rendering of the path. It appears solid.
Chrome's rendering of the path. It appears like a doughnut.

Rendering of above path: Firefox (left/top), Chrome (right/bottom). (PNG)

There are two different ways to stroke a path. The first method is to pass a line segment of length ‘stroke-width’, centered on and perpendicular to the path, from one end to the other. Any pixels the line crosses are part of the stroke. This seems to be what Firefox does. (An equivalent method is to pass a circle of diameter ‘stroke-width’ centered on the path and then clip the semi-circles at the ends.) The second method is to construct two paths, offset by half the ‘stroke-width’ on each side of the original path and then fill the area between the two paths. This seems to be what Chrome does.
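The second method can be sketched for the simplest possible case, an open polyline, in a few lines of Python. Real implementations must also join adjacent segments, cap the ends, and remove self-intersections, all of which is omitted here:

```python
import math

def offset_polyline(points, distance):
    """Offset each segment of an open polyline by `distance` along its
    left-hand normal: the core step of the offset-paths stroking method."""
    out = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy)
        nx, ny = -dy / length, dx / length    # unit normal to the segment
        out.append(((x0 + nx * distance, y0 + ny * distance),
                    (x1 + nx * distance, y1 + ny * distance)))
    return out

# The SVG example above: M 100,100 150,100 with stroke-width 10 yields
# two offset edges, 5 units either side of the path, whose area is filled.
path = [(100, 100), (150, 100)]
print(offset_polyline(path, 5))    # edge at y = 105
print(offset_polyline(path, -5))   # edge at y = 95
```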

A simple Bezier path segment with high curvature at one end.

A Bezier path segment with high curvature at the end. Stroke constructed by offsetting path. Red: original path, blue: offset paths. (SVG)

Rendering engines appear to fall into one of these two camps:

Sweep a line:
Firefox, Adobe Reader
Offset paths:
Chrome, Inkscape (Cairo), Opera (Presto), Evince, Batik, rsvg

The difference can be also be seen in circular paths.

Two circular paths with strokes of different widths.

Two same size circular paths with different stroke widths. When one-half the stroke width exceeds the circle radius (right circle), web browsers differ in their rendering. (SVG)

Firefox's rendering of the circle. It appears solid.
Chrome's rendering of the circle. It appears like a doughnut.

Rendering of a circular path when one-half the stroke width is greater than the radius in: Firefox (left/top), Chrome (right/bottom). (PNG)

When using the Offset paths method, an inner path is always created. As the direction of this path is the same regardless of the stroke width, one cannot differentiate between the case where the stroke width is less than one-half the radius and the case where it is not. This can be seen in the animation below:

Two circular paths with strokes of different widths. The drawing of the stroke is animated.

Stroking the path. The arrows indicate the direction of the offset paths. If the drawing is not animated, view the image by itself. (SVG)

Interestingly, some renderers draw a filled circle when one-half the ‘stroke-width’ is greater than the radius for an SVG <circle> (i.e. not a circular <path>) while others still draw a doughnut. However, for the SVG <rect> element, the rectangles are always drawn filled if the ‘stroke-width’ is greater than either the ‘width’ or ‘height’ (at least in the renderers I tested).

So what does the SVG specification say about how to stroke a path? Nothing…! One can look to PostScript and PDF on which SVG is partially based for a hint on what it should say. The PostScript and PDF specifications say the same thing. From the PDF 1.7 reference:

The S operator paints a line along the current
path. The stroked line follows each straight or curved segment
in the path, centered on the segment with sides parallel to
it. Each of the path’s subpaths is treated separately…

This seems to indicate that sweeping a line is the expected technique, and indeed, Adobe’s own product, Adobe Reader, appears to do just that.

Stroke Alignment

Designers often want more control over how a stroke is positioned: only on the inside, only on the outside, or some arbitrary ratio of the two. The new SVG ‘stroke-alignment’ property offers this control. For a closed path, it is relatively easy to figure out how this property should behave:

A figure eight path showing various methods for offsetting.

Top: the original path. Middle: left: stroke inside; right: stroke outside. Bottom: left: stroke to left; right: stroke to right.

For an open path, it is not quite so easy. What is inside, what is outside? One can define the terms by looking at what is filled: inside is in the fill, outside is not in the fill. With this definition, a single straight line segment would render nothing for an ‘inside’ stroke and a stroke on both sides for an ‘outside’ stroke. The SVG specification has a slightly different definition for ‘outside’ (see figure). For an open path it may make more sense to talk about left/right rather than inside/outside.

A figure eight path showing various methods for offsetting.

Top to bottom: Default stroke. Fill area (in gray). Inside (according to SVG specification?). Outside (implemented here by masking). Inside (another interpretation). Outside (according to SVG specification?). Stroke on left (round end cap in pink).

Handling line joins is fairly straightforward. End caps, at least ‘round’ ones, are another matter. Does one draw half an end cap? Or does the radius of the end cap match the width of the (shifted) stroke?

Left: straight lines, right: curved lines.

Round end caps. Top to bottom: Default stroke. Stroke alignment ‘outside’, end-cap radius doubled. Stroke alignment ‘outside’, end-cap radius same as normal.

The ‘stroke-alignment’ property was recently removed from the SVG 2 specification draft and moved into a separate SVG Strokes module, partly due to the difficulty in specifying exactly how it should behave.

The ‘stroke-alignment’ ‘inside’/‘outside’ values can be simulated via other methods. The new ‘paint-order’ property allows one to paint the stroke before the fill, thus simulating stroking only the outside of the path (this only works for an opaque fill). A mask can also be used to simulate stroking the outside of a path. A clip path can be used to simulate stroking the inside of a path.

Offset Paths

We’ve seen that offsetting a path can be used for constructing strokes. What about offsetting a path for the purpose of creating a new path? This is quite useful in mapping. For example you might want to show multiple bus routes going along a road with different offsets for each route. More stylistically, you could produce the shadowing seen around land masses in older, hand-drawn maps.

Section of map showing lines ringing a group of islands.

An excerpt from a submarine cable map showing the use of offset paths to shade around land masses. Note also the use of inside strokes to define country boundaries.

Offsetting paths is in practice extremely tricky! Here are a few of the problems:

  1. Offsets of Bezier segments are not Beziers; in fact they are 10th-order polynomials. In practice, one can do a pretty good job of estimating the offset by breaking up a Bezier path into smaller segments.
  2. Offset paths can have loops at cusps.
  3. Offset paths may require breaking apart left and right offset paths and recombining to form outset and inset paths. It can be difficult to get this right.

Entire scientific papers have been written on this topic.[1]
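Point 1 can be illustrated with a minimal Python sketch: sample the cubic, then push each sample along its unit normal. Fitting clean Bezier segments back through these samples, and handling the cusp loops of point 2, is where the real difficulty starts:

```python
import math

def cubic_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier at parameter t."""
    u = 1 - t
    return tuple(u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def cubic_tangent(p0, p1, p2, p3, t):
    """Derivative of the cubic at t (the direction of travel)."""
    u = 1 - t
    return tuple(3 * u**2 * (b - a) + 6 * u * t * (c - b) + 3 * t**2 * (d - c)
                 for a, b, c, d in zip(p0, p1, p2, p3))

def approx_offset(p0, p1, p2, p3, dist, samples=16):
    """Approximate the offset curve as a polyline: since the true offset is
    not itself a Bezier, any Bezier/polyline result is an approximation."""
    pts = []
    for i in range(samples + 1):
        t = i / samples
        x, y = cubic_point(p0, p1, p2, p3, t)
        tx, ty = cubic_tangent(p0, p1, p2, p3, t)
        norm = math.hypot(tx, ty)              # assumes a nonzero tangent
        pts.append((x - ty / norm * dist, y + tx / norm * dist))
    return pts

# A gentle arch, offset 5 units to its left:
curve = ((0, 0), (30, 60), (70, 60), (100, 0))
print(approx_offset(*curve, dist=5)[:3])
```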

Here is a simple example path with offsets both inside and outside:

Path with a series of offsets.

Left: insets, right: outsets. Red path is original.

In this case, the outsets correspond to the outer edge of a stroked path of the appropriate width when the ‘stroke-linejoin’ type is ‘round’. The insets correspond to the inner edge of such strokes. Taking a closer look at the offset paths shows a number of cusp loops:

Complex path with offsets.

The same original path as in the above figure. Left: the light blue region is created by stroking the original path. As can be seen it matches the corresponding outset (blue) and inset (green) paths. Right: The raw offset paths used to construct the visible outset and inset paths. In this case, the outset path is constructed from the raw outset path (blue) and the inset path is constructed from the raw inset path (green). Cusp loops and overlaps have been removed.

Determining what is outset or inset becomes more difficult as a path loops back on itself. Both the outset and inset paths can consist of parts of both the right-offset and left-offset paths as shown below:

A path that loops back on itself three times.

Left: The left-offset path (blue) and the right-offset path (green), relative to the path’s direction (clockwise). Right: The resulting outset path (blue) and inset path (green).

Here’s an example where Inkscape’s Linked Offset function gets it wrong:

A circular path segment on top of a figure eight segment.

The resulting outset path (blue) and inset path (green) as found by Inkscape’s Linked Offset function.

The previous examples assumed that the line joins for outside joins are rounded. It would be desirable to be able to specify the type of join to use. This can maintain the feel of the original path.

A triangle path with 's' shaped sides with various offsets.

Left: Outset path with three different types of joins: ‘bevel’, ‘round’, and ‘arcs’. Right: Outset paths with various offsets and with the ‘arcs’ line join. Note: the ‘arcs’ line join fails for the outermost path as the generated arcs do not intersect; this results in falling back to a ‘miter’ line join.

Allowing more freedom to define stroke position and being able to offset strokes are highly desirable features for designers, but as this post shows, they are not so simple to implement. Before we can add such features to SVG, we need to define robust algorithms for generating proper offset paths.

References

  1. An offset algorithm for polyline curves, Xu-Zheng Liu, Jun-Hai Yong, Guo-Qin Zheng, Jia-Guang Sun.


San Francisco Impressions

Had the opportunity to visit San Francisco for two weeks in March; it was great. Hope to be back there soon.

April 12, 2015

OpenRaster with JPEG and SVG

OpenRaster is a file format for layered images: essentially, each layer is a PNG file, there is some XML glue, and it is all contained in a Zip file.
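That layout is simple enough to sketch with Python's standard zipfile module. This is a hypothetical minimal writer, not code from any of the programs mentioned; real OpenRaster files also carry a merged image and a thumbnail.

```python
import zipfile

def write_minimal_ora(path, layer_png_bytes, width, height):
    """Write a minimal OpenRaster file: the mimetype marker, a
    stack.xml describing one layer, and the layer's PNG data."""
    stack_xml = (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        f'<image w="{width}" h="{height}">'
        '<stack>'
        '<layer name="layer 1" src="data/layer1.png" x="0" y="0"/>'
        '</stack></image>'
    )
    with zipfile.ZipFile(path, 'w') as ora:
        # 'mimetype' must be the first entry and stored uncompressed.
        ora.writestr(zipfile.ZipInfo('mimetype'),
                     'image/openraster',
                     compress_type=zipfile.ZIP_STORED)
        ora.writestr('stack.xml', stack_xml)
        ora.writestr('data/layer1.png', layer_png_bytes)
```

Because the container is an ordinary Zip file, a layer entry could just as easily be a JPEG or an SVG, which is exactly the loophole the importers discussed below exploit.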

In addition to PNG some programs allow layers in other formats. MyPaint is able to import JPG and SVG layers. Drawpile has also added SVG import.

After a small change to the OpenRaster plugin for the GNU Image Manipulation Program, it will also allow non-PNG layers. The code had to be changed in any case: it needed to at least give a warning that non-PNG layers were not being loaded, instead of quietly dropping them. Allowing other layer types was more useful, and easier too.
(This change only means that other file types will be imported; they will not be passed through and will be stored as PNG when the file is exported.)

April 10, 2015

Updating device firmware on Linux

If you’re a hardware vendor reading this, I’d really like to help you get your firmware updates working on Linux. Using fwupd we can already update device firmware using UEFI capsules, and also update the various ColorHug devices. I’d love to increase the types of devices supported, so if you’re interested please let me know. Thanks!

April 09, 2015

Simplifying things

social-network-links

Hi all
Don’t know if it is something age related, but I’m striving now for simplicity in everything. That’s one of the reasons I’ve recently decided to do some contact clearance on my social networks. It’s NOTHING PERSONAL; it’s just a bit stressful to have the same contacts floating around in every social network account, as if it were not enough to have them in your email, email chats, Facebook, your phone, Google+, WhatsApp, Skype, you name it…
And more importantly, 90% of those contacts are just sitting idle, or are friends of friends, contacts who never actually reach you. I will always favor physical friends, or at least frequent contact, as a selection criterion. But hey, it’s no big deal; there are plenty of ways to reach each other even if we are not Facebook friends!


April 08, 2015

Hughes Hypnobirthing

If you’re having a baby in London, you might want to ask my wife for some help. She’s started offering one-to-one HypnoBirthing classes for mothers-to-be and birth partners, designed to bring about an easier, more comfortable birthing experience.

It worked for us, and I’m super proud she’s done all the training so other people can have such a wonderful birthing experience.

released darktable 1.6.4

We are happy to announce that darktable 1.6.4 has been released.

The release notes and relevant downloads can be found attached to this git tag:
https://github.com/darktable-org/darktable/releases/tag/release-1.6.4
Please only use our provided packages ("darktable-1.6.4.*" tar.xz and dmg), not the auto-created tarballs from github ("Source code", zip and tar.gz). The latter are just git snapshots and will not work! Here are the direct links to the tar.xz and dmg:
https://github.com/darktable-org/darktable/releases/download/release-1.6.4/darktable-1.6.4.tar.xz
https://github.com/darktable-org/darktable/releases/download/release-1.6.4/darktable-1.6.4.dmg

this is another point release in the stable 1.6.x series.

sha256sum darktable-1.6.4.tar.xz
 c5f705e8164c014acf0dac2ffc5b730362068c2864622121ca6fa9f330368d2a
sha256sum darktable-1.6.4.dmg
 e5bbf00fefcf116aec0e66d1d0cf2e2396cb0b19107402d2ef70d1fa0ab375f6
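If you would rather verify a download programmatically than with the sha256sum tool, a small Python sketch does the same job (the file path is whatever you downloaded):

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Return the hex SHA-256 digest of a file, read in 1 MiB chunks
    so large tarballs don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(chunk), b''):
            h.update(block)
    return h.hexdigest()

# Usage: compare against the published checksum, e.g.
# sha256_of('darktable-1.6.4.tar.xz') == 'c5f705e8164c...'
```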

General improvements:

  • major rawspeed update
  • facebook exporter update (usability of the first authentication should be much better now)
  • first-run opencl benchmark to prevent opencl auto-activation if the GPU is obviously slower than the CPU
  • lensfun cornercase fixes
  • some mask cornercase fixes
  • zonesystem now updates its gui when the number of zones changes
  • spots iop updates
  • ui_last/gui_language should work more reliably now
  • internal lua updated from 5.2.3 to 5.2.4 (distros typically use their own version of lua)
  • gcc 5 should build now

Camera support:

  • Canon Digital Rebel (non-european 300D)
  • Nikon D5500 (experimental)
  • Olympus E-M5 Mark II (experimental)
  • Samsung NX500 (experimental)

White balance presets:

  • Sony A77 II
  • Fujifilm X-E2
  • Olympus E-M5 Mark II

Noise profiles:

  • Canon 7D Mark II

updated translations:

  • German
  • French
  • Russian
  • Danish
  • Catalan
  • Japanese
  • Dutch

April 07, 2015

Fedora Design Team Update

Fedora Design Team Logo

Fedora Design Team Meeting 7 April 2015

Summary

I don’t actually have time today to post a full summary, so just a few bullet points:

  • Bronwyn, my new intern, started today so we welcomed her at the meeting and she took on her first ticket, which she’s working on right now.
  • We walked through tickets needing attention and tickets needing triage.
  • We talked about the F22 Beta readiness meeting – I will attend to represent the team this Thursday.
  • We talked about Flock and discussed more details about the topics we’d like to propose.

April 06, 2015

OpenRaster Paths (or Vectors)

Summary: plugin updated to allow round-trip of paths.

The MyPaint team is doing great work, making progress towards MyPaint 1.2. I encourage you to give it a try: build it from source or check out the nightly builds. (Recent Windows build. Note: the filename mypaint-1.1.1a.7z may stay the same, but the build date does change.)
The Vector Layers feature in MyPaint is particularly interesting. One downside, though, is that the resulting OpenRaster files with vector layers are incompatible with most existing programs. MyPaint 1.0 was one of the few programs that managed to open the file at all, presenting an error message only for the layer it was not able to import. The other programs I tested failed to import the file at all. It would be great if OpenRaster could be extended to include vector layers and more features, but it will take some careful thought and planning.

It can be challenging enough to create a new and useful feature; planning ahead or trying to keep backwards compatibility makes matters even more complicated. With that in mind I wanted to add some support for vectors to the OpenRaster plugin. Similar to my previous work to round-trip metadata in OpenRaster, I found a way to round-trip Paths/Vectors that is "good enough" and that I hope will benefit users. The GNU Image Manipulation Program already allows paths to be exported in Scalable Vector Graphics (SVG) format. All paths are exported to a single file, paths.svg, and are imported back from that same file. It is not ideal, but it is simple and it works.

Users can get the updated plugin immediately from the OpenRaster plugin gitorious project page. There is lots more that could be done behind the scenes, but for ordinary users I do not expect any changes as noticeable as these for a while.


Back to the code. I considered (and implemented) a more complicated approach that included changes to stack.xml, where raster layers were stored as one group, and paths (vector layers) as another group. This approach was better for exporting information that was compatible with MyPaint but, as previously mentioned, the files were not compatible with any other existing programs.

To ensure OpenRaster files stay backwards compatible, it might be better to always include a PNG file as the source for every layer, and to find another way to link to other types of content, such as text or vectors, or at some distant point in the future even video. A more complicated fallback system might be useful in the long run. For example, the EPUB format reuses the Open Packaging Framework (OPF) standard: pages can be stored in multiple formats, so long as each includes a fallback to another format, ending with a fallback to a few standard baseline formats (i.e. XHTML). The OpenRaster standard has an elegant simplicity, but there is so much more it could do.

Krita 3.0

Krita 3.0 is going to be the first Qt 5.x-based Krita. On April 13th, 2006, we started porting Krita to Qt 4. Seventeen days after porting started, I could publish "Krita 2.0 runs!":

Back then, I was young and naive and deluded enough to think that porting Krita to a new version of Qt would automatically make Krita better. Porting itself was an exciting adventure, and using new technology was fun all by itself.

But porting to Qt 4 was a complete and utter disaster and it took us many years to finally get to a Qt 4-based version of Krita that was ready for users: Krita 2.4, released April 11th, 2012. There were reasons for that beyond the mere porting, of course, including the fact that, like fools, we couldn't resist doing a complete refactoring of the basic KOffice libraries.

This time, we're not doing that. But that's not to say that I'm at all confident that we'll have a Krita 3.0 that is as good for end users as 2.9. We started porting on March 6th. Right now, Krita starts, but you cannot load or save an image and you cannot use any tool. Our input handling is broken because of changes in the event filter handling in Qt. Also, ugly big fonts and no icons. Really simple things, like the list of brush engines, are broken.

I know that we have to port Krita to Qt 5, because Qt 4 is not going to be maintained for much longer, because Linux distributions want to move on, and because Qt 5 is being actively developed (except for the parts, like QtWebKit, that are being actively deprecated). But it's taking a lot of effort away from what really counts: making Krita better for end users.

It's like this...

One can develop software for any number of reasons: because it's fun to write code, because you get paid for it or because your software makes users happy. I'm going for the last reason. I want to make users happy to use Krita. I want users to have fun using Krita, I want it to be an efficient tool, a pleasure to use.

In order to do that, code-wise, I have to do three things: implement cool new features (including workflow improvements), fix bugs and improve Krita's performance.

Similarly, I expect people working on libraries or build tools to have as their goal making it possible for me to reach my goals: after all, I'd hope they are writing software for me to use, otherwise I'd better use something else that does help me reach my goals.

Updating our build system and struggling with it because the new way of specifying the list of directories where the compiler looks for header files isn't compatible with third-party software that uses cmake: well, that does not contribute to my goal. This problem has taken four or five people over four hours each to look into, and it hasn't been solved yet! Now realize that Calligra has 609 CMakeLists.txt files and about 300 plugins. There are over seventy libraries in Calligra.

Likewise, having to rewrite existing code because someone found a new, better way of handling boolean values in a C library for reading and writing a particular file format is not helping. Your library might be of much better quality now, code-wise, but I, as your user, don't give a damn. Just like my users don't give a damn what my code looks like; it only needs to work. I only care about whether your software helps me deliver features to my users. Don't tell me the new way is cleaner -- that's an illusion anyway. Don't insult me by telling me the new way will make my maintenance burden smaller, because you've just added a load to it right away.

In general, any change in a library or in a build system that makes work for me without materially improving Krita for Krita's users is a waste of my time. Doubly so if it's badly documented. I resent that waste. I don't have enough time already.

Let's take this tiny example... QModelIndex::internalId() no longer returns a qint64, but a kind of pointer abstraction, an unsigned integer. Well, we have some code that compared that internalId() to -1. This code was written in 2011 by someone who is no longer around. The Qt documentation used to say

"Returns a qint64 used by the model to associate the index with the internal data structure."

Now it says

"Returns a quintptr used by the model to associate the index with the internal data structure."

It might just be me... But this change isn't mentioned in the C++ API changes -- so, what do we do now? And once we've done it, is the style selector for the text tool really better?

Likewise, what do I care that "QCoreApplication::setEventFilter() and QApplication::x11EventFilter/macEventFilter/qwsEventFilter/winEventFilter are replaced with QCoreApplication::installNativeEventFilter() and QCoreApplication::removeNativeEventFilter() for an API much closer to QEvent filtering." Nothing. Nada. Zilch. I just don't want to port our event filter to a new api; it worked fine, let it go on working!

I could enumerate examples until dawn, and we're only a month into porting. We've disabled all deprecation warnings, even, because they were so numerous they obscured the real errors.

So, to conclude, I suspect that it'll take at least six months before the Qt 5 port of Krita is usable by end users, and that we'll be adding new features and fixing bugs in Krita 2.9 for at least a year to come. Because if there's one thing that I desperately want to avoid, it's losing our userbase just when it's nicely growing because we spend months doing stuff none of our users gives a damn about.

OpenRaster Metadata

Summary: plugin updated to allow round-trip of metadata.

OpenRaster does not yet make any suggestions on how to store metadata. My preference is for OpenRaster to continue to borrow from OpenDocument and use the same format of meta.xml file, but that can be complicated. Rather than taking the time to write a whole lot of code and waiting to do metadata the best way, I found another way that is good enough, and expedient. I think ordinary users will find it useful -- which is the most important thing -- to be able to round-trip metadata in the OpenRaster format, so despite my reservations about creating code that might discourage developers (myself included) from doing things a better way in future, I am choosing the easy option. (In my previous post I mentioned my concern about maintainability; this is what I was alluding to.)

A lot of work has been done over the years to make the GNU Image Manipulation Program (GIMP) work with existing standards. One of those standards is XMP, the eXtensible Metadata Platform originally created by Adobe Systems, which uses the existing Dublin Core metadata standard to create XML packets that can be inserted inside (or alongside) an image file. The existing code creates an XMP packet; let's call it packet.xmp and include it in the OpenRaster file. There's a little more code to load the information back in, and users should be able to go to the menu File, Properties and, in the Properties dialog, open the tab labelled Advanced to view (or set) metadata.
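The round-trip described above, storing an XMP packet as packet.xmp inside the OpenRaster zip, can be sketched like this. The XMP body here is a minimal hypothetical example, and the helper names are mine, not the plugin's actual code:

```python
import zipfile

# Minimal XMP packet carrying one Dublin Core field; packets produced
# by a real application contain many more fields.
XMP = '''<?xpacket begin="" id="W5M0MpCehiHzreSzNTczkc9d"?>
<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
          xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="">
   <dc:title><rdf:Alt><rdf:li xml:lang="x-default">Untitled</rdf:li></rdf:Alt></dc:title>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>
<?xpacket end="w"?>'''

def add_xmp(ora_path, xmp_text):
    """Append an XMP packet to an existing OpenRaster file."""
    with zipfile.ZipFile(ora_path, 'a') as ora:
        ora.writestr('packet.xmp', xmp_text)

def read_xmp(ora_path):
    """Round-trip: read the packet back, or None if absent."""
    with zipfile.ZipFile(ora_path) as ora:
        if 'packet.xmp' in ora.namelist():
            return ora.read('packet.xmp').decode('utf-8')
    return None
```

Because the packet rides along as just another zip entry, programs that know nothing about it simply ignore it, which is exactly the "good enough" property described above.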

This approach may not be particularly useful to users who want to get their information out into other applications such as MyPaint or Krita (or Drawpile or Lazpaint) but it at least allows them not to lose metadata information when they use OpenRaster. (In the long run other programs will probably want to implement code to read XMP anyway, so I think this is a reasonable compromise, even though I want OpenRaster to stay close to OpenDocument and benefit from being part of that very large community.)

You can get the updated plugin immediately from the OpenRaster plugin gitorious project page.

If you are a developer and want to modify or reuse the code, it is published under the ISC License.

Quickly seeing bird sightings maps on eBird

The local bird community has gotten me using eBird. It's sort of social networking for birders -- you can report sightings, keep track of what birds you've seen where, and see what other people are seeing in your area.

The only problem is the user interface for that last part. The data is all there, but asking a question like "Where in this county have people seen broad-tailed hummingbirds so far this spring?" is a lengthy process, involving clicking through many screens and typing the county name (not even a zip code -- you have to type the name). If you want some region smaller than the county, good luck.

I found myself wanting that so often that I wrote an entry page for it.

My Bird Maps page is meant to be used as a smart bookmark (also known as a bookmarklet or keyword bookmark), so you can type birdmap hummingbird or birdmap golden eagle in your location bar as a quick way of searching for a species. It reads the bird name you've typed, looks through a list of species, and if only one bird matches, it takes you straight to the eBird map to show you where people have reported the bird so far this year.

If there's more than one match -- for instance, for birdmap hummingbird or birdmap sparrow -- it will show you a list of possible matches, and you can click on one to go to the map.

Like every JavaScript project, it was both fun and annoying to write. Though the hardest part wasn't programming; it was getting a list of the nonstandard 4-letter bird codes eBird uses. I had to scrape one of their HTML pages for that. But it was worth it: I'm finding the page quite useful.

How to make a smart bookmark

I think all the major browsers offer smart bookmarks now, but I can only give details for Firefox; here's a page about using them in Chrome.

Firefox has made it increasingly difficult with every release to make smart bookmarks. There are a few extensions, such as "Add Bookmark Here", which make it a little easier. But without any extensions installed, here's how you do it in Firefox 36:

[Firefox bookmarks dialog] First, go to the birdmap page (or whatever page you want to smart-bookmark) and click on the * button that makes a bookmark. Then click on the = next to the *, and in the menu, choose Show all bookmarks. In the dialog that comes up, find the bookmark you just made (maybe in Unsorted bookmarks?) and click on it.

Click the More button at the bottom of the dialog.
(Click on the image at right for a full-sized screenshot.)
[Firefox bookmarks dialog showing keyword]

Now you should see a Keyword entry under the Tags entry in the lower right of that dialog.

Change the Location to http://shallowsky.com/birdmap.html?bird=%s.

Then give it a Keyword of birdmap (or anything else you want to call it).

Close the dialog.

Now, you should be able to go to your location bar and type:
birdmap common raven or birdmap sparrow and it will take you to my birdmap page. If the bird name specifies just one bird, like common raven, you'll go straight from there to the eBird map. If there are lots of possible matches, as with sparrow, you'll stay on the birdmap page so you can choose which sparrow you want.

How to change the default location

If you're not in Los Alamos, you probably want a way to set your own coordinates. Fortunately, you can; but first you have to get those coordinates.

Here's the fastest way I've found to get coordinates for a region on eBird:

  • Click "Explore a Region"
  • Type in your region and hit Enter
  • Click on the map in the upper right

Then look at the URL: part of it should look something like this: env.minX=-122.202087&env.minY=36.89291&env.maxX=-121.208778&env.maxY=37.484802 If the map isn't right where you want it, try editing the URL, hitting Enter after each change, and watching the map reload until it points where you want it to. Then copy the four parameters and add them to your smart bookmark, like this: http://shallowsky.com/birdmap.html?bird=%s&minX=-122.202087&minY=36.89291&maxX=-121.208778&maxY=37.484802

Note that all of the "env." prefixes have been removed.
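That same transformation, copying the env.* parameters out of the eBird URL and dropping the "env." prefixes, can be sketched in Python; the function name is mine, for illustration only.

```python
from urllib.parse import urlsplit, parse_qsl, urlencode

def bookmark_url(ebird_url, base='http://shallowsky.com/birdmap.html'):
    """Copy env.minX/minY/maxX/maxY out of an eBird map URL and append
    them, minus the 'env.' prefix, to the birdmap smart bookmark.
    The literal %s is kept for the browser to substitute."""
    params = dict(parse_qsl(urlsplit(ebird_url).query))
    bbox = {k[len('env.'):]: v for k, v in params.items()
            if k.startswith('env.')}
    return base + '?bird=%s&' + urlencode(bbox)
```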

The only catch is that I got my list of 4-letter eBird codes from an eBird page for New Mexico. I haven't found any way of getting the list for the entire US. So if you want a bird that doesn't occur in New Mexico, my page might not find it. If you like birdmap but want to use it in a different state, contact me and tell me which state you need, and I'll add those birds.

April 05, 2015

Life reset

houston-835-1920x1080

My time has come: finally I’m free from all the limitations I had in Cuba. I’m in the United States of America, and I’m here to stay. It was a long dream and a hard road to get here, traumatic to a degree I will not share, but I’ve made it!

It is literally a life reset: I have to start from scratch in a new world, full of things strange to me but also full of opportunities. I really can’t express what I’m feeling now; it is a mixture of everything: happiness for the dream come true and sadness for the loved ones I left behind, awe at all the beauty and fear for my future, hope for the opportunities and uncertainty about my new path, and I could go on.

But through it all there is a subtle feeling ever present, one women may know better: it is like a void after you give birth, when you have reached your hardest goal with nothing planned afterwards, preventing you from fully enjoying your victory and, after all, leaving you feeling lonely.

But all of this is normal; it is the same thing countless people have felt when leaving the nest and, more importantly, it is the price we have to pay to finally live our lives as we should.

Thank you everyone!


April 03, 2015

What DAW (or other music software) is the right for me?

Dear lazyweb,

Is Ableton Live the right DAW for me?

I am not a musician or "producer". I don't know how to play any instrument. I don't really know musical notation (and have little desire to learn). But I do have some basic understanding of music theory, an open mind, and I would enjoy making experimental music. The kind I like to listen to. (Or, for that matter, why not more traditional electronic music, even dancey stuff, too.)

(I do listen to more "normal" music, too, not just experimental bleeps and drones;)

Among my favourite composers / artists are the usual suspects like Scelsi, Ligeti, Reich, Glass, Eno, Fripp, Kraftwerk, and contemporary ones like Max Richter, Anna Thorvaldsdottir, Nils Frahm, and of course acts like Circlesquare, Monolake, Plastikman etc. My favourite radio show is WNYC's New Sounds .

The software I have been ogling most is Ableton Live. What I like about it is that it is popular, cool, and has a thriving development, apparently, with relatively frequent updates etc. It seems not likely to go away suddenly. I love the total (?) lack of skeuomorphism in the user interface. (I strongly dislike the trend of faux real hardware look and feel in 3rd-party plugins.)

I certainly have no wish to use the "live" aspect of Live. And since one of the main points of Live, as I understand it, is to make it easy to launch clips that will be automatically properly aligned in live performance scenarios, will Live be suitable for stuff like having multiple loops or patterns playing simultaneously *without* being synchronised? You know, like emulating Frippertronics, or patterns drifting slightly out of phase in the style of Reich, or Eno.

And what about microtonal aspects?

So, will Live have too many limitations? Should I look somewhere else? Maybe the generative music scene is what I should be looking into, like Intermorphic's Noatikl? Although with that I would definitely be afraid of the proprietary-software-maker-goes-belly-up scenario.

Maybe even some Open Source software? Csound? Pd? Note that I wouldn't want to get lured into hacking (programming) on some Open Source software, for once I want to be "just a user"...

Oh, and I use a Mac as my desktop environment, so that is also a limiting factor.

Or am I silly to even think I could create something interesting (to myself, that is) without starting by learning how to do stuff "by the book" first?

--tml

April 02, 2015

High Contrast Refresh

One of the major visual updates of the 3.16 release is the high contrast accessible theme. Both the shell and the toolkit have received attention in the HC department. One noteworthy aspect of the theme is the icons. To guarantee a decent amount of contrast for an icon against any background, back in the GNOME 2 days we solved it by “double stroking” every shape. The term double stroke comes from a special case, when a shape that was open, having only an outline, would get an additional inverted-color outline. Most of the time it was a white outline of a black silhouette, though.

Fuzzy doublestroke PNGs of the old HC theme

In the new world, we actually treat icons the same way we treat text. We can adjust for the best contrast by controlling the color at runtime. We do this the same way we’ve done it for symbolic icons, using an embedded CSS stylesheet inside SVG icons. And in fact we are using the very same symbolic icons for the HC variant. You would be right to argue that there are specific needs for high contrast, but in reality the majority of the double stroked icons in HC were already direct conversions of their symbolic counterparts.

Crisp recolorable SVGs of the post 3.16 world

While a centralized theme that overrides all applications never seemed like a good idea, as the application icon is part of its identity and should be distributed and maintained alongside the actual app, the process of creating a high contrast variant of an icon was extremely cumbersome and required quite a bit of effort. With the changes in place for both the toolkit and the shell, it’s now far more reasonable to ask applications to include a symbolic/high contrast variant of their app icon. I’ll be spending my time transforming the existing double stroke assets into symbolic ones, but if you are an application author, please look into providing a scalable stencil variant of your app icon as well. Thank you!

Audi Quattro

Winter is definitely losing its battle, and last weekend we had some fun filming with my new folding Xu Gong v2 quad.

Audi Quattro from jimmac on Vimeo.

Making of GNOME 3.14

The release of GNOME 3.14 is slowly approaching, so I stole some time from actual design work and created this little promo to show what goes into a release that probably isn’t immediately obvious (and a large portion of it doesn’t even make it in).

Watch on Youtube

I’d like to thank all the usual suspects that keep the wheels spinning, Matthias, Benjamin and Allan in particular. The crown goes to Lapo Calamandrei though, because the amount of work he’s done on Adwaita this cycle will really benefit us in the next couple of releases. Thanks everyone, 3.14 will be a great release*!

* I keep saying that every release, but you simply feel it when you’re forced to log in to your “old” GNOME session rather than jhbuild.

Open Flight Controllers

In my last multirotor themed entry I gave an insight into the magical world of flying cameras. I also made a bit of a promise to write about the open source flight controllers that are out there. Here are a few that I had the luck of laying my hands on. We’ll start with some acro FCs, which have a very different purpose from the proprietary NAZA I started on. These are meant for fast and acrobatic flying, not for flying your expensive cameras on a stabilized gimbal. Keep in mind, I’m still fairly inexperienced, so I don’t want to go into specifics and provide my settings just yet.

Blackout: Potsdam from jimmac on Vimeo.

CC3D

The best thing to be said about CC3D is that while being aimed at acro pilots, it’s relatively newbie friendly. The software is fairly straightforward. Getting the Qt app built, setting up the radio, tuning motors and tweaking gains is not going to make your eyes roll the way APM’s ground station would (more on that in a future post, maybe). The defaults are reasonable and help you achieve a maiden flight rather than a maiden crash. Updating to the latest firmware over the air is seamless.

A large number of receivers and connection methods are supported: not only the classic PWM, or the more reasonable “one cable” CPPM method, but even Futaba’s proprietary SBUS can be used with CC3D. I’ve flown it with the Futaba 8J, 14SG and even the Phantom radio (I actually quite like the compact receiver, and the sticks on the TX feel good. Maybe it’s just that it’s what I started on). As you’re mostly going to be flying proximity, range is not an issue, unless you’re dealing with external interference, where a more robust frequency hopping radio would be safer. Without a GPS “break” or even a barometer, losing signal for even a second is fatal. It’s extremely nasty to get a perfect 5.8 video of your unresponsive quad plummeting to the ground :)

Overall it’s a great board and software, and with so much competition the board price has come down considerably recently. You can get non-genuine boards for around EUR20-25 on ebay. You can learn more about CC3D on the openpilot website.

Naze32

Sounding very similar to the popular DJI flight controller, this open board is built around the 32-bit STM32 processor. Theoretically it could be used to fly somewhat larger kites with features like GPS hold. You’re not limited to the popular quad or hexa setups with it either; you can go fully custom by defining your own motor mix. But you’d be stepping into the realm of only a few, and I don’t think I’d trust my camera equipment to a platform that hasn’t been so extensively tested.
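To give an idea of what a motor mix is, here is a hedged Python sketch: each motor gets a row of (throttle, roll, pitch, yaw) weights, and its output is the weighted sum of the stick inputs. The table below is a generic quad-X example of my own, not the Naze32's actual internal values.

```python
# Generic quad-X mix table: one row of (throttle, roll, pitch, yaw)
# weights per motor. Signs are illustrative; real firmware tables vary.
QUAD_X = [
    # thr  roll  pitch  yaw
    (1.0, -1.0,  1.0, -1.0),  # front right
    (1.0, -1.0, -1.0,  1.0),  # rear right
    (1.0,  1.0,  1.0,  1.0),  # front left
    (1.0,  1.0, -1.0, -1.0),  # rear left
]

def mix(throttle, roll, pitch, yaw, table=QUAD_X):
    """Return one output per motor, clamped to [0, 1]."""
    outputs = []
    for t, r, p, y in table:
        v = throttle * t + roll * r + pitch * p + yaw * y
        outputs.append(max(0.0, min(1.0, v)))
    return outputs
```

A "really custom" frame just means supplying a different table, one row per motor, which is why such setups end up flown by only a few people.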

Initially I didn’t manage to get the cheap acro variant ideal for the minis, so I got the ‘bells & whistles’ edition, only missing the GPS module. The mag compass and air pressure barometer is already on the board, even though I found no use for altitude hold (BARO). You’ll still going to worry about momentum and wind so reaching for those goggles mid flight is still not going to be any less difficult than just having it stabilized.

If you don't count some YouTube videos, there's not a lot of handholding for the Naze32; people assume you have prior experience with similar FCs. There are multiple choices of configuration tools, but I went for the most straightforward one: the Google Chrome/Chromium Baseflight app. No compiling necessary. It's quite bare bones, which I liked a lot. A few reasonably styled, aligned boxes and a CLI are way easier to navigate than the non-searchable table with bubblegum styling that APM provides, for example.

One advanced technique that caught my eye is ESC calibration, as the typical process is super flimsy and tedious. To set the full range of speeds based on your radio, you usually need to provide power to the RX and set the top and bottom throttle levels for each ESC individually. With this FC, you can actually set the throttle levels from the CLI, calibrating all ESCs at the same time. Very clever and super useful.

Another great feature is that you can have up to three settings profiles, depending on the load, wind conditions and the style you're going for. Typically when flying proximity, between trees and under park benches, you want very responsive controls at the expense of fluid movement. On the other hand, if you plan on going up and fast, pretending to be a plane (or a bird), you really need that fluid, non-jittery movement. It's not a setting you change mid-flight using up a channel, but rather something you choose before arming.

To do it, you hold the throttle down and yaw to the left, and with the elevator/aileron stick you choose the mode: left is for preset 1, up for preset 2 and right for preset 3. Going down with the pitch will recalibrate the IMU. It's good to solder on a buzzer that will help you find a lost craft when you trigger it with a spare channel (it can beep on low voltage too). The same buzzer beeps when selecting profiles as well.

As for actual flying characteristics, the raw rate mode, which is a little tricky to master (and I still have trouble flying 3rd person with it), is very solid; it feels like a much larger craft, very stable. There's also quite a treat in the form of HORI mode, where you get stabilized flight (the kite levels itself when you don't provide controls) but no limit on the angle, so you're still free to do flips. I can't say I've mastered PID tuning enough to really get the kind of control over the aircraft I would want. Regardless of tweaking the control characteristics, you won't get a nice fluid video flying HORI or ANGLE mode, as the self-leveling will always do a little jitter to compensate for wind or inaccurate gyro readings, which seems not to be there when flying rate. Stabilizing the footage in post gets rid of it mostly, but not perfectly:

Minihquad in Deutschland

You can get the plain acro version for about EUR30 which is an incredible value for a solid FC like this. I have a lot of practice ahead to truly get to that fluid fast plane-like flight that drew me into these miniquads. Check some of these masters below:

APM and Sparky next time. Or perhaps you’d be more interested in the video link instead first? Let me know in the comments.

Update: Turns out the Naze32 supports many serial protocols apart from CPPM, such as Futaba SBUS and Graupner SUMD.

Almost half a million downloads per month

In a regular month, without a release, blender.org serves 430,000 downloads directly from the download page. This doesn't count people who get the sources, nor release candidates, nor the other websites that offer Blender.

The image below can also be viewed as a pdf here.

Also added below: the full one-year stats of registered downloads!

Screen Shot 2015-04-02 at 17.56.14

Screen Shot 2015-04-03 at 10.24.48

JdLL 2015

Presentation and conferencing

Last weekend, in the Salle des Rancy in Lyon, GNOME folks (Fred Peters, Mathieu Bridon and myself) set up our booth at the top of the stairs, the space graciously offered by Ubuntu-FR and Fedora being a tad small. The JdLL were starting.

We gave away a few GNOME 3.14 Live and install DVDs (more on that later), discussed much-loved features, and hated bugs, and how to report them. A very pleasant experience all-in-all.



On Sunday afternoon, I gave a small presentation about GNOME's 15 years: talking about the upheaval, dragging kernel drivers and OS components kicking and screaming to work as their APIs say they should, presenting GNOME 3.16's new features and teasing about upcoming GNOME 3.18 ones.

During the Q&A, we had a few folks more than interested in support for tablets and convertible devices (such as the Microsoft Surface, and Asus T100). Hopefully, we'll be able to make the OS support good enough for people to be able to use any Linux distribution on those.

Sideshow with the Events box

Due to scheduling errors on my part, we ended up with the "v1" events box for our booth. I made a few changes to the box before we used it:

  • Removed the 17" screen, and replaced it with a 21" widescreen one with built-in speakers. This is useful when we can't set up the projector because of the lack of walls.
  • Upgraded the machine to 1GB of RAM, thanks to my hoarding of old parts.
  • Bought a French keyboard and removed the German one (with missing keys), cleaned up the UK one (which still uses IR wireless).
  • Threw away GNOME 3.0 CDs (but kept the sleeves that don't mention the minor version). You'll need to take a sharpie to the small print on the back of the sleeve if you don't fill it with an OpenSUSE CD (we used Fedora 21 DVDs during this event).
  • Triaged the batteries. Office managers, get this cheap tester!
  • The machine's Wi-Fi was unstable, causing hard locks (please test again if you use a newer version of the kernel/distribution). We tried to get onto the conference network through the wireless router, and installed DD-WRT on it as the vendor firmware didn't allow that.
  • The Nokia N810 and N800 tablets will be going to kernel developers who are working on Nokia's old Linux devices and upstreaming drivers.
The events box is still in Lyon, until I receive some replacement hardware.

The machine is 7 years old (nearly 8!) and only had 512MB of RAM. After the 1GB upgrade, it was usable, and many people were impressed by the speed of GNOME on a legacy machine like that (probably more so than by a brand new one stuttering because of a driver bug, for example).

This makes you wonder what the use for "lightweight" desktop environments is, when a lot of the features are either punted to helpers that GNOME doesn't need or not implemented at all (old CPU and no 3D driver is pretty much the only use case for those).

I'll be putting a small SSD into the demo machine, to give it another speed boost. We'll also need a new padlock, after an emergency metal-saw attack was necessary on Sunday morning. Five different folks tried to open the lock with the code read off my email, to no avail. Did we accidentally change the combination? We'll never know.

New project, ish

For demo machines, especially newly installed ones, you'll need some content to demo applications. This is my first attempt at uniting GNOME's demo content for release notes screenshots, with some additional content that's free to re-distribute. The repository will eventually move to gnome.org, obviously.

Thanks

The new keyboard and mouse, monitor, padlock, and SSD (and my time) were graciously sponsored by Red Hat.

Krita 2.9.2 Released

It’s April! We’ve got another bug-fix and polish release of Krita! Here are the improvements:

  • Make the eraser end of the stylus erase by default
  • Make krita remember the presets chosen for each stylus and stylus end
  • Don’t show the zoom level on-canvas message while loading
  • Fix initialization of the multi-brush axis
  • Add some more kickstarter backers to the about box
  • Fix memory leak when loading presets (and a bunch more memory leaks)
  • Fix crashes related to progress reporting when running a g’mic action
  • Add a toggle button to hide/show the filter selection tree in the filter dialog
  • Fix a focus bug that made it hard to edit e.g. layer names when activating the editor in the docker with a tablet stylus
  • Fix geometry of the toolbox on startup in some cases
  • Fix lock proportions in the free transform tool when locking aspect ratio
  • Add an option to hide the docker titlebars
  • Update the resource manager lists after loading a resource bundle
  • Make the resource manager look for bundles by default
  • Make Krita start faster by only loading images for the references docker when the references docker is visible
  • Fix a crash in the g’mic docker when there’s no preview widget selected
  • On switching images, show the selected layer in the layer box, not the bottom one
  • Show the selected monitor profile in the color management settings page instead of the default one
  • Make the Image Split dialog select the right export file type
  • Fix saving and loading masks for file layers
  • Make the default MDI background darker
  • Fix loading some older .kra files that contained an image name with a number after a /
  • Don’t crash if the linux colord colormanager cannot find a color-managed output device
  • Clean the code following a number of PVS studio code analyzer warnings
  • Add tooltips to the presets in the popup palette
  • Fix a problem where brush presets in the popup palette were sometimes misaligned
  • Fix loading most types of images in the reference docker on Windows

Most work in the past month has gone into the Qt 5 port (Krita now starts, yay! But it doesn’t work yet…) and most of all the Photoshop-style layer style feature. We’ve got most of the effects implemented, and most of the dialog box, too; only the contour selector, the style library selector and the blending mode page are still missing. Loading and saving is still to be done.

But here’s a teaser screenshot of a layer style on a vector layer (layer styles work on all types of layers: paint, vector, group, clone, file…)

 

layerstyle

We hope the loading and saving will be ready in time for 2.9.3!

Note on G’Mic on Windows: Lukas, David and Boudewijn are trying to figure out how to make G’Mic stable on Windows. The 32-bit 2.9.1 Windows build doesn’t include G’Mic at all. The 64-bit build does, and on a large enough system, most of the filters are stable. We’re currently trying different compilers because it seems that most problems are caused by Microsoft Visual Studio 2012 generating buggy code. We’re working like crazy to figure out how to fix this, but please, for now, on 64-bit Windows treat G’Mic as entirely experimental. We still haven’t managed to find a combination of compilers that will let us build Krita and G’Mic and make it work reliably.

If you’ve got experience cross-compiling from Linux to Windows and want to help out: that’s about the last thing we haven’t done. I’ve tried to create a cross-compilation setup, but got stuck on making Qt build with OpenGL support for Windows on Linux. If you can help, please join us on #krita.

Note for Windows users with an Intel GPU: if Krita shows a black screen after opening an image, you need to update your Intel graphics drivers. This isn’t a bug in Krita, but in Intel’s OpenGL support. Update to 10.18.14 or later. Most ultrabooks and the Cintiq Companion can suffer from outdated drivers.

Note on OSX: Krita on OSX is still experimental and not suitable for real work. Many features are still missing. Krita will only work on Mavericks and up.

Downloads

OSX:

Fun with a Surface Pro 3

Microsoft's Surface Pro 3 is, or should be, pretty much a perfect sketchbook device. Nice aspect ratio, high resolution, pen included. Of course, the screen is glossy and the pen isn't a Wacom. But when it was cheap with the keyboard cover thrown in for free I got one anyway -- as a test device for Krita on Windows.

In the end, hardware-wise, it's nice and light, it just looks good, the keyboard doesn't feel too bad, actually, and it's got home, end, page-up and page-down keys, a trick that Dell hasn't managed. The kickstand is rather meh -- it's sharp, hard to open and doesn't feel secure on my lap. Still, not a big problem.

Windows

I can handle Windows these days, even Windows 8; it's OSX that I truly despise working with. However, when I got home and switched it on, I got my first shock... After the usual Windows setup sequence, not only did the device refuse to install the 70 or so critical updates, all https traffic was broken too!

Turns out that the setup sequence picked the wrong timezone, and that in turn broke everything: with the clock hours off, TLS certificates appear not-yet-valid or expired, so https fails and updates won't install. Now, I might be a bit of an idiot, but I knew for sure that I had chosen the Netherlands as location, US English as language, and US English as keyboard. And besides, if your average Linux distribution can automatically set the right timezone, Windows should be able to, too! Since then, I've used the restore option a couple of times, and it seems to be really hit and miss whether the setup sequence understands this set of choices!

N-Trig

So, I restored to blank and started over. This time it did install everything. So, I installed the N-Trig wintab driver, Krita X64 2.9.0 and gave it a whirl. Everything worked perfectly. Cool!

Then I tried some other software, and when we released 2.9.1, I restored to blank, installed 86 updates and Krita 2.9.1. Now the 64-bit version of Krita didn't work properly with the pen anymore: no pressure sensitivity. That's something others have reported as well; the 32-bit version worked fine... Now, N-Trig releases two versions of their drivers, 32-bit and 64-bit, and they claim that you need to use the x86 driver with x86 apps and the x64 driver with x64 apps, but... No matter which driver is installed, x86 Krita and x86 Photoshop CS2 work fine. I don't have any other x64 wintab-based drawing application to test with, but all of this sounds very suspicious.

Especially when I noticed that on their other wintab driver download page, they will send you to a driver that fits your OS: 32 bit driver for a 32 bit Windows, 64 bit driver for a 64 bit Windows. I'm not totally sure that the N-Trig people actually know what they're doing.

And then I tested Krita 2.9.1 on another Windows 8.1 device with a more modern (1024 levels of pressure) N-Trig pen. With the 64-bit driver, both x64 and x86 versions of Krita had pressure sensitivity. But... The driver reports the wrong number of sensitivity levels: in one place it claims 256, like the old pens, in another 1024, so Krita gets confused about that, unless we hack around it.
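To illustrate why a wrongly reported maximum matters (a hypothetical sketch, not Krita's or N-Trig's actual code): applications typically normalize raw pen pressure against the maximum the driver claims to support, so a bad claim either saturates or compresses the whole pressure curve.

```python
# Hypothetical sketch: how a driver reporting the wrong number of pressure
# levels confuses an application. Apps map raw pressure to 0.0..1.0 by
# dividing by the maximum the driver claims to support.

def normalize_pressure(raw, claimed_max):
    """Normalize a raw pressure reading using the driver's claimed maximum."""
    return min(raw / claimed_max, 1.0)

# A pen that really has 1024 levels (raw 0..1023), behind a driver that
# claims only 256 levels (max 255), saturates far too early:
wrong = normalize_pressure(512, 255)    # clamped to 1.0 at only half pressure
right = normalize_pressure(512, 1023)   # ~0.5, as it should be
print(wrong, right)
```

Working around it means detecting which of the two claimed maxima matches the readings actually coming in, which is exactly the kind of hack mentioned above.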

Still, this must be an issue with the drivers installed on the Surface Pro 3, and I haven't yet managed to make it work again, despite a couple of other wipes and reinstalls. Copying the drivers from the Intel N-Trig laptop to the Surface Pro also doesn't make a difference. Installing the Surface Pro 3 app doesn't make a difference. I guess I'd best mail the N-Trig developers.

For now, it's irritating as heck that I can't run the 64 bits version of Krita on the Surface Pro 3.

Linux

OpenSUSE 13.2 boots on it, and the pen even moves the cursor around. No idea about pressure sensitivity, because while the trackpad is fine, the keyboard doesn't work yet, so I couldn't actually install anything. Google suggests that kernel patches might help there, but it's the pen that I care about, and nobody mentions that.

As a drawing tablet

A good artist can do great stuff with pretty much anything. People can do wonders with an iPad and a capacitive stylus with no pressure sensitivity. I'm not much of an artist. I used to believe I could draw, but that was twenty-five years ago, and is another story besides. But I spent an evening with Krita 2.9.1 and the Surface to get a feel for how it differs from, e.g., the Wacom Hybrid Companion.

The screen is glossy and very smooth. That's not as nice as the Companion's matte screen, but the high resolution makes up for it. It's really hard to see pixels. But... Every time you touch the screen with the pen, it deforms a little bit and becomes a little bit lighter. That's pretty distracting!

The pen also lags. Not just in Krita, not just when painting, but when hovering over buttons or menus, the cursor is always a bit behind.

There's no parallax, which is really irritating on the Cintiq. There's also no calibration needed; the pen is accurate into the deepest corners. That is pretty awesome, especially since it lets me use the zoom slider with the pen. The pen really feels very accurate. In Krita at least: in Photoshop CS2, it's very hit and miss whether a quick stroke will register.

The tablet itself is nice and light, the form factor pretty much perfect. I can hold it in one hand, draw with the other one, in portrait mode, for quite some time. Try that with a Companion!

The Hybrid Companion doesn't get hot, though I heard that the Windows Companion does. The Surface certainly does get hot! But the heat is located in one place, not where I hold the tablet; it was only noticeable because I rest my hand on the screen while drawing. Note for other Surface users: disabling flicks in the control center makes life much easier!

For Krita

I have had reports that the virtual keyboard didn't work from a Cintiq Companion user, but it works just fine for me on the Surface. I can enter values in all the dockers, sliders and dialogs in desktop mode.

As for the N-Trig pen, we support pressure sensitivity, but the two buttons are weird things and I don't know yet how to work with them. Which means, no quick-access palette, no quick panning yet.

In any case, we need to do a lot of work on the Sketch gui to make it as usable as, e.g., Art Flow on Android. Heck, we need to do a lot of work on Sketch to bring it up to 2.9! But a good tablet-mode gui is something I really want to work on.

One-antlered stags

[mule deer stag with one antler] This fellow stopped by one evening a few weeks ago. He'd lost one of his antlers (I'd love to find it in the yard, but no luck so far). He wasn't hungry; just wandering, maybe looking for a place to bed down. He didn't seem to mind posing for the camera.

Eventually he wandered down the hill a bit, and a friend joined him. I guess losing one antler at a time isn't all that uncommon for mule deer, though it was the first time I'd seen it. I wonder if their heads feel unbalanced.
[two mule deer stags with one antler each]

Meanwhile, spring has really sprung -- I put a hummingbird feeder out yesterday, and today we got our first customer, a male broad-tailed hummer who seemed quite happy with the fare here. I hope he stays around!

March 31, 2015

Introducing the darktable app store

Today we are happy to announce a big new feature that we will not only ship with the big 2.0 release later this year but also with our next point release, 1.6.4, which is due in about a week: even more darkroom modules!

One of the big strengths of darktable has always been its varied selection of modules to tweak your image. However, that has also been one of the main points of criticism: too much, too many and too complicated to grasp. To make it easier for users to deal with the flood of tools, darktable has had the “more modules” list for many years. It changed its appearance a few times, we added module categories, we allowed users to select favorite modules, and all of that has proven useful. Still, there have always been people who approached us with great ideas for new modules, especially since we moved to GitHub a while ago with its powerful pull request system, yet we couldn't accept many of them. Some were not that great codewise, some didn't really fit our product vision, and then there were some that looked nice and certainly benefited some users, but we felt they weren't generic enough to justify polluting our module list even more. Of course this was a bad situation: after all, these people had invested quite some time into providing us with a new feature that we turned down. No one likes to waste their time.

In the default state the new dialog doesn't clutter the gui


After initial discussions about this topic at last year's LGM in Leipzig, we started working on a solution later that year, and now we feel confident presenting you the new module store. Think of it as an in-game app store (for all you gamers out there), or Firefox's add-on system. Instead of bloating the list of modules shipped with darktable, you can now easily browse for exciting new features from within the darkroom GUI, install new modules on the fly (or uninstall them if you don't like the result), and even see previews of a module's effect on the currently opened image. We are certain that you will like this.

Module developers will also like it. Writing new image modules has always been quite easy: clone the darktable sources, create one new C file and add it to the CMake system. But that was only the first part; after all, you wanted to allow people to use your work in the end. So you either had to convince us to include your module in the official darktable release (with the problems outlined above), or provide a patched version of darktable for people to compile themselves. In theory you could also have provided a binary package, or just the module compiled into a shared library for people to copy to their install directory, but we have never seen anyone take that route. With our new module system this becomes easier. Instead of creating a patched version of darktable, you can now make use of our Module Developers Package, which contains all the required header files and a CMake stub to quickly compile your module into a shared library that can be used with a stock installation of darktable. And since we will release these files under an LGPL license, you could even write non-free modules. Once you are happy, you can submit your code for us to review (this step is still manual, to prevent malicious code being shipped), and once we approve it, everyone can install the module.

Currently there is just the one module in store. More to come!


All of the things described until now are implemented already and will be part of the next releases. Once the system has proven to work reliably, we also plan to allow developers to make some money with their work, as an incentive to attract more and better developers to our community. We are currently evaluating which payment models would work best; at the moment PayPal looks like a strong contender, but we are open to suggestions.

In case you are curious how it's implemented: it is based on the GHNS system that is already used by KDE and others, and it might eventually also be merged with the styles you can find on http://dtstyle.net/. On the server side there is a continuous integration system (Jenkins in our case) that recompiles everything whenever something changes, taking care of the different target architectures and operating systems with their dependencies. And if you don't want to wait for the release, just try a development build; the code is merged and ready to be tested. As a first example we moved the new “color reconstruction” module from the regular darktable installation to the store.

PS: Thou shalt not believe what got posted on the Internet on April 1st.

March 30, 2015

Interview with Odysseas Stamoglou

scifi-girl-800

Could you tell us something about yourself?

My name is Odysseas Stamoglou, I am an artist born in Athens, Greece. I am currently based in Vienna, working as a freelance illustrator on board games, historical covers and comics.

I am also a musician, giving concerts and composing music for film and documentaries.

Do you paint professionally, as a hobby artist, or both?

Professionally, and… both! To me, making pictures is an important way of communication and understanding. I take in my real world experience, and these images come out. So I would say that I became a professional artist because of my need to stay in touch and evolve this essential communication skill.

What genre(s) do you work in?

My favourite themes usually involve things that cannot be photographed. So, science fiction, fantasy and history are the fields I mostly work on.

Whose work inspires you most — who are your role models as an artist?

There are so many amazing artists out there that I would have to make a long list. I am absolutely fascinated by the classic renaissance and romantic painters, and I greatly appreciate the work of many modern illustrators and industrial designers. However, my great inspiration and teacher is real life experience, as this is the source where we all draw from.

What made you try digital painting for the first time?

My illustration teacher introduced me to digital painting, back when I was studying in 2004.

What makes you choose digital over traditional painting?

Speed and the absence of material restrictions would be the obvious answer for me. I have worked a lot with acrylics and watercolors, inks and oils. Although computers still lack the freedom, the organic randomness and the “dirtiness” of natural media, digital painting is the obvious choice for a professional illustrator.

You don’t need to “wait for the paint to dry”, and you can explore your designs, colour combinations and even different versions of your painting forever. Plus, computers are like a big melting pot where you can throw paintings, drawings, photographs… anything you like to enhance your image. However, more often than not I find myself working traditionally up to a certain point and then finishing the image digitally. Each medium has its strengths and weaknesses, but in the end they are just tools.

PHOOW-Rome

How did you find out about Krita?

I found Krita a couple of years ago, when I was randomly looking at what’s new in the open source world.

What was your first impression?

I believe the first version I got my hands on was 2.4 or 2.5. Well, it was obvious that I couldn’t use this for my work, but it looked promising and from that point on I started to keep an eye on the progress of Krita.

What do you love about Krita?

I love the fact that it is open source. The ability to directly communicate with the creators of my painting tool and actively participate in a great and positive community dedicated to making it better, is invaluable.

The enthusiasm and the energy the developers are constantly putting in it, and also the frequency in which they are releasing updates and fixes is simply amazing.

What do you think needs improvement in Krita? Also, anything that you really hate?

Krita has come a long way since I started working with it around two years ago. All the main features are great and constantly refined and improved.

Since last year I have been confident enough with this software, and I trust it enough to use it for 90% of my professional painting work.

I would like Krita to be more stable (occasional crashes, overall performance) and less distracting (it seems like Krita still thinks that the dockers and the canvas belong to separate worlds: one has to click on the canvas every time in order to trigger any canvas-related actions).

But hate? No, I don’t think there is room for hate here.

In your opinion, what sets Krita apart from the other tools that you use?

Well, there are programs out there that emulate watercolors better, or have better overall performance. But Krita really feels like home to me. I can customize it to my heart’s delight, and the toolset of Krita combines many great ideas in one place. To name a few of my favourites:

– The brush engine
– The amazing ruler assistant tool; you guys brought a much-needed feature the digital world was lacking until now.
– The file layers are simply ingenious.
– The favourites popup

If you had to pick one favourite of all your work done in Krita so far, what would it be? Why this particular picture?

It would be the SciFi girl! I guess, because it was a quick, spontaneous, simple, funny and meaningful picture that I made during a random evening last year and which I enjoyed very much.

What techniques and brushes did you use in it?

The simplest available on the market, really… A pencil-like brush for a quick sketch, and most of the rest was done using a custom-made pastel brush. In some minor places I used selections and airbrushing.

Where can people see more of your work?

They can visit my website: www.odysseus-art.com, and get updates on twitter and facebook.

Anything else you’d like to share?

Thank you for inviting me to this interview. I am very enthusiastic about Krita, and proud to call it my favourite painting tool. You guys rock!

First Krita Book Published in Japan

Today we got mail from Kayoko Matsumoto that the first book about Krita has gone into print!

krita-book-japan

You can get the book from Amazon or Mynavi. Kayoko has promised to send the Krita Foundation a copy, too, and we’re waiting with bated breath for it to arrive. And you can download a sample chapter, too.

March 29, 2015

An API is only as good as its documentation.

Your APIs are only as good as the documentation that comes with them. Invest time in getting docs right. — @rubenv on Twitter

If you are in the business of shipping software, chances are high that you’ll be offering an API to third-party developers. When you do, it’s important to realize that APIs are hard: they don’t have a visible user interface and you can’t know how to use an API just by looking at it.

For an API, it’s all about the documentation. If an API feature is missing from the documentation, it might as well not exist.

Sadly, very few developers enjoy the tedious work of writing documentation. We generally need a nudge to remind us about it.

At Ticketmatic, we promise that anything you can do through the user interface is also available via the API. Ticketing software rarely stands alone: it’s usually integrated with e.g. the website or some planning software. The API is as important as our user interface.

To make sure we consistently document our API properly, we’ve introduced tooling.

Similar to unit tests, you should measure the coverage of your documentation.

After every change, every bit of the API surface (a method, a parameter, a result field, …) is checked and cross-referenced with the documentation, to make sure a proper description and instructions are present.

The end result is a big documentation coverage report which we consider as important as our unit test results.

Constantly measure and improve the documentation coverage metric.
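The core of such tooling fits in a few lines. A sketch under illustrative assumptions (the dotted item names and data format are invented for the example, not Ticketmatic's actual implementation):

```python
# Hypothetical documentation-coverage check: every item of the API surface
# (methods, parameters, result fields) is cross-referenced with the docs,
# and a coverage ratio plus a list of offenders is reported.

def doc_coverage(api_surface, docs):
    """Return (coverage_ratio, undocumented_items)."""
    undocumented = [item for item in api_surface
                    if not docs.get(item, "").strip()]
    covered = len(api_surface) - len(undocumented)
    return covered / len(api_surface), undocumented

# Example API surface, flattened to dotted names:
api = ["events.create", "events.create.name",
       "events.list", "events.list.filter"]
docs = {
    "events.create": "Create a new event.",
    "events.create.name": "Human-readable event name.",
    "events.list": "List all events.",
    # "events.list.filter" has no description yet
}

ratio, missing = doc_coverage(api, docs)
print(f"documentation coverage: {ratio:.0%}, missing: {missing}")
```

Run after every change, failing the build (or at least flagging the report) when the ratio drops is what turns the metric into the nudge described above.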

More than just filling fields

A very important thing was pointed out while I was circulating these thoughts on Twitter.

Shaun McCance (of GNOME documentation fame) correctly remarked:

@rubenv I’ve seen APIs that are 100% documented but still have terrible docs. Coverage is no good if it’s covered in crap. — @shaunm on Twitter

Which is 100% correct. No amount of metrics or tooling will guarantee the quality of the end result. Keeping quality up is a moral obligation shared by everyone on the team, and it can never be replaced with software.

Nevertheless, getting a slight nudge to remind you of your documentation duties never hurts.


Comments | @rubenv on Twitter

March 27, 2015

The World is Wrong!!!

"No shit, the world is wrong! It ain't got a clue! But here, in this one minute video, I will explain what's wrong and how I have discovered the right way."

As soon as you encounter something that can be reduced to the above, it's a pretty fair sign that the author doesn't know what he's talking about. Anyone who thinks they're so unique that they can come up with something nobody else has thought of before is, with likelihood bordering on certainty, deluded. The world is full of smart people who have encountered the problem before. Any problem.

That means that a video that's been doing the rounds on how "Computer Color is Broken" is a case in point. The author brings us his amazing discovery that linear rgb is better than non-linear, except that everyone who's been working on computer graphics has known all of that, for ages. It's textbook stuff. It's not amazing, it's just the way maths work. The same with the guy who some years ago proved that all graphics applications scale the WRONG way! "And how much did you pay for your expensive graphics software?", he asked. "Eh? You sucker, you got suckered", he effectively said, "but fortunately, here's me to put you right!" It's even the same thing, actually.

Whether it's about color, graphics, or finding the Final Synthesis between Aristotle and Plato, this is my rule of thumb: people who think everyone else in the world is wrong are certainly wrong. (Also, Basque really is not the mother of all languages.)

And when it comes to color blending or image scaling: with Krita you have the choice. Use 16-bit RGB with a linear color profile, and you won't see the artefacts; or don't use it, and get the artefacts you were probably already used to, and were counting on for the effect you're trying to achieve. We've had support for that for a decade now.

Note: I won't link to any of these kookisms. They get enough attention already.

Hide Google's begging (or any other web content) via a Firefox userContent trick

Lately, Google is wasting space at the top of every search with a begging plea to be my default search engine.

[Google begging: Switch your default search engine to Google] Google already is my default search engine -- that's how I got to that page. But if you don't have persistent Google cookies set, you have to see this begging every time you do a search. (Why they think pestering users is the way to get people to switch to them is beyond me.)

Fortunately, in Firefox you can hide the begging with a userContent trick. Find the chrome directory inside your Firefox profile, and edit userContent.css in that directory. (Create a new file with that name if you don't already have one.) Then add this:

#taw { display: none !important; }

Restart Firefox, do a Google search and the begs should be gone.

In case you have any similar pages where there's pointless content getting in the way and you want to hide it: what I did was to right-click inside the begging box and choose Inspect element. That brings up Firefox's DOM inspector. Mouse over various lines in the inspector and watch what gets highlighted in the browser window. Find the element that highlights everything you want to remove -- in this case, it's a div with id="taw". Then you can write CSS to address that: hide it, change its style or whatever you're trying to do.
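If you're worried about a short id like #taw accidentally matching an element on some other site, userContent.css rules can be scoped to a single domain with an @-moz-document block (a standard Firefox stylesheet feature; the #taw selector is the one found via Inspect element above):

```css
/* userContent.css: hide Google's search-engine plea, on google.com only */
@-moz-document domain(google.com) {
    #taw { display: none !important; }
}
```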

You can even use Inspect element to remove elements immediately. That won't help you prevent them from showing up later, but it can be wonderful if you need to use a page that has an annoying blinking ad on it, or a mis-designed page that has images covering the content you're trying to read.

March 26, 2015

Skin Retouching with Wavelets on PIXLS.US

Anyone who has been reading here for a little bit knows that I tend to spend most of my skin retouching time with wavelet scales. I've written about it originally here, then revisited it as part of an Open Source Portrait tutorial, and even touched upon the theme one more time (sorry about that - I couldn't resist the “touching” pun).

Because I haven’t possibly beat this horse dead enough yet, I have now also compiled all of those thoughts into a new post over on PIXLS.US that is now pretty much done:

PIXLS.US: Skin Retouching with Wavelet Decompose

Of course, we get another view of the always lovely Mairi before & after (from an older tutorial that some may recognize):


As well as the lovely Nikki before & after:


Even if you've read the other material before this might be worth re-visiting.

Don't forget, Ian Hex has an awesome tutorial on using luminosity masks in darktable, as well as the port of my old digital B&W article! These can all be found at the moment on the Articles page of the site.

The Other Blog

Don't forget that I also have a blog I'm starting up over on PIXLS.US that documents what I'm up to as I build the site and news about new articles and posts as they get published! You can follow the blog on the site here:


There's also an RSS feed there if you use a feed reader.

Write For PIXLS

I am also entertaining ideas from folks who might like to publish a tutorial or article for the site. If you might be interested feel free to contact me with your idea! Spread the love! :)

Call for Content Blenderart Magazine #47

We are ready to start gathering up tutorials, making of articles and images for Issue # 47 of Blenderart Magazine.

The theme for this issue is What’s your Passion?

Everyone has an area or topic that motivates them to try harder, work longer and push beyond their comfort zone. What is your singular creative joy? Is there an area of Blender you love to explore? A project you want to start and complete? Have you already completed an amazing new project this year? Any challenges you have started or completed? Any projects you completed or started last year that you want to explore further or have led you to new areas of exploration in your art?

What are you working on? We would love to hear about it and cheer you on. We are looking for articles on:

    • Challenges (30 day, year long, organized or personal)

    • New or on going projects

    • Areas of Blender that you want to or are currently exploring

*warning: lack of submissions could result in an entire issue of strange sculpting experiments, half-completed models and a gallery filled with random bad sketches by yours truly…. :P …… goes off to start filling sketchbook with hundreds of stick figures, just in case. :P

Articles

Send in your articles to sandra
Subject: “Article submission Issue # 47 [your article name]”

Gallery Images

As usual you can also submit your best renders based on the theme of the issue. The theme of this issue is “New Beginnings”. Please note if the entry does not match with the theme it will not be published.

Send in your entries for gallery to gaurav
Subject: “Gallery submission Issue # 47″

Note: Image size should be of 1024x (width) at max.

Last date of submissions May 5, 2015.

Good luck!
Blenderart Team

Blenderart Mag Issue #46 now available

Welcome to Issue #46, “FANtastic FANart”

In this issue, we pay tribute to the creative geniuses that inspire us to attempt creative masterpieces of our own. The “FANtastic Fanart” gathered within is sure to inspire you to practice your skills. So settle in with your favorite beverage and check out all the fun goodies we have gathered for you.

Table of Contents:

Modeling Clay Characters

  • Final Inspection
  • Making of DJ. Boyie
  • Back to the 80’s
  • Tribute to Pierre Gilhodes
  • Minas Tirith

And Lot More…

March 25, 2015

Summary of Enabling New Contributors Brainstorm Session

Photo of Video Chat

So today we had a pretty successful brainstorm about enabling new contributors in Fedora! Thank you to everyone who responded to my call for volunteers yesterday – we were at max capacity within an hour or two of the post! :) It just goes to show this is a topic a lot of folks are passionate about!

Here is a quick run-down of how it went down:

Video Conference Dance

We tried to use OpenTokRTC but had some technical issues (we were hitting an upper limit and people were getting booted, and some folks could see/hear some participants but not others). So we moved on to the backup plan – BlueJeans – and that worked decently.

Roleplay Exercise: Pretend You’re A Newbie!

Watch this part of the session starting here!

For about the first 30 minutes, we brainstormed using a technique called Understanding Chain to roleplay as if we were new contributors trying to get started in Fedora, noting all of the issues we would run into. We started thinking about how we would even begin to contribute, and then about what barriers we might run up against as we continued on. Each idea / thought / concept got its own “sticky note” (thanks to Ryan Lerch for grabbing some paper and making some large-scale stickies). I would write the note out, Ryan would tack it up, and Stephen would transcribe it into the meeting piratepad.

Photo of the whiteboard with all of the sticky notes taped to it.

Walkthrough of the Design Hubs Concept Thus Far

Watch this part of the session starting here!

Next, I walked everyone through the design hubs concept and full set of mockups. You can read up more on the idea at the original blog post explaining the idea from last year. (Or poke through the mockups on your own.)

Screenshot of video chat: Mo explaining the Design Hubs Concept

Comparing Newbie Issues to Fedora Hubs Offering

Watch this part of the session starting here!

We spent the remainder of our time walking through the list of newbie issues we'd generated during the first exercise and comparing them to the Fedora Hubs concept. For each issue, we asked these sorts of questions:

  • Is this issue addressed by the Fedora Hubs design? How?
  • Are there enhancements / new features / modifications we could make to the Fedora Hubs design to better address this issue?
  • Does Fedora Hubs relate to this issue at all?

We came up with so many awesome ideas during this part of the discussion. We had ideas inline with the issues that we’d come up with during the first exercise, and we also had random ideas come up that we put in their own little section on the piratepad (the “Idea Parking Lot.”)

Here’s a little sampling of ideas we had:

  • Fedorans with the most cookies are widely helpful figures within Fedora, so maybe their profiles in hubs could be marked with some special thing (a “cookie monster” emblem???) so that new users can find folks with a track record of being helpful more easily. (A problem we’d discussed was new contributors having a hard time tracking down folks to help them.)
  • User hub profiles can serve as the centralized, canonical user profile for them across Fedora. No more outdated info on wiki user pages. No more having to log into FAS to look up information on someone. (A problem we’d discussed was multiple sources for the same info and sometimes irrelevant / outdated information.)
  • The web IRC client we could build into hubs could have a neat affordance of letting you map an IRC nick to a real life name / email address with a hover tool tip thingy. (A problem we’d discussed was difficulty in finding people / meeting people.)
  • Posts to a particular hub on Fedora hubs are really just content aggregated from many different data sources / feeds. If a piece of data goes by that proves to be particularly helpful, the hub admins can “pin” it to a special “Resources” area attached to the hub. So if there’s great tutorials or howtos or general information that is good for group members to know, they can access it on the team resource page. (A problem we’d discussed was bootstrapping newbies and giving them helpful and curated content to get started.)
  • Static information posted to the hub (e.g. basic team metadata, etc.) could have a set “best by” date and some kind of automation could email the hub admins every so often (every 6 months?) and ask them to re-read the info and verify if it’s still good or update it if not. (The problem we’d discussed here was out-of-date wiki pages.)
  • Having a brief ‘intake questionnaire’ for folks creating a new FAS account to get an idea of their interests and to be able to suggest / recommend hubs they might want to follow. (Problem-to-solve: a lot of new contributors join ambassadors and aren’t aware of what other teams exist that could be a good place for them.)

There’s a lot more – you can read through the full piratepad log to see everything we came up with.

Screenshot of video chat discussion

Next Steps

Watch this part of the session starting here!

Here’s the next steps we talked about at the end of the meeting. If you have ideas for others or would like to claim some of these items to work on, please let me know in the comments!

  1. We’re going to have an in-person meetup / hackfest in early June in the Red Hat Westford office. (mizmo will plan agenda, could use help)
  2. We need a prioritized requirements list of all of the features. (mizmo will work on this, but could use help if anybody is interested!)
  3. The Fedora apps team will go through the prioritized requirements list when it’s ready and give items an implementation difficulty rating.
  4. We should do some research on the OpenSuSE Connect system and how it works, and Elgg, the system they are using for the site. (needs a volunteer!)
  5. We should take a look at the profile design updates to StackExchange and see if there’s any lessons to be learned there for hubs. (mizmo will do this but would love other opinions on it.)
  6. We talked about potentially doing another video chat like this in late April or early May, before the hackfest in June.
  7. MOAR mockups! (mizmo will do, but would love help :))

How to Get Involved / Resources

So we have a few todos listed above that could use a volunteer or that I could use help with. Here’s the places to hang out / the things to read to learn more about this project and to get involved:

Please let us know what you think in the comments! :)

GNOME 3.16 is out!

Did you see?

It will obviously be in Fedora 22 Beta very shortly.

What happened since 3.14? Quite a bit, and a number of unfinished projects will hopefully come to fruition in the coming months.

Hardware support

After quite a bit of back and forth, automatic rotation for tablets will not be included directly in systemd/udev, but instead in a separate D-Bus daemon. The daemon has support for other sensor types, Ambient Light Sensors (ColorHug ALS amongst others) being the first ones. I hope we have compass support soon too.

Support for the Onda v975w's touchscreen and accelerometer is now upstream. Work is ongoing for the Wi-Fi driver.

I've started some work on supporting the much hated Adaptive keyboard on the X1 Carbon 2nd generation.

Technical debt

In the last cycle, I've worked on triaging gnome-screensaver, gnome-shell and gdk-pixbuf bugs.

The first got merged into the second, the second got plenty of outdated bugs closed, and priorities re-evaluated as a result.

I wrangled old patches and cleaned up gdk-pixbuf. We still have architectural problems in the library for huge images, but at least we're at a state where we know what the problems are, rather than being buried in Bugzilla.

Foundation building

A couple of projects got started that haven't reached maturity yet. I'm pretty happy that we're able to use gnome-books (part of gnome-documents) today to read comic books. ePub support is coming!



Grilo saw plenty of activity. The oft requested "properties" page in Totem is closer than ever, so is series grouping.

In December, Allan and I met with the ABRT team, and we've landed some changes we discussed there, including a simple "Report bugs" toggle in the Privacy settings, with a link to the OS' privacy policy. The gnome-abrt application had a facelift, but we got somewhat stuck on technical problems, which should get solved in the next cycle. The notifications were also streamlined and simplified.



I'm a fan

Of the new overlay scrollbars, and the new gnome-shell notification handling. And I'm cheering on another new app in 3.16, GNOME Calendar.

There's plenty more new and interesting stuff in the release, but I would just be duplicating much of the GNOME 3.16 release notes.

March 24, 2015

How to turn the Chromebook Pixel into a proper developer laptop

Recently I spent about a day installing Fedora 22 + jhbuild on a Chromebook and left it unplugged overnight. The next day I turned it on with a flat battery, grabbed the charger, and the coreboot BIOS would not let me do the usual Ctrl+L boot-to-SeaBIOS trick. I had to download the ChromeOS image to an SD card and reflash the ChromeOS image, and that left me without any of the Fedora workstation I'd so lovingly created the day before. This turned a $1500 laptop with a gorgeous screen into a liability that I couldn't take anywhere for fear of losing all my work, again. The need to do Ctrl+L every time I rebooted was just crazy.

I didn’t give up that easily; I need to test various bits of GNOME on a proper HiDPI screen and having a loan machine sitting in a bag wasn’t going to help anyone. So I reflashed the BIOS, and now have a machine that boots straight into Fedora 22 without any of the other Chrome stuff getting in the way.

Reflashing a BIOS on a Chromebook Pixel isn't for the faint of heart, but this is the list of materials you'll need:

  • Set of watchmakers screwdrivers
  • Thin plastic shim (optional)
  • At least a 1 GB USB flash drive
  • An original Chromebook Pixel
  • A BIOS from here for the Pixel
  • A great big dollop of courage

This does involve deleting the entire contents of your Pixel, so back up anything you care about before you start, unless it's hosted online. I'm also not going to help you if you brick your machine; caveat emptor and all that. So, let's get cracking:

  • Boot the Chromebook into Recovery Mode (Escape+Refresh at startup), then do Ctrl+D, then Enter; wait ~5 minutes while the Pixel reflashes itself
  • Power down the machine, remove AC power
  • Remove the rubber pads from the underside of the Pixel, remove all 4 screws
  • Gently remove the adhesive from around the edges, and use the smallest shim or screwdriver you have to release the 4 metal catches from the front and sides. You can leave the glue on the rear as this will form a hinge you can use. Hint: The tabs have to be released inwards, although do be aware there are 4 nice lithium batteries that might kinda explode if you slip and stab them hard with a screwdriver.
  • Remove the BIOS write protect screw AND the copper washer that sits between the USB drives and the power connector. Put it somewhere safe.
  • Gently close the bottom panel, but not enough for the clips to pop in. Turn over the machine and boot it.
  • Do enough of the registration so you can logon. Then logout.
  • Do the Ctrl+Alt+[->] (really F2) trick to get to a proper shell and log in as the chronos user (no password required). If you try to do it while logged in via the GUI it will not work.
  • On a different computer, format the USB drive as EXT4 and copy the squashfs.img, vmlinuz and initrd.img files there from your nearest Fedora mirror.
  • Also copy the correct firmware file from johnlewis.ie
  • Unmount the USB drive and remove
  • Insert the USB drive in the Pixel and mount it to /mnt
  • Make a backup of the firmware using /usr/sbin/flashrom -r /mnt/backup.rom
  • Flash the new firmware using /usr/sbin/flashrom -w /mnt/the_name_of_firmware.rom
  • IMPORTANT: If there are any warnings or errors you should reflash with the backup; if you reboot now you’ll have a $1500 brick. If you want to go back to the backup copy just use /usr/sbin/flashrom -w /mnt/backup.rom, but let's just assume it went well for now.
  • /sbin/shutdown -h now, then remove power again
  • Re-open the bottom panel, which should be a lot easier this time, and re-insert the BIOS write washer and screw, but don’t over-tighten.
  • Close the bottom panel and insert the clips carefully
  • Insert the 4 screws and tighten carefully, then convince the sticky feet to get back into the holes. You can use a small screwdriver to convince them a little more.
  • Power the machine back on and it will automatically boot to the BIOS. Woo! But not done yet.
  • It will by default boot into JELTKA which is “just enough Linux to kexec another”.
  • When it looks like it's hung, type “root” and press Enter, and it'll log into a root prompt.
  • Mount the USB drive into /mnt again
  • Do something like kexec -l /mnt/vmlinuz --initrd=/mnt/initrd.img --append=stage2=hd:/dev/sdb1:/squashfs.img
  • Wait for the Fedora installer to start, then configure a network mirror from which you can download packages. You'll have to set up Wi-Fi before you can download package lists.

This was all done from memory, so feel free to comment if you try it and I’ll fix things up as needed.

Fedora Design Team Update

Fedora Design Team Logo

Fedora Design Team Meeting 24 March 2015

Completed Tickets

Ticket 361: Fedora Reflective Bracelet

This ticket involved a simple design for a reflective bracelet for bike riders to help them be more visible at night. The imprint area was quite small and the ink only one color, so this was fairly simple.

Tickets Open For You to Take!

One of the things we require to join the design team is that you take and complete a ticket. We have one ticket currently open, waiting for you to claim it and contribute some design work for Fedora :):

Discussion

Fedora 22 Supplemental Wallpapers Vote Closes Tomorrow!

Tomorrow (Wednesday, March 25) is the last day to get in your votes for Fedora 22’s supplemental wallpapers! Vote now! (All Fedora contributors are eligible to vote.)

(Oh yeah, don’t forget – You’ll get a special Fedora badge just for voting!)

Fedora 22 Default Wallpaper Plan

A question came up about what our plan is for the Fedora 22 wallpaper – Ryan Lerch created the mockups that we shipped / will ship in the alpha and beta, and the feedback we've gotten on these is positive so far, so we'll likely not change direction for Fedora 22's default wallpaper. The pattern is based on the one Ryan designed for the Fedora.next product artwork featured on getfedora.org.

However, it is never too early to think about F23 wallpaper. If you have some ideas to share, please share them on the design team list!

2015 Flock Call for Papers is Open!

Flock is going to be at the Hyatt Regency in Rochester, New York. The dates are August 12 to August 15.

Gnokii proposed that we figure out which design team members are intending to go, and perhaps we could plan out different sessions for a design track. Some of the sessions we talked about:

  • Design Clinic – bring your UI or artwork or unfiled design team ticket to an open “office hours” session with design team members and get feedback / critique / help.
  • Wallpaper Hunt – design team members with cameras could plan a group photoshoot to get nice pictures that could make good wallpapers for F23 (riecatnor suggested Highland Park as a good potential place to go).
  • Badge Design Workshop – riecatnor is going to propose this talk!

I started a basic wiki page to track the Design Team Flock 2015 presence – add your name if you’re intending to go and your ideas for talk proposals so we can coordinate!

(I will message the design-team list with this idea too!)

See you next time?

Our meetings are every 2 weeks; we send reminders to the design-team mailing list and you can also find out if there is a meeting by checking out the design team category on FedoCal.

Enabling New Contributors Brainstorm Session

You (probably don’t, but) may remember an idea I posted about a while back when we were just starting to plan out how to reconfigure Fedora’s websites for Fedora.next. I called the idea “Fedora Hubs.”

Some Backstory

The point behind the idea was to provide a space specifically for Fedora contributors that was separate from the user space, and to make it easier for folks who are non-packager contributors to Fedora to collaborate by providing them explicit tools to do that. Tools for folks working in docs, marketing, design, ambassadors, etc., to help enable those teams and also make it easier for them to bring new contributors on-board. (I’ve onboarded 3 or 4 people in the past 3 months and it still ain’t easy! It’s easy for contributors to forget how convoluted it can be since we all did it once and likely a long time ago.)

Well, anyway, that hubs idea blog post was actually almost a year ago, and while we have a new Fedora project website, we still don’t have a super-solid plan for building out the Fedora hub site, which is meant to be a central place for Fedora contributors to work together:

The elevator pitch is that it’s kind of like a cross between Reddit and Facebook/G+ for Fedora contributors to keep on top of the various projects and teams they’re involved with in Fedora.

There are some initial mockups that you can look through here, and a design team repo with the mockups and sources, but that’s about it, and there hasn’t been a wide or significant amount of discussion about the idea or mockups thus far. Some of the thinking behind what would drive the site is that we could pull in a lot of the data from fedmsg, and for the account-specific stuff we’d make API calls to FAS.

Let’s make it happen?


“Unicorn – 1551″ by j4p4n on openclipart.org. Public Domain.

Soooo…. Hubs isn't going to magically happen like unicorns, so we probably need to figure out if this is a good approach for enabling new contributors and, if so, how it is going to work, who is going to work on it, what kind of timeline we are looking at – etc. So I'm thinking we could do a bit of a design-thinking / brainstorm session to figure this out. I want to bring together representatives of different teams within Fedora – particularly those teams who could really use a tool like this to collaborate and bring new contributors on board – and have them in this session.

For various reasons, logistically I think Wednesday, March 25 is the best day to do this, so I’m going to send out invites to the following Fedora teams and ask them to send someone to participate. (I realize this is tomorrow – ugh – let’s try anyway.) Let me know if I forgot your team or if you want to participate:

  • Each of the three working groups (for development representation)
  • Infrastructure
  • Websites
  • Marketing
  • Ambassadors
  • Docs
  • Design

I would like to use OpenTokRTC for the meeting, as it’s a FLOSS video chat tool that I’ve used to chat with other Fedorans in the past and it worked pretty well. I think we should have an etherpad too to track the discussion. I’m going to pick a couple of structured brainstorming games (likely from gamestorming.com) to help guide the discussion. It should be fun!

The driving question for this brainstorm session is going to be:

How can we lower the bar for new Fedora contributors to get up and running?

Let me know if this question haunts you too. :)

This is the time we’re going to do this:

  • Wednesday March 25 (tomorrow!) from 14:00-16:00 GMT (10 AM-12 PM US Eastern).

Since this is short-notice, I am going to run around today and try to personally invite folks to join and try to build a team for this event. If you are interested let me know ASAP!

(‘Wait, what’s the rush?’ you might ask. I’m trying to have a session while Ryan Lerch is still in the US Eastern timezone. We may well end up trying another session for after he’s in the Australian timezone.)


Update

I think we’re just about at the limit of folks we can handle from both the video conferencing pov and the effectiveness of the brainstorm games I have planned. I have one or two open invites I’m hoping to hear back from but otherwise we have full representation here including the Join SIG so we are in good shape :) Thanks Fedora friends for your quick responses!

March 23, 2015

OpenRaster Python Plugin

Thanks to developers Martin Renold and Jon Nordby who generously agreed to relicense the OpenRaster plugin under the Internet Software Consortium (ISC) license (it is a permissive license, it is the license preferred by the OpenBSD project, and also the license used by brushlib from MyPaint). Hopefully other applications will be encouraged to take another look at implementing OpenRaster.

[Edit: The code might possibly also be useful to anyone interested in writing a plugin for other file formats that use ZIP containers or XML, for example XML Paper Specification (XPS), JavaFX (.fxz and the unzipped .fxd), or even OpenDocument.]

The code has been tidied to conform to the PEP8 style guide, with only 4 warnings remaining, and they are all concerning long lines of more than 80 characters (E501).

The OpenRaster files are also far tidier. For some bizarre reason the Python developers chose to make things ugly by default and neglected to include any line breaks in the XML. Thanks to Fredrik Lundh and Effbot.org for the very helpful pretty-printing code. The code has also been changed so that many optional tags are included if and only if they are needed, so if you ever do need to read the raw XML it should be a lot easier.
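For reference, the Effbot-style pretty-printing boils down to a small recursive helper that sets each element's text and tail before serializing. This is my own minimal sketch of that well-known recipe, not the plugin's exact code:

```python
# Effbot-style XML pretty-printing: give every element a text/tail
# containing a newline plus indentation, then serialize as usual.
import xml.etree.ElementTree as ET

def indent(elem, level=0):
    """Recursively add line breaks and two-space indentation in place."""
    pad = "\n" + level * "  "
    if len(elem):
        if not elem.text or not elem.text.strip():
            elem.text = pad + "  "            # opens the first child line
        for child in elem:
            indent(child, level + 1)
            if not child.tail or not child.tail.strip():
                child.tail = pad + "  "       # aligns the next sibling
        elem[-1].tail = pad                   # closes back at parent level
    elif level and (not elem.tail or not elem.tail.strip()):
        elem.tail = pad

root = ET.fromstring("<image><stack><layer name='a'/></stack></image>")
indent(root)
print(ET.tostring(root, encoding="unicode"))
```

Running indent() on the tree before serializing is enough to get one element per line, which makes the stored stack.xml readable in any text editor.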

There isn't much for normal users unfortunately. The currently selected layer is marked in the OpenRaster file, and also if a layer is edit locked. If you are sending files to MyPaint it will correctly select the active layer, and recognize which layers were locked. (No import back yet though.) Unfortunately edit locking (or "Lock pixels") does require version 2.8 so if there is anyone out there stuck on version 2.6 or earlier I'd be interested to learn more, and I will try to adjust the code if I get any feedback. [Edit: No feedback but I fixed it anyway, to keep the plugin compatible with version 2.6.]
I've a few other changes that are almost ready but I'm concerned about compatibility and maintainability so I'm going to take a bit more time before releasing those changes.

The latest code is available from the OpenRaster plugin gitorious project page. [Edit: ... but only until May 2015.]

WebKitGTK+ 2.8.0

We are excited and proud to announce WebKitGTK+ 2.8.0: your favorite web rendering engine, now faster, even more stable, and with a bunch of new features and improvements.

Gestures

Touch support is one of the most important features missing since WebKitGTK+ 2.0.0. Thanks to the GTK+ gestures API, it's now more pleasant to use a WebKitWebView on a touch screen. For now only the basic gestures are implemented: pan (for scrolling by dragging from any point of the WebView), tap (handling clicks with the finger) and zoom (for zooming in/out with two fingers). We plan to add more touch enhancements like kinetic scrolling, overshoot feedback animation, text selections, long press, etc. in future versions.

HTML5 Notifications

notifications

Notifications are transparently supported by WebKitGTK+ now, using libnotify by default. The default implementation can be overridden by applications to use their own notifications system, or simply to disable notifications.

WebView background color

There’s new API now to set the base background color of a WebKitWebView. The given color is used to fill the web view before the actual contents are rendered. This will not have any visible effect if the web page contents set a background color, of course. If the web view parent window has a RGBA visual, we can even have transparent colors.

webkitgtk-2.8-bgcolor

A new WebKitSnapshotOptions flag has also been added to be able to take web view snapshots over a transparent surface, instead of filling the surface with the default background color (opaque white).

User script messages

The communication between the UI process and the Web Extensions is something that we have always left to the users, so that everybody can use their own IPC mechanism. Epiphany and most of the apps use D-Bus for this, and it works perfectly. However, D-Bus is often too much for simple cases where there are only a few messages sent from the Web Extension to the UI process. User script messages make these cases a lot easier to implement and can be used from JavaScript code or using the GObject DOM bindings.

Let’s see how it works with a very simple example:

In the UI process, we register a script message handler using the WebKitUserContentManager and connect to the “script-message-received” signal, using the handler name as the signal detail:

webkit_user_content_manager_register_script_message_handler (user_content, 
                                                             "foo");
g_signal_connect (user_content, "script-message-received::foo",
                  G_CALLBACK (foo_message_received_cb), NULL);

Script messages are received in the UI process as a WebKitJavascriptResult:

static void
foo_message_received_cb (WebKitUserContentManager *manager,
                         WebKitJavascriptResult *message,
                         gpointer user_data)
{
        char *message_str;

        /* get_js_result_as_string() stands for a helper that extracts
         * the JavaScript value as a newly allocated string using the
         * JavaScriptCore C API. */
        message_str = get_js_result_as_string (message);
        g_print ("Script message received for handler foo: %s\n", message_str);
        g_free (message_str);
}

Sending a message from the web process to the UI process using JavaScript is very easy:

window.webkit.messageHandlers.foo.postMessage("bar");

That will send the message “bar” to the registered foo script message handler. It’s not limited to strings: we can pass any JavaScript value to postMessage() that can be serialized. There’s also a convenient API to send script messages in the GObject DOM bindings:

webkit_dom_dom_window_webkit_message_handlers_post_message (dom_window, 
                                                            "foo", "bar");

 

Who is playing audio?

WebKitWebView now has a read-only boolean property, is-playing-audio, which is set to TRUE when the web view is playing audio (even if it’s a video) and to FALSE when the audio is stopped. Browsers can use this to provide visual feedback about which tab is playing audio; Epiphany already does that :-)
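
Since it’s a regular GObject property, a browser can simply watch the notify signal for it (the callback names below are mine):

```c
#include <webkit2/webkit2.h>

/* Sketch: react whenever the is-playing-audio property changes,
 * e.g. to show or hide a speaker icon in the tab label. */
static void
is_playing_audio_changed_cb (WebKitWebView *web_view,
                             GParamSpec    *pspec,
                             gpointer       user_data)
{
        g_print ("Tab is %splaying audio\n",
                 webkit_web_view_is_playing_audio (web_view) ? "" : "not ");
}

static void
watch_audio_state (WebKitWebView *web_view)
{
        g_signal_connect (web_view, "notify::is-playing-audio",
                          G_CALLBACK (is_playing_audio_changed_cb), NULL);
}
```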

ephy-is-playing-audio

HTML5 color input

The color input element is now supported by default, so instead of rendering a text field for manually entering the color as a hexadecimal color code, WebKit now renders a color button that, when clicked, shows a GTK+ color chooser dialog. As usual, the public API allows you to override the default implementation to use your own color chooser; MiniBrowser uses a popover, for example.
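
A minimal sketch of overriding the default chooser via the “run-color-chooser” signal (a real implementation would show its own widget; here we just answer the request immediately with a fixed color):

```c
#include <webkit2/webkit2.h>

/* Returning TRUE means the application handles the request itself;
 * returning FALSE falls back to the default GTK+ dialog. */
static gboolean
run_color_chooser_cb (WebKitWebView             *web_view,
                      WebKitColorChooserRequest *request,
                      gpointer                   user_data)
{
        GdkRGBA red = { 1.0, 0.0, 0.0, 1.0 };

        webkit_color_chooser_request_set_rgba (request, &red);
        webkit_color_chooser_request_finish (request);
        return TRUE;
}
```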

mb-color-input-popover

APNG

APNG (Animated PNG) is a PNG extension for creating animated PNGs, similar to GIF but much better, supporting 24-bit images and transparency. Since 2.8, WebKitGTK+ can render APNG files. You can check how it works with the Mozilla demos.

webkitgtk-2.8-apng

SSL

The POODLE vulnerability fix introduced compatibility problems with some websites when establishing the SSL connection. Those problems were actually server-side issues that incorrectly banned SSL 3.0 record packet versions, but they could be worked around in WebKitGTK+.

WebKitGTK+ already provided a WebKitWebView signal to notify about TLS errors when loading, but only for the connection of the main resource in the main frame. However, it’s still possible for subresources to fail due to TLS errors when they use a different connection than the main resource. WebKitGTK+ 2.8 gains the WebKitWebResource::failed-with-tls-errors signal, emitted when a subresource load fails because of an invalid certificate.
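
A sketch of watching for these failures (callback names are mine): connect the second function below to the web view’s “resource-load-started” signal, so every new WebKitWebResource gets a handler for its “failed-with-tls-errors” signal:

```c
#include <webkit2/webkit2.h>

/* Log subresource TLS failures as they happen. */
static void
tls_errors_cb (WebKitWebResource   *resource,
               GTlsCertificate     *certificate,
               GTlsCertificateFlags errors,
               gpointer             user_data)
{
        g_warning ("TLS error loading %s (flags: 0x%x)",
                   webkit_web_resource_get_uri (resource),
                   (guint) errors);
}

/* Connect to each resource as its load starts. */
static void
resource_load_started_cb (WebKitWebView     *web_view,
                          WebKitWebResource *resource,
                          WebKitURIRequest  *request,
                          gpointer           user_data)
{
        g_signal_connect (resource, "failed-with-tls-errors",
                          G_CALLBACK (tls_errors_cb), NULL);
}
```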

Cipher suites based on RC4 are now disallowed when performing TLS negotiation, because RC4 is no longer considered secure.

Performance: bmalloc and concurrent JIT

bmalloc is a new memory allocator added to WebKit to replace TCMalloc. Apple had already been using it in the Mac and iOS ports for some time, with very good results, but it needed some tweaks to work on Linux. WebKitGTK+ 2.8 now also uses bmalloc, which drastically improved overall performance.

The concurrent JIT was not enabled in the GTK+ (and EFL) ports for no apparent reason. Enabling it also had an impressive impact on performance.

Both performance improvements were very noticeable in the performance bot:

webkitgtk-2.8-perf

 

The first jump, on 11th Feb, corresponds to the bmalloc switch, while the second jump, on 25th Feb, is when the concurrent JIT was enabled.

Plans for 2.10

WebKitGTK+ 2.8 is an awesome release, but the plans for 2.10 are quite promising.

  • More security: mixed content will be blocked by default for most resource types. New API will be provided for managing mixed content.
  • Sandboxing: seccomp filters will be used in the different secondary processes.
  • More performance: FTL will be enabled in JavaScriptCore by default.
  • Even more performance: this time on the graphics side, by using the threaded compositor.
  • Plugin blocking API: new API to provide full control over the plugin loading process, allowing plugins to be blocked or unblocked individually.
  • Implementation of the Database process, to bring back IndexedDB support.
  • Editing API: a full editing API to allow using a WebView in editable mode with all editing capabilities.

March 22, 2015

My FreeCAD talk at FOSDEM 2015

This is a video recording of the talk I gave at FOSDEM this year. The PDF slides are here. Enjoy! FreeCAD talk at FOSDEM 2015 from Yorik van Havre on Vimeo

March 21, 2015

Tumblr showcase blog: Made with MyPaint

Check out our showcase blog on Tumblr, Made with MyPaint! It reblogs all the best new art on Tumblr where our little program was used.

Screengrab of the made-with-mypaint blog on tumblr

Go follow it now! It’s curated by one of our own, and we’d love suggestions for awesome SFW art you’d like to include: just ask via the blog.

March 20, 2015

"GNOME à 15 ans" at the JdLL in Lyon



Next weekend, I will give a short presentation on GNOME's fifteen years at the JdLL.

If the delivery gods are kind, GNOME should also have a presence in the community village.

Call for submissions: Libre Graphics magazine 2.4

cfs_lgmag24

Issue 2.4: Capture

Data capture sounds like a thoroughly dispassionate topic. We collect information from peripherals attached to computers, turning keystrokes into characters, turning clicks into actions, collecting video, audio and images of varying quality and fidelity. Capture in this sense is a young word, devised in the latter half of the twentieth century. For the four hundred years previous, the word suggested something with far higher stakes, something more passionate and visceral. To capture was to seize, to take, like the capture of a criminal or of a treasure trove. Computation has rendered capture routine and safe.

But capture is neither simply an act of forcible collection nor of technical routine. The sense of capture we would like to approach in this issue is gentler, more evocative. Issue 2.4 of Libre Graphics magazine, the last in volume 2, looks at capture as the act of encompassing, emulating and encapsulating difficult things, subtle qualities. Routinely, we capture with keyboards, mice, cameras, audio recorders, scanners, browsing histories, keyloggers. We might capture a fleeting expression in a photo, or a personal history in an audio recording. Our methods of data capture, though they may seem commonplace at first glance, offer opportunities to catch moments.

We’re looking for work, both visual and textual, exploring the concept of capture, as it relates to or is done with F/LOSS art and design. All kinds of capture, metaphorical or literal, are welcome. Whether it’s a treatise on the politics of photo capture in public places, a series of photos taken using novel F/LOSS methods, or documentation of a homebrew 3D scanner, any riff on the idea of capture is invited. We encourage submissions for articles, showcases, interviews and anything else you might suggest. Proposals for submissions (no need to send us the completed work right away) can be sent to submissions@libregraphicsmag.com. The deadline for submissions is May 11th, 2015.

Capture is the fourth and final issue in volume two of Libre Graphics magazine. Libre Graphics magazine is a print publication devoted to showcasing and promoting work created with Free/Libre Open Source Software. We accept work about or including artistic practices which integrate Free, Libre and Open software, standards, culture, methods and licenses.

March 19, 2015

Hints on migrating Google Code to GitHub

Google Code is shutting down. They've sent out notices to all project owners suggesting they migrate projects to other hosting services.

I moved all my personal projects to GitHub years ago, back when Google Code still didn't support git. But I'm co-owner on another project that was still hosted there, and I volunteered to migrate it. I remembered that being very easy back when I moved my personal projects: GitHub had a one-click option to import from Google Code. I assumed (I'm sure you know what that stands for) that it would be just as easy now.

Nope. Turns out GitHub no longer has any way to import from Google Code: it tells you it can't find a repository when you give it the address of Google's SVN repository.

Google's announcement said they were providing an exporter to GitHub. So I tried that next. I had the new repository ready on GitHub -- under the owner's account, not mine -- and I expected Google's exporter to ask me for the repository.

Not so. As soon as I gave it my OAuth credentials, it immediately created a new repository on GitHub under my name, using the name we had used on Google Code (not the right name, since Google Code project names have to be globally unique while GitHub project names don't).

So I had to wait for the export to finish; then, on GitHub, I went to our real repository, and did an import there from the new repository Google had created under my name. I have no idea how long that took: GitHub's importer said it would email me when the import was finished, but it didn't, so I waited several hours and decided it was probably finished. Then I deleted the intermediate repository.

That worked fine, despite being a bit circuitous, and we're up and running on GitHub now.

If you want to move your Google Code repository to GitHub without the intermediate step of making a temporary repository, or if you don't want to give Google OAuth access to your GitHub account, here are some instructions (which I haven't tested) on how to do the import via a local copy of the repo on your own machine, rather than going directly from Google to GitHub: krishnanand's steps for migrating Google Code to GitHub

Help Making a Krita Master Class Possible!

The Belgium Blender User Group is currently holding a crowdfunding campaign to make it possible to organize four master classes about 3D and digital art in Brussels. Four internationally renowned artists: David Revoy, Sarah Laufer, François Gastaldo and François Grassard will teach in-depth about creating art using free graphics software: Krita and Blender.

David Revoy will be teaching Krita, with a focus on concept art and the challenges of digital painting — and he’ll introduce the new features we just released with Krita 2.9! Sarah Laufer has founded her own animation studio, regularly gives Blender courses in San Jose, and is now, of course, in the Netherlands for Project Gooseberry. She will focus on animating characters. François Gastaldo is an Open Shading Language expert and that’s the topic of his master class, while François Grassard from University Paris-8 has led the transition to free tools: Krita, Blender, Natron. He will talk about his experiences, but also about camera tracking, 3D integration and particle systems.

The organizers are committed to publishing videos afterwards. The Master Classes will be given in French, but the intention is to add English subtitles.

The funding is meant to defray the travel expenses of the four speakers: if the campaign goes over budget, the surplus will be divided between the Krita Foundation and the Blender Foundation.

March 17, 2015

Announce: Entangle “Charm” release 0.7.0 – an app for tethered camera control & capture

I am pleased to announce that a new release, 0.7.0, of Entangle is available for download from the usual location:

  http://entangle-photo.org/download/

The main features introduced in this release are a brand new logo, a plugin for automated capture of image sequences and the start of a help manual. The full set of changes is:

  • Require GLib >= 2.36
  • Import new logo design
  • Switch to using zanata.org for translations
  • Set default window icon
  • Introduce an initial help manual via yelp
  • Use shared library for core engine to ensure all symbols are exported to plugins
  • Add framework for scripting capture operations
  • Work around camera busy problems with Nikon cameras
  • Add a plugin for repeated capture sequences
  • Replace progress bar with spinner icon

The Entangle project has a bit of a quantum physics theme in its application name and release code names, so the primary inspiration for the new logo was the interference patterns of (electromagnetic) waves. As well as being an alternate representation of an interference pattern, the connecting filaments can also be seen as representing the (USB) cable connecting camera and computer. The screenshot of the about dialog shows the new logo used in the application:

Logo

Introducing ColorHug ALS

Ambient light sensors let us change the laptop panel brightness so that you can still see your screen when it’s sunny outside, but dim it when the ambient room light level is lower, to save power.

colorhug-als1-large

I’ve spent a bit of time over the last few months designing a small OpenHardware USB device that acts as an ambient light sensor. It’s basically an uncalibrated ColorHug1 design with a less powerful processor, but speaking a subset of the same protocol, so all the firmware update and test tools just work out of the box.

colorhug-als2-large

The sensor itself is a very small (12x22mm) printed circuit board that inserts directly into a spare USB socket. It only sticks out about 9mm from the edge of the laptop as most of the PCB actually gets pushed into the USB slot.

colorhug-als-pcb-large

ColorHugALS can currently control the backlight when running the colorhug-backlight utility. The Up/Down header buttons do the same as the hardware BrightnessUp and BrightnessDown keys. You can still set the absolute backlight level yourself, so you’re in control right now; the ALS then modifies the level around what you just set over the coming minutes. The brightness is modified using an exponential moving average, which makes the brightness changes smooth and unnoticeable on hardware with enough brightness levels.

colorhug-backlight-large

We also use the brightness value at startup as what you consider “normal”, so the algorithm tries to stay out of the way. When we’ve got some defaults that work well and have been tested, the aim is to push this into gnome-control-center and gnome-settings-daemon for GNOME 3.18, so that no additional software is required.

I’ve got 42 devices in stock now. Buy one here!

March 16, 2015

Krita 2.9.1 Released

The first bugfix release for Krita 2.9 is out! There are now builds for Windows, OSX and CentOS 6 available. While bug fixing is going on unabated, Dmitry has started working on the Photoshop layer style Kickstarter feature, too: drop shadows already work, and the rest of the layer styles are coming. The goal is to have this feature done for Krita 2.9.2, which should be out next month. And we’re working on a new Kickstarter project!

  • Fix the outline cursor on CentOS 6.5
  • Update G’Mic to the very latest version (but the problems on Windows are still not resolved)
  • Improve the layout of the filter layer/mask dialog’s filter selector
  • Fix the layout of the pattern selector for fill layers
  • Remove the dependency on QtUiTools, which means the layer metadata editors now work even when the installed Qt version differs from the version Krita was built against
  • Fix a bug that happened when switching between workspaces
  • Fix bug 339357: the time dynamic didn’t start reliably
  • Fix bug 344862: a crash when opening a new view with a tablet stylus
  • Fix bug 344884: a crash when selecting too small a scale for a brush texture
  • Fix bug 344790: don’t crash when resizing a brush while drawing
  • Fix setting the toolbox to only one icon wide
  • Fix bug 344478: random crash when using liquify
  • Fix bug 344346: Fix artefacts in fill layers when too many parallel updates happened
  • Fix bug 184746: merging two vector layers now creates a vector layer instead of rendering the vectors to pixels
  • Add an option to disable the on-canvas notification messages (for some people, they slow down drawing)
  • Fix bug 344243: make the preset editor visible in all circumstances

Note on G’Mic on Windows: Lukas, David and Boudewijn are trying to figure out how to make G’Mic stable on Windows. The 32-bit 2.9.1 Windows build doesn’t include G’Mic at all. The 64-bit build does, and on a large enough system most of the filters are stable. We’re currently trying different compilers, because it seems that most problems are caused by Microsoft Visual Studio 2012 generating buggy code. We’re working like crazy to figure out how to fix this, but please, for now, treat G’Mic on 64-bit Windows as entirely experimental.

Note for Windows users with an Intel graphics board: if Krita shows a black screen after opening an image, you need to update your Intel graphics drivers. This isn’t a bug in Krita, but in Intel’s OpenGL support. Update to 10.18.14 or later. Most Ultrabooks and the Cintiq Companion can suffer from outdated drivers.

Note on OSX: Krita on OSX is still experimental and not suitable for real work. Many features are still missing. Krita will only work on Mavericks and up.

Downloads

OSX:

Interview with Abbigail Ward

Mountains by Abbigail Ward

Would you like to tell us something about yourself?

Hi, my name is Abbigail Ward. I am a published illustrator and fine art student.

Do you paint professionally or as a hobby artist?

I have been drawing as a hobby since I was little but just started doing professional art work in the past couple of years. My first published book Monster Parade, written by Gregory Moss, came out in January, so that’s exciting! Besides that I have been doing small projects like character portraits and album covers.

When and how did you end up trying digital painting for the first time?

I first started digital painting around 9 years ago when my mother bought me a small Wacom tablet for Christmas. I loved that tablet, still have it in fact! Though I use a different tablet now. If I remember correctly, I believe it was digital art on elfwood.com that first made me want to try digital painting. I’ve always loved fantasy art.

What is it that makes you choose digital over traditional painting?

I wouldn’t say that I choose digital over traditional. I am actually going to college for fine arts at the moment and I love both. I will say that I find digital to be a better fit for my illustration work. It’s cheaper and faster and tends to translate better for printing. I also like the experimenting that I can do painting digitally. If I learn how to do something traditionally, I like to see if I can imitate that same technique on the computer. For instance I made a tutorial on how to make a drawing that looks like a pencil drawing. That was fun.

How did you first find out about open source communities? What is your opinion about them?

I can’t say exactly how I first heard of them. But I think they are wonderful! While I have used some FOSS for a few years now, I am still new to learning about the communities themselves. I wish I could do more to contribute, myself, but I don’t know what I could do.

Have you worked for any FOSS project or contributed in some way?

I haven’t contributed to any FOSS projects though I do try to spread the word about the programs I use and have started trying to make tutorials on how I make my art using them.

How did you find out about Krita?

I found out about Krita through David Revoy and looking for alternatives to Corel Painter.

What was your first impression?

Since that was a few years ago, it was slow for me: I was using Krita on an old Windows laptop. Even then I loved it.

What do you love about Krita?

I feel like Krita is the closest fit to what I want in a program. It lets me imitate traditional media or have a more digital look without having to switch programs. A lot of the new features released in 2.9 make it even more efficient.

What do you think needs improvement in Krita? Also, anything that you really hate?

Hrm, most of what I would want to see improved has already been listed as future goals. The main improvement I would want to see would probably be the text box feature. That way I could edit the text without having to go to a different program.

In your opinion, what sets Krita apart from the other tools that you use?

Krita really is more focused on creating images from scratch, without being chained to imitating traditional media as closely as possible. I’ve tried quite a few programs, but Krita is the one that works best for me, maybe because I like both traditional and digital media.

If you had to pick one favourite of all your work done in Krita so far, what would it be?

I can’t really pick a favourite picture. But I have plenty of pictures made with Krita in my gallery. I think if I had to choose one picture to share it might be “Mountains”, since it’s my newest finished work.

What brushes did you use in it?

For that one I used mostly the bristle, knife/flat brushes, and added a canvas texture.

Would you like to share it with our site visitors?

Anyone is free to use it since I uploaded it with a Creative Commons license.

Anything else you’d like to share?

Thanks for being awesome!

March 14, 2015

Making a customized Firefox search plug-in

It's getting so that I dread Firefox's roughly weekly "There's a new version -- do you want to upgrade?" With every new upgrade, another crucial feature I use every day disappears and I have to spend hours looking for a workaround.

Last week, upgrading to Firefox 36.0.1, it was keyword search: the feature where, if I type something in the location bar that isn't a URL, Firefox would instead search using the search URL specified in the "keyword.URL" preference.

In my case, I use Google, but I try to turn off the autocomplete feature, which I find distracting and unhelpful when typing new search terms. (I say "try to" because complete=0 only works sporadically.) I also add the prefix allintext: to tell Google that I only want to see pages that contain my search term. (Why that isn't the default is anybody's guess.) So I set keyword.URL to: http://www.google.com/search?complete=0&q=allintext%3A+ (%3A is URL code for the colon character).

But after "up"grading to 36.0.1, search terms I typed in the location bar took me to Yahoo search. I guess Yahoo is paying Mozilla more than Google is now.

Now, Firefox has a Search tab under Edit->Preferences -- but that just gives you a list of standard search engines' default searches. It would let me use Google, but not with my preferred options.

If you follow the long discussions in bugzilla, there are a lot of people patting each other on the back about how much easier the preferences window is, with no discussion of how to specify custom searches except vague references to "search plugins". So how do these search plugins work, and how do you make one?

Fortunately a friend had a plugin installed, acquired from who knows where. It turns out that what you need is an XML file inside a directory called searchplugins in your profile directory. (If you're not sure where your profile lives, see Profiles - Where Firefox stores your bookmarks, passwords and other user data, or do a systemwide search for "prefs.js" or "search.json" or "cookies.sqlite" and it should lead you to your profile.)

Once you have one plugin installed, it's easy to edit it and modify it to do anything you want. The XML file looks roughly like this:

<SearchPlugin xmlns="http://www.mozilla.org/2006/browser/search/" xmlns:os="http://a9.com/-/spec/opensearch/1.1/">
<os:ShortName>MySearchPlugin</os:ShortName>
<os:Description>The search engine I prefer to use</os:Description>
<os:InputEncoding>UTF-8</os:InputEncoding>
<os:Image width="16" height="16">data:image/x-icon;base64,ICON GOES HERE</os:Image>
<SearchForm>http://www.google.com/</SearchForm>
<os:Url type="text/html" method="GET" template="https://www.google.com/search">
  <os:Param name="complete" value="0"/>
  <os:Param name="q" value="allintext: {searchTerms}"/>
  <!--os:Param name="hl" value="en"/-->
</os:Url>
</SearchPlugin>

There are four things you'll want to modify. First, and most important, os:Url and os:Param control the base URL of the search engine and the list of parameters it takes. {searchTerms} in one of those Param arguments will be replaced by whatever terms you're searching for. So <os:Param name="q" value="allintext: {searchTerms}"/> gives me that allintext: parameter I wanted.

(The other parameter I'm specifying, <os:Param name="complete" value="0"/>, used to make Google stop the irritating autocomplete every time you try to modify your search terms. Unfortunately, this has somehow stopped working at exactly the same time that I upgraded Firefox. I don't see how Firefox could be causing it, but the timing is suspicious. I haven't been able to figure out another way of getting rid of the autocomplete.)

Next, you'll want to give your plugin a ShortName and Description so you'll be able to recognize it and choose it in the preferences window.

Finally, you may want to modify the icon: I'll tell you how to do that in a moment.

Using your new search plugin

[Firefox search prefs]

You've made all your modifications and saved the file to something inside the searchplugins folder in your Firefox profile. How do you make it your default?

I restarted Firefox to make sure it saw the new plugin, though that may not have been necessary. Then I went to Edit->Preferences and clicked on the Search icon at the top. The menu near the top, under Default search engine, is what you want: your new plugin should show up there.

Modifying the icon

Finally, what about that icon?

In the plugin XML file I was copying, the icon line looked like:

<os:Image width="16"
height="16">data:image/x-icon;base64,AAABAAEAEBAAAAEAIABoBAAAFgAAACgAAAAQAAAAIAAAAAEAIAAAAAAAAAAAAAAA
... many more lines like this then ... ==</os:Image>

So how do I take that and make an image I can customize in GIMP?

I tried copying everything after "base64," and pasting it into a file, then opening it in GIMP. No luck. I tried base64 decoding it (you do this with base64 -d filename >outfilename) and reading it in with GIMP. Still no luck: "Unknown file type".

The method I found is roundabout, but works:

  1. Copy everything inside the tag: data:image/x-icon;base64,AA ... ==
  2. Paste that into Firefox's location bar and hit return. You'll see the icon from the search plugin you're modifying.
  3. Right-click on the image and choose Save image as...
  4. Save it to a file with the extension .ico -- GIMP won't open it without that extension.
  5. Open it in GIMP -- a 16x16 image -- and edit to your heart's content.
  6. File->Export as...
  7. Use the type "Microsoft Windows icon (*.ico)"
  8. Base64 encode the file you just saved, like this: base64 yourfile.ico >newfile
  9. Copy the contents of newfile and paste that into your os:Image line, replacing everything after data:image/x-icon;base64, and before </os:Image>

Whew! Lots of steps, but none of them are difficult. (Though if you're not on Linux and don't have the base64 command, you'll have to find some other way of encoding and decoding base64.)

But if you don't want to go through all the steps, you can download mine, with its lame yellow smiley icon, as a starting point: Google-clean plug-in.

Happy searching! See you when Firefox 36.0.2 comes out and they break some other important feature.

The 2:1 Form Factor

At KO GmbH, we did several projects to show off the 2:1 convertible laptop form factor. Krita Gemini and Calligra Gemini are applications that automatically switch from laptop to tablet GUI mode when you switch your device. Of course, one doesn't get that to work without some extensive testing, so here's a little collection of devices showing off all existing (I believe) ways of making a laptop convertible:

There's rip'n'flip, as exemplified by the Lenovo Helix, and arguably by the Surface Pro 3 (with its own twist, the kickstand). There's the bend-over-and-over-and-over model pioneered by the Thinkpad Yoga (but this is an Intel SDP, not a Yoga; all the Yogas are with other ex-KO colleagues), and finally the screen tumbler of the Dell XPS 12.

Every model on the table has its own foibles.

The Helix actually doesn't do the 2:1 automatic switch trick, but that's because it's also the oldest model we've got around. The Helix basically has only one angle between the screen and keyboard, and that's it, and it's fairly upright. The keyboard is pretty good, and the trackpoint is great, of course. In contrast to all the other devices, it also runs Linux quite well. The power button is sort of recessed and really hard to press: it's hard to switch the device on. The built-in Wacom pen is impossible to calibrate correctly, and it just won't track correctly at the screen edges. As a tablet, it's nice and light, but the rip'n'flip mechanism is flawed: it doesn't always re-attach correctly.

The Dell XPS 12 is one of the nicest devices of the four. The screen rotation mechanism looks scary at first, but it works very, very well. It's a nice screen, too. The keyboard is ghastly, though, missing a bunch of essential keys like separate Home, End, Page Up and Page Down. The power button is placed on the left side, and it's just the right thing to play with, mindlessly, while thinking. This leads to the laptop suspending, of course! The device is heavy, too -- too heavy to comfortably use as a tablet. As a laptop, except for the keyboard, it's pretty good. Linux compatibility is weird: you can have either the trackpad properly supported or the wifi adapter, but not both. Not even with the latest Kubuntu! There's no pen, which is a pity...

I cannot talk about what's in the Intel SDP system, because that's under NDA. It's got a pen, just like the Surface Pro 3, and it's nice, light and a good harbinger of things to come. The form factor works fine for me. Some people are bothered by feeling the keys at the back of the device when it's in tablet mode, but I don't care. Tent mode is nice for watching movies, and presentation mode sort of makes it a nice drawing tablet.

The Surface Pro 3 is rather new. I got it as a test system for Krita on Windows. It's the lowest-spec model, because that makes the best test, right? I can use the thing on my lap, with the kickstand, but only if I put my feet up on the table and make long legs... The N-trig pen is sort of fine... It's accurate, there's no parallax as with the Cintiq or Helix, and the pressure levels are fine, too, for my usage that is. But because it's a Bluetooth device, there's a noticeable delay. It's a bit as if you're painting with too-thick oil paint. I never remove the keyboard cover, which, by the way, is perfectly fine to type on. It feels a bit tacky folded back, but that's not a big problem.

So that's it... Four convertible laptops; three have trouble running Linux at the moment, but then, I should do more testing of Krita on Windows anyway. It's where 90% of Krita's user base is, it seems. I like the Dell way of converting best, if only the device weren't so heavy as a consequence. The Helix convertible never gets turned into a pure tablet in practice; that seems to go with the rip'n'flip design, because the same holds for the Surface Pro 3. The back-bendy type of convertible is just fine as well...

March 12, 2015

Memories

I first encountered Terry Pratchett's work in 1986, when Fergus McNeill's Quilled adventure game adaptation of The Colour of Magic was released for the ZX Spectrum. Back then, Fergus was a bigger name in my mind than Terry Pratchett. I enjoyed the game a lot, but couldn't get the book anywhere -- this was 1986, in Oosterhout, the Netherlands, no Internet, so no bookshop carrying any fantasy books in English beyond Lord of the Rings.

When I was in my first year in Leiden, eighteen years old, studying Sinology, a friend of mine and I went to London for a book-buying expedition. Forget about the Tower, the V&A or the National Portrait Gallery. We went for Foyles, The Fantasy Book Center and the British Library. I acquired the full set of Fritz Leiber's "Fafhrd and the Gray Mouser" series, Frank got Lord Dunsany's autobiography, and I got Clark Ashton Smith's collected short stories and Lord Dunsany's Gods of Pegana (straight, apparently, from the rare books locker of the University of Buffalo).

I also bought Mort.

That was the first Terry Pratchett novel I read, and I was hooked. I read and re-read it a dozen times that week.

When I first met Irina, we had overlapping tastes, but very few books in common... The first book I foisted upon her was Equal Rites. I think -- I'm not so sure anymore; I recognize books by their colour, and all my Terry Pratchett paperbacks have vaguely white splotchy spines by now.

If you look at our fantasy shelves, it's easy to see when I got my first job. That was 1994, when I bought my first Terry Pratchett hardcover. Since then, I've bought all his books in hardcover when they were released.

I fondly remember the Terry Pratchett and Discworld Usenet newsgroups, back when Usenet was fun. alt.books.pratchett, alt.fan.pratchett. The annotated FAQ. L-Space. Pterry.

Deciding that, well, sure, I couldn't wait for the paperback, and would get the hardback, no matter what. Seeing the books' spines go all skewed with re-reading.

Were all his books awesome? No, of course not. Though I guess nobody will agree with me which ones were less awesome. And I sometimes got fed up with his particular brand of moralizing, even.

But, in my mind, Terry Pratchett falls in the same slot as Wodehouse and Diana Wynne Jones. Wodehouse had about thirty years more of productive life, and Wodehouse's sense of language was, honestly, better. But Terry Pratchett's work showed much more versatility, though there, Diana Wynne Jones surely was the greater master. And there are books, like Feet of Clay, that I read, re-read and will keep re-reading.

An author of a body of work that will last a long time.

Cat Splash Fever

Blender 2.74 is nearly out (in fact, you can test out Release Candidate 1 already!) and as with previous releases, there was a contest held with the community for the splash image that appears when Blender first launches. The theme for this release? Cats!

The rules were simple (as posted in a thread on blenderartists.org):

  • All cat renders will be fine, but preferred are the hairy fluffy Cycles rendered ones.
  • One cat, many cats, cartoon cats, crazy cats, angry cats, happy cats. All is fine.
  • Has to be a single F12 render+composite (no post process in other editors)
  • The selected artist should be willing to share the .blend with textures with everyone under a CC-BY-SA or CC-BY.
  • Deadline Saturday March 7.

And we got a winner! This excellent image by Manu Järvinen (maxon) is what you’ll see in Blender 2.74’s splash:

maxon

You can download a .zip with the splash and .blend (public domain!), or watch the making-of video.

But wait! That’s not all. There were many, many fantastic entries that were submitted. It would be a shame not to share them all. Check out this gallery of the top 10 runner-up submissions (in no particular order):

Agent_AL, Jyri Unt (cgstrive), Davide Maimone (dawidh), kubo, ^^NOva, Robert J. Tiess (RobertT), Stan.1, StevenvdVeen, Derek Barker (LordOdin - Theory Animation), Julian Perez (julperado)

March 11, 2015

Film Emulation in RawTherapee

This is old news but I just realized that I hadn't really addressed it before.

The previous work I did on film emulation with G'MIC in GIMP (here and here) is now also available in RawTherapee directly! You'll want to visit this page on the RawTherapee wiki to see how it works, and to download the film emulation collection to use.


This is handy for those who work purely in RawTherapee, or who don't want to jump into GIMP just to do some color toning. It's a pretty big collection of emulations, so hopefully you'll be able to find something that you like. Here's the list of what I think is in the package (there may be more there now):

  • Fuji 160C, 400H, 800Z
  • Ilford HP5
  • Kodak Portra 160, 400, 800
  • Kodak TMAX 3200
  • Kodak Tri-X 400

  • Fuji Neopan 1600
  • Fuji Superia 100, 400, 800, 1600
  • Ilford Delta 3200
  • Kodak Portra 160 NC, 160 VC, 400 NC, 400 UC, 400 VC

  • Polaroid PX-70
  • Polaroid PX100UV
  • Polaroid PX-680
  • Polaroid Time Zero (Expired)

  • Fuji FP-100c
  • Fuji FP-3000b
  • Polaroid 665
  • Polaroid 669
  • Polaroid 690

  • Agfa Precisa 100
  • Fuji Astia 100F
  • Fuji FP 100C
  • Fuji Provia 100F
  • Fuji Provia 400F
  • Fuji Provia 400X
  • Fuji Sensia 100
  • Fuji Superia 200 XPRO
  • Fuji Velvia 50
  • Generic Fuji Astia 100
  • Generic Fuji Provia 100
  • Generic Fuji Velvia 100
  • Generic Kodachrome 64
  • Generic Kodak Ektachrome 100 VS
  • Kodak E-100 GX Ektachrome 100
  • Kodak Ektachrome 100 VS
  • Kodak Elite Chrome 200
  • Kodak Elite Chrome 400
  • Kodak Elite ExtraColor 100
  • Kodak Kodachrome 200
  • Kodak Kodachrome 25
  • Kodak Kodachrome 64
  • Lomography X-Pro Slide 200
  • Polaroid 669
  • Polaroid 690
  • Polaroid Polachrome

  • Agfa Ultra Color 100
  • Agfa Vista 200
  • Fuji Superia 200
  • Fuji Superia HG 1600
  • Fuji Superia Reala 100
  • Fuji Superia X-Tra 800
  • Kodak Elite 100 XPRO
  • Kodak Elite Color 200
  • Kodak Elite Color 400
  • Kodak Portra 160 NC
  • Kodak Portra 160 VC
  • Lomography Redscale 100

  • Agfa APX 100
  • Agfa APX 25
  • Fuji Neopan 1600
  • Fuji Neopan Acros 100
  • Ilford Delta 100
  • Ilford Delta 3200
  • Ilford Delta 400
  • Ilford FP4 Plus 125
  • Ilford HP5 Plus 400
  • Ilford HPS 800
  • Ilford Pan F Plus 50
  • Ilford XP2
  • Kodak BW 400 CN
  • Kodak HIE (HS Infra)
  • Kodak T-Max 100
  • Kodak T-Max 3200
  • Kodak T-Max 400
  • Kodak Tri-X 400
  • Polaroid 664
  • Polaroid 667
  • Polaroid 672
  • Rollei IR 400
  • Rollei Ortho 25
  • Rollei Retro 100 Tonal
  • Rollei Retro 80s

Have fun with these, and don't forget to show off your results if you get a chance! It's always neat to see what folks do with these! :)

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, or make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t it’s not going to affect me writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.

Stellarium in SOCIS 2015

Are you a student looking for an exciting summer job? Get paid this summer to work on Stellarium!

We were selected as a mentoring organization for the ESA Summer of Code in Space 2015: a program funding European students to work on astronomy-related open source projects. Please review our ideas page and submit your application on http://sophia.estec.esa.int/socis/

March 10, 2015

Fedora Design Team Update

Fedora Design Team Logo

One of the things the Fedora Design Team decided to do following the Design Team Fedora Activity Day(s) we had back in January was to meet more regularly. We’ve started fortnightly meetings; we just had our second one.

During the FAD, we figured out a basic process for handling incoming design team tickets and Chris Roberts and Paul Frields wrote the SQL we needed to generate ticket reports for us to be able to triage and monitor our tickets more efficiently. From our whiteboard:

1

Anyhow, with those ticket reports in place and some new policies (if a ticket is older than 4 weeks with no response from the reporter, we’ll close it; if a ticket hasn’t had any updates in 2 weeks and the designer who took the ticket is unresponsive, we open it up for others) we went through a massive ticket cleanout during the FAD. We’ve been maintaining that cleanliness at our fortnightly meetings: we have only 16 open tickets now!

If you were to join one of our meetings, you'd note that we spend a lot of time triaging tickets together and getting updates on ticket progress; we also have an open floor for announcements and for designers to get critique on things they are working on.

Here’s a report from our latest meeting. I don’t know if I’ll have time to do this style of summary after every meeting, but I’ll try to do them after particularly interesting or full meetings. When I don’t post one of these posts, I will post the meetbot links to the design-team mailing list, so that is the best place to follow along.

Fedora Design Team Meeting 10 March 2015

Completed Tickets

FUDCon APAC 2015 Call for Proposals Poster

Shatadru designed a poster for FUDCon APAC in ticket 353; we closed the ticket since the CFP was closed.

353-3-1-compressed

LiveUSB Creator Icons

Gnokii took on a new ticket to design some icons for the new LiveUSB creator UI.

FUDCon Pune Logo design

logo date

Suchakra and Yogi together created the logo for FUDCon Pune, and we closed the ticket as the work was all done and accepted.

Standee Banner Design for Events

banner-czech2

Gnokii gave us a print-ready CMYK tiff for this banner design ticket; we updated it with a link to the file and asked for feedback from the reporter (siddesh.)

Fedora Magazine favicon

Ryan Lerch created a favicon for Fedora Magazine, so we closed the ticket seeing as it was done. :)

Tickets In Progress

Tickets Open For You to Take!

One of the things we require to join the design team is that you take and complete a ticket. We opened up 3 tickets for folks to be able to take – this could be you! Let me know if you want to work on any of these!

Discussion

Fedora 22 Supplemental Wallpapers Submission Window Closing Soon!

Gnokii pointed out that we really need more submissions for Fedora 22 supplemental wallpapers; the deadline is March 19. If you have some nice photography you’d like to submit or have a friend who has openly licensed photography you think would be a good match for Fedora, please submit it! All of the details are on gnokii’s blog post, and you can submit them directly in Nauncier, our wallpaper submission & voting app.

1/4 Page Ad for LinuxFest Northwest

One of our newest members, mleonova, put together some mockups for an ad for Fedora to go in the LinuxFest Northwest program. We gave her some critiques on her work and she is going to work on a final draft now.

New look for Fedora Magazine

Screenshot from 2015-03-10 14:38:11

Ryan Lerch put together a new design for Fedora Magazine on a test server and shared it with us for feedback; overall the feedback was overwhelmingly positive and we only had a couple of suggestions/ideas to add.

Ask.fedoraproject.org Redesign

Suchakra, Banas, and Sadin worked on a redesign of ask.fedoraproject.org during the Design Team FAD for ticket 199 and Suchakra showed us some things he’d been working on for that ticket. So far the work looks great, and it’s now listed as a possible summer project for a Fedora intern in GSoc.

See you next time?

Our meetings are every 2 weeks; we send reminders to the design-team mailing list and you can also find out if there is a meeting by checking out the design team category on FedoCal.

March 08, 2015

Portable Float Map with 16-bit Half

Recently we saw some lively discussions on the OpenEXR mailing list about support for Half within the TIFF image format. That made me aware of the corresponding oyHALF code paths inside Oyranos. For easy testing, Oyranos uses the KISS format PPM, which has a three-line ASCII header followed by uncompressed pixel data. I wanted to create some RGB images containing 16-bit floating point half channels, but that PFM format variant is not yet defined. So here comes an RFC.

A portable float map (PFM) starts with the first-line identifier “Pf” or “PF” and contains 32-bit IEEE floating point data. The 16-bit IEEE/Nvidia/OpenEXR floating point variant starts with a “Ph” or “PH” magic on the first line, analogous to PFM: “Ph” stands for grayscale with one sample, while the “PH” identifier is used for RGB with three samples.

That’s it. Oyranos supports the format in git, and maybe it will be in the next 0.9.6 release.
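To make the proposal concrete, here is a minimal sketch of a writer for the proposed “PH” variant in Python. This is an illustration only, not Oyranos code; the function name is invented, and the three-line header, the negative-scale little-endian convention and the bottom-to-top row order are carried over from classic PFM:

```python
import numpy as np

def write_pfm_half(path, image):
    """Write an RGB float image as the proposed 16-bit half "PH" variant.
    Grayscale would use the "Ph" magic with one sample instead.
    Header, as in classic PFM: magic line, "width height" line, and a
    scale line whose sign encodes endianness (negative = little-endian)."""
    h, w, c = image.shape
    assert c == 3, '"PH" is the three-sample RGB variant'
    # classic PFM stores scanlines bottom to top, so flip before writing
    data = np.flipud(image).astype(np.float16)
    with open(path, "wb") as f:
        f.write(b"PH\n")
        f.write(f"{w} {h}\n".encode("ascii"))
        f.write(b"-1.0\n")  # negative scale = little-endian samples
        data.tofile(f)
```

Reading it back is the mirror image: parse the three ASCII lines, then interpret the rest of the file as `np.float16` samples.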

GIMP: Turn black to another color with Screen mode

[20x20 icon, magnified 8 times] I needed to turn some small black-on-white icons to blue-on-white. Simple task, right? Except, not really. If there are intermediate colors that are not pure white or pure black -- which you can see if you magnify the image a lot, like this 800% view of a 20x20 icon -- it gets trickier.

[Bucket fill doesn't work for this] You can't use anything like Color to Alpha or Bucket Fill, because all those grey antialiased pixels will stay grey, as you see in the image at left.

And the Hue-Saturation dialog, so handy for changing the hue of a sky, a car or a dress, does nothing at all -- because changing hue has no effect when saturation is zero, as for black, grey or white. So what can you do?

I fiddled with several options, but the best way I've found is the Screen layer mode. It works like this:

[Make a new layer] In the Layers dialog, click the New Layer button and accept the defaults. You'll get a new, empty layer.

[Set the foreground color] Set the foreground color to your chosen color.

[Set the foreground color] Drag the foreground color into the image, or do Edit->Fill with FG Color.

Now it looks like your whole image is the new color. But don't panic!

[Use screen mode] Use the menu at the top of the Layers dialog to change the top layer's mode to Screen.

Layer modes specify how to combine two layers. (For a lot more information, see my book, Beginning GIMP). Multiply mode, for example, multiplies each pixel in the two layers, which makes light colors a lot more intense while not changing dark colors very much. Screen mode is sort of the opposite of Multiply mode: GIMP inverts each of the layers, multiplies them together, then inverts them again. All those white pixels in the image, when inverted, are black (a value of zero), so multiplying them doesn't change anything. They'll still be white when they're inverted back. But black pixels, in Screen mode, take on the color of the other layer -- exactly what I needed here.
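The arithmetic behind Screen mode is simple enough to check by hand. With channel values normalized to [0, 1], a quick sketch:

```python
def screen(base, top):
    """GIMP's Screen mode: invert both layers, multiply them,
    then invert the result. Channel values are in [0, 1]."""
    return 1.0 - (1.0 - base) * (1.0 - top)

blue = (0.2, 0.4, 1.0)  # the fill color on the top layer

# White pixels are unchanged -- screen(1, t) is always 1:
white = tuple(screen(c, t) for c, t in zip((1.0, 1.0, 1.0), blue))

# Black pixels take on the top layer's color -- screen(0, t) equals t:
black = tuple(screen(c, t) for c, t in zip((0.0, 0.0, 0.0), blue))
```

Grey antialiased pixels land in between, which is exactly why the icon's smooth edges survive the recoloring.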

Intensify the effect with contrast

[Mars sketch, colorized orange] One place I use this Screen mode trick is with pencil sketches. For example, I've made a lot of sketches of Mars over the years, like this sketch of Lacus Solis, the "Eye of Mars". But it's always a little frustrating: Mars is all shades of reddish orange and brown, not grey like a graphite pencil.

Adding an orange layer in Screen mode helps, but it has another problem: it washes out the image. What I need is to intensify the image underneath: increase the contrast, make the lights lighter and the darks darker.

[Colorized Mars sketch, enhanced  with brightness/contrast] Fortunately, all you need to do is bump up the contrast of the sketch layer -- and you can do that while keeping the orange Screen layer in place.

Just click on the sketch layer in the Layers dialog, then run Colors->Brightness/Contrast...

This sketch needed the brightness reduced a lot, plus a little more contrast, but every image will be different. Experiment!

March 07, 2015

Color Reconstruction

If you overexpose a photo with your digital camera, you are in trouble. That’s what most photography-related textbooks tell you – and it’s true. So you had better pay close attention to your camera’s metering while shooting. However, what do you do when the “bad thing” has happened and you got this one non-repeatable shot, which is so absolutely brilliant, but unfortunately has some ugly signs of overexposure?

In this blog article I’d like to summarize how darktable can help you to repair overexposed images as much as possible. I’ll cover modules which have been part of darktable for a long time but also touch the brand new module “color reconstruction”.

Why are overexposed highlights a problem?

The sensor cells of a digital camera translate the amount of light falling onto them into a digital reading. They can do so up to a certain sensor-specific level, called the clipping value. If more light falls onto the sensor, it does not lead to any higher reading; the clipping value is the maximum. Think of a sensor cell as a water bucket: you can fill the bucket with liquid until it’s full, but you cannot fill in more than its maximum volume.

For a digital camera to sense the color of light three color channels are required: red, green and blue. A camera sensor achieves color sensitivity by organizing sensor cells carrying color filters in a certain pattern, most frequently a Bayer pattern.

colorreconstruction_bayer_matrix

Combining this fact with the phenomenon of overexposure we can differentiate three cases:

  1. All three color channels have valid readings below the clipping value

  2. At least one color channel is clipped and at least one color channel has a valid reading

  3. All three color channels are clipped

Case (1) does not need to concern us in this context: all is good and we get all tonal and color information of the affected pixels.

Case (3) is the worst situation: neither tonal nor color information is available from the pixels in question. The best we can say about these pixels is that they must represent really bright highlights at or above the clipping value of the camera.

In case (2) we do not have correct color information, as that would require valid readings from all three color channels. As it’s often the green channel that clips first, pixels affected by this case of overexposure typically show a strong magenta color cast if we do not take further action. The good news: at least one of the channels stayed below the clipping value, so we may use it to restore the tonal information of the affected pixels, alas, without color.
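The three cases can be expressed as a tiny classifier. This is an illustrative sketch only, not darktable code; the function name and the normalized clipping value are assumptions:

```python
def clipping_case(r, g, b, clip=1.0):
    """Classify an RGB triple of raw readings into the three
    overexposure cases described above. `clip` is the sensor's
    clipping value (assumed normalized to 1.0 here)."""
    clipped = sum(v >= clip for v in (r, g, b))
    if clipped == 0:
        return 1  # valid tonal and color information
    if clipped == 3:
        return 3  # neither tonal nor color information left
    return 2      # luminance recoverable, color is not
```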

Dealing with overexposure in darktable

darktable has a modular structure, so more than one module is typically involved when working on overexposed images. This is different from other applications, where all the functionality may be part of a single general exposure correction panel. It is part of darktable’s philosophy not to hide from the user the order in which modifications are made to an image.

Just to manage expectations: a heavily overexposed image, or one with a fully blown-out sky, is beyond repair. Only if at least some information is left in the highlights, and if highlights make up only a limited part of the image, is there a realistic chance of a convincing result.

Correcting overall image exposure

Logically, one of the basic modifications to consider for an overexposed image is an exposure correction. A negative exposure correction in the “exposure” module is frequently indispensable to bring the brightest highlights into a more reasonable tonal range.

colorreconstruction_scr_1

Additionally, you should take into account that the curve defined in the “base curve” module has a strong effect on highlights as well. You may try out the different alternatives offered in the module’s presets to find the one that best fits your expectations. A base curve with a more gradual increase that slowly reaches the top right corner (right example) is often better suited for images with overexposed areas than one that reaches the top at a moderately high input luminance level (left example).

colorreconstruction_scr_2

colorreconstruction_scr_3

Bringing back detail into highlights

The “highlight reconstruction” module comes early in the pixel pipeline, acting on raw data. This is the central module that deals with the different cases of overexposure described above. By default the module uses the “clip highlights” method: it makes sure that pixels with all or only some of their RGB channels clipped (cases 2 and 3) are converted to neutral white highlights instead of showing some kind of color cast. This is the minimum you want to do with highlights; for that reason this method is activated by default for all raw input images.

colorreconstruction_scr_4

As an alternative, the “highlight reconstruction” module offers the method “reconstruct in LCh”. This method can effectively deal with case (2) as described above: the luminance of partly clipped pixels can be reconstructed, so the pixels get back their tonal information, but the result is a colorless neutral gray.

A third method offered by the “highlight reconstruction” module is called “reconstruct color”.

At first the idea of reconstructing color in highlights may sound surprising. As you know from what has been said above, overexposed highlights always lack color information (cases 2 and 3) and may even miss any luminance information as well (case 3). How can we then expect to reconstruct colors in these cases?

Now, the method that is used here is called “inpainting”. The algorithm assumes that an overexposed area is surrounded by non-overexposed pixels with the same color that the overexposed area had originally. The algorithm extrapolates those valid colors into the clipped highlights. This works remarkably well for homogeneous overexposed areas like skin tones.

Often it works perfectly, but sometimes it may struggle to fill all the highlights successfully. In some cases it may produce moiré-like patterns as an artifact, especially if the overexposed area is overlaid by sharp structures. Since you will be able to identify limitations and potential problems immediately, this method is always worth a try.

Bringing colors into highlights

The existing limitations of the “highlight reconstruction” module when it comes to colors have led to the development of a new module called “color reconstruction”. This module is currently part of the master development branch and will be included in darktable’s next feature release.

As we have discussed above there is no way to know the “true” color of a clipped highlight, we can only make an estimate.

The basic idea of the module is as follows: pixels with a luminance value above a user-selectable threshold are assumed to have invalid colors, while all pixels whose luminance value is below the threshold are assumed to have valid colors. The module then replaces invalid colors with valid ones based on proximity in the image’s x and y coordinates and on the luminance scale.

Let us assume we have an area of overexposed highlights, e.g. a reflection on a glossy surface. The reflection has no color information and is displayed as pure white if the “highlight reconstruction” module is active. If this overexposed area is very close to or surrounded by non-overexposed pixels the new module transfers the color of the non-overexposed area to the uncolored highlights. The luminance values of the highlight pixels remain unchanged.
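In spirit (though not in implementation: darktable uses a bilateral grid for speed, as described further below), the replacement step can be sketched with a naive nearest-neighbor search. All names here are hypothetical, not darktable's:

```python
import numpy as np

def reconstruct_color(lum, chroma, threshold):
    """Naive sketch of the idea behind "color reconstruction":
    pixels whose luminance exceeds `threshold` are assumed to have
    invalid colors and receive the chroma of the nearest pixel below
    the threshold; their luminance is left untouched. Assumes at
    least one valid pixel exists."""
    invalid = lum > threshold
    valid_ys, valid_xs = np.nonzero(~invalid)
    out = chroma.copy()
    for y, x in zip(*np.nonzero(invalid)):
        # squared distance to every valid pixel; take the closest one
        d2 = (valid_ys - y) ** 2 + (valid_xs - x) ** 2
        j = np.argmin(d2)
        out[y, x] = chroma[valid_ys[j], valid_xs[j]]
    return out
```

The real module also weights candidates by luminance distance instead of taking a single nearest neighbor, which is what the “range blur” parameter controls.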

Example 1

The following image is a typical case.

colorreconstruction_ex1_1

The fountain statue has a glossy, gold-plated surface. Even with proper metering there is virtually no chance of photographing this object on a sunny day without overexposed highlights; there is always a reflection of the sun somewhere on the statue, unless I had gone for an exactly back-lit composition (which would have had its own problems). In this case we see overexposed highlights on the left shoulder and arm and partly on the head of the figure – distracting, as the highlights are pure white and present a strong contrast to the warm colors of the statue.

With the “color reconstruction” module activated I only needed to adjust the “luma threshold” to get the desired result.

colorreconstruction_scr_6

The highlights are converted into a gold-yellow cast which nicely blends with the surrounding color of the statue.

colorreconstruction_ex1_2

The “luma threshold” parameter is key to the effect. When you decrease it, you tell darktable to regard an ever growing portion of the pixels as having invalid colors that need replacing, while at the same time the number of pixels regarded as having valid colors decreases. darktable only replaces an invalid color if a “good” fit can be found, meaning a source color is available within a certain distance of the target in terms of image coordinates and luminance. Therefore, when you shift the slider too far to the left, at some point the results get worse again because too few valid colors are available. The slider typically shows a marked “sweet spot” where results are best; it depends on the specifics of your image, and you need to find it by trial and error.

The “color reconstruction” module uses a so-called “bilateral grid” for fast color look-up (for further reading see [1]). Two parameters, “spatial blur” and “range blur”, control the details of the bilateral grid. With a low setting of “spatial blur”, darktable will only consider valid colors found geometrically close to the pixels that need replacement. With higher settings, colors get averaged over an increasingly broad area of the image, which delivers replacement colors that are more generic and less defined. This may or may not improve the visual quality of your image; you need to find out by trial and error. The same is true for “range blur”, which acts on the luminance axis of the bilateral grid: it controls how strongly pixels with luminance values different from the target pixel contribute to the color replacement.

Example 2

Here is a further example (photo supplied by Roman Lebedev).

colorreconstruction_ex2_1

The image shows an evening scene with a sky typical of the time shortly after sunset. As a starting point, a basic set of corrections has already been applied: “highlight reconstruction” with the “reconstruct in LCh” method. Had we used the “clip highlights” method, the small cloud behind the flag post would have been lost. In addition we applied a negative exposure compensation of -1.5 EV in the “exposure” module, used the “lens correction” module mainly to fix vignetting, and used the “contrast brightness saturation” module to boost contrast and saturation.

Obviously the sky is overexposed and lacks a good rendition of colors, visible in the arch-like area with wrong colors. With the “color reconstruction” module and some tweaking of the parameters I got the following result, with a much more credible evening sky:

colorreconstruction_ex2_2

These are the settings I used:

colorreconstruction_scr_7

If you let darktable zoom into the image, you will immediately see that reconstructed colors change with every zoom step. This is an unwanted side effect of the way darktable's pixel pipeline deals with zoomed-in images: as only the visible part of the image is processed, for speed reasons, the “color reconstruction” module “sees” different surroundings depending on the zoom level, which leads to different colors in the visible area. It is therefore recommended to adjust the “color reconstruction” parameters while viewing the full image in the darkroom. We'll try to fix this behavior in future versions of the module [ see below for an update ].

Example 3

As a final example, let's look at this photo of the colorful window of a Spanish cathedral. Although the image is not heavily overexposed in the first place, the rose window clearly lacks color saturation; especially the centers of the lighter glass tiles look washed out, which is mostly due to a too aggressive base curve. As an exercise, let's see how to fix this with “color reconstruction”.

colorreconstruction_ex3_1

This time I needed to make sure that the highlights do not get colored in some homogeneous orange-brownish hue, which is what we would get by averaging all the various colors of the window tiles. Instead we need to take care that each tile retains its individual color. Therefore, replacement colors need to be looked for in close geometrical proximity to the highlights, which requires a low setting of the “spatial blur” parameter. Here are the details:

colorreconstruction_scr_8

And here is the resulting image with some additional adjustment in the “shadows and highlights” module. The mood of the scene, which has been dominated by the rich and intensive primary colors, is nicely reconstructed.

colorreconstruction_ex3_2

One final word on authenticity. It should be obvious by now that the “color reconstruction” module only makes an estimate of the colors that have been lost in the highlights. By no means can these colors be regarded as “authoritative”. You should be aware that “color reconstruction” is merely an interpretation rather than a faithful reproduction of reality. So if you strive for documentary photography, you should not rely on this technique but rather go for a correct exposure in the first place. :)

Update

The behavior of this module on zoomed-in image views has been improved recently. In most cases you should now get a rendition of colors that is independent of the zoom level. There are a few known exceptions:

  • If you have highlight areas which are adjacent to high-contrast edges, you may observe a slight magenta shift when zooming in.
  • If you combine this module with the “reconstruct color” method of the “highlight reconstruction” module, highlights may be rendered colorless or in a wrong color when zooming in.

These artifacts only influence image display; the final output remains unaffected. Still, we recommend fine-tuning the parameters of this module while viewing the full image.

[1] Chen J., Paris S., and Durand F. 2007. Real-time Edge-Aware Image Processing with the Bilateral Grid. In Proceedings of the ACM SIGGRAPH conference. (http://groups.csail.mit.edu/graphics/bilagrid/bilagrid_web.pdf)

Blender 2.74 Test Build

The next Blender release is coming soon! Here are a couple of useful links.

 

CC “Open Business Models”

The Creative Commons launched a new initiative to support “Open Business Models”.

creativecommons.org/weblog/entry/45022

Here’s the reply I posted there, still waiting to be approved (probably a timezone issue :)

The Netherlands-based Blender Institute has been doing business with CC-BY animation films and training since 2007. We’re very well known for pioneering making a living selling CC media while using free/open source software exclusively. Based on my experience I have to make a couple of remarks though.

For me a definition of ‘open business model’ implies that a business is transparent and accessible for clients and customers – open about how costs work, how internal processes work, including sharing the revenue figures etc. This is even a new trend now. It’s the counter movement to answer to the financial crisis – and one of the positive outcomes of the “occupy” movement.

Calling “making a living by selling your work under CC” an open business model just confuses people and is potentially misleading. People who do business with free/open source software don’t call their work “open business” either. I even know corporations (Autodesk) that share training under CC-ND-NC, a license no sane person would call an “open business model”; nor would I consider Autodesk to be interested in sharing and openness at all.

Let me state it more strongly – doing business with CC should not be branded as such a special thing. CC is here to stay and is one of the valuable choices artists can make (and should be legally allowed to make!) when doing business. But it’s not a religion, and it’s not exclusive. Leave the choice of what to do to the artists themselves. Sometimes CC works great, sometimes not.

What I have always liked so much about CC is that they found an elegant way to name something that relates to essential user freedom, sharing and openness. All three aspects are relevant together.

-Ton-

For fun:

The Creative Commons official sharing site: photoshop files, clumsy graphics and they love Autodesk!
https://layervault.com/creative-commons/carousel

 

March 06, 2015

MyPaint wiki to close down in approx. 2 weeks

The MyPaint wiki will be closed down shortly: our hosting provider will be closing down the server it runs on.

That’s not necessarily a bad thing: it encourages us to migrate the content somewhere a little more central, and to make the hard-nosed decisions about what to keep and what to drop that we’ve been putting off forever (there has been so. much. spam!)

So, we’ll be migrating at least the user manual and the brushpacks page to our home on Github so that they can still be maintained. If you think that anything else should be retained, please go to

http://wiki.mypaint.info/

and have a dig around. If you see an area which should be copied, please link to it on our tracking issue for this migration,

https://github.com/mypaint/mypaint/issues/242

Thank you.

Special thanks to Techmight for hosting our site and our DNS for many years! And thanks to all previous contributors to the wiki too. We will try to retain your content, or give you all fair warning to move it elsewhere.

March 04, 2015

Getting Around in GIMP - Luminosity Masks Revisited


Brorfelde landscape by Stig Nygaard (cb)
After adding an aggressive curve along with a mid-tone luminosity mask.

I had previously written about adapting Tony Kuyper’s Luminosity Masks for GIMP. I won’t re-hash all of the details and theory here (just head back over to that post and brush up on them there); rather, I’d like to revisit them using channels – specifically, to have another look at using the mid-tones mask to give a little pop to images.

The rest of my GIMP tutorials can be found here:
Getting Around in GIMP
Original tutorial on Luminosity Masks:
Getting Around in GIMP - Luminosity Masks
Luminosity Masking in darktable:
PIXLS.US - Luminosity Masking in darktable





Let’s Build Some Luminosity Masks!

The way I had approached building the luminosity masks previously was to create them as a function of layer blending modes. In this revisit, I’d like to build them from selection sets in the Channels tab of GIMP.

For the Impatient:
I’ve also written a Script-Fu that automates the creation of these channels mimicking the steps below.

Download from: Google Drive

Download from: GIMP Registry (registry.gimp.org)

Once installed, you’ll find it under:
Filters → Generic → Luminosity Masks (patdavid)
[Update]
Yet another reason to love open-source - Saul Goode over at this post on GimpChat updated my script to run faster and cleaner.
You can get a copy of his version at the same Registry link above.
(Saul’s a bit of a Script-Fu guru, so it’s always worth seeing what he’s up to!)


We’ll start off in a similar way as we did previously.

Duplicate your base image

Either through the menus, or by Right-Clicking on the layer in the Layer Dialog:
Layer → Duplicate Layer
Pat David GIMP Luminosity Mask Tutorial Duplicate Layer

Desaturate the Duplicated Layer

Now desaturate the duplicated layer. I use Luminosity to desaturate:
Colors → Desaturate…

Pat David GIMP Luminosity Mask Tutorial Desaturate Layer

This desaturated copy of your color image represents the “Lights” channel. What we want to do is to create a new channel based on this layer.

Create a New Channel “Lights”

The easiest way to do this is to go to your Channels Dialog.

If you don’t see it, you can open it by going to:
Windows → Dockable Dialogs → Channels

Pat David GIMP Luminosity Mask Tutorial Channels Dialog
The Channels dialog

On the top half of this window you’ll see an entry for each channel in your image (Red, Green, Blue, and Alpha). On the bottom is a list of any channels you have previously defined.

To create a new channel that will become your “Lights” channel, drag any one of the RGB channels down to the lower window (it doesn’t matter which - they all have the same data due to the desaturation operation).

Now rename this channel to something meaningful (like “L”, for instance!) by double-clicking on its name (in my case it’s called “Blue Channel Copy”) and entering a new one.

This now gives us our “Lights” channel, L :

Pat David GIMP Luminosity Mask Tutorial L Channel

Now that we have the “Lights” channel created, we can use it to create its inverse, the “Darks” channel...

Create a New Channel “Darks”

To create the “Darks” channel, it helps to realize that it should be the inverse of the “Lights” channel. We can get this selection through a few simple operations.

We are going to basically select the entire image, then subtract the “Lights” channel from it. What is left should be our new “Darks” channel.

Select the Entire Image

First, have the entire image selected:
Select → All

Remember, you should be seeing the “marching ants” around your selection - in this case the entire image.

Subtract the “Lights” Channel

With the entire image selected, now we just have to subtract the “Lights” channel. In the Channels dialog, just Right-Click on the “Lights” channel, and choose “Subtract from Selection”:

Pat David GIMP Luminosity Mask Tutorial L Channel Subtract

You’ll now see a new selection on your image. This selection represents the inverse of the “Lights” channel...

Create a New “Darks” Channel from the Selection

Now we just need to save the current selection to a new channel (which we’ll call... Darks!). To save the current selection to a channel, we can just use:
Select → Save to Channel

This will create a new channel in the Channel dialog (probably named “Selection Mask copy”). To give it a better name, just Double-Click on the name to rename it. Let’s choose something exciting, like “D”!

More Darker!

At this point, you’ll have a “Lights” and a “Darks” channel. If you wanted to create some channels that target darker and darker regions of the image, you can subtract the “Lights” channel again (this time from the current selection, “Darks”, as opposed to the entire image).

Once you’ve subtracted the “Lights” channel again, don’t forget to save the selection to a new channel (and name it appropriately - I like to name subsequent masks things like, “DD”, in this case - if I subtracted again, I’d call the next one “DDD” and so on…).

I’ll usually make 3 levels of “Darks” channels, D, DD, and DDD:

Pat David GIMP Luminosity Mask Tutorial Darks Channels
Three levels of Dark masks created.

Here’s what the final three darks channels look like:

Pat David GIMP Luminosity Mask Tutorial All Darks Channels
The D, DD, and DDD channels
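If it helps to see the arithmetic, the channel subtraction above boils down to a per-pixel subtraction clamped at zero. Here is a small NumPy sketch of that math (my own illustration of the idea, not GIMP’s actual code; a 1-D gradient stands in for an image):

```python
import numpy as np

def subtract_channel(selection, channel):
    # GIMP's "Subtract from Selection": per-pixel subtraction, clamped at 0.
    return np.clip(selection.astype(np.int16) - channel, 0, 255).astype(np.uint8)

# L is the desaturated (luminosity) image, values 0-255.
L = np.arange(0, 256, dtype=np.uint8)   # gradient stand-in for an image

full = np.full_like(L, 255)       # "Select All" = every pixel fully selected
D    = subtract_channel(full, L)  # Darks: everything minus Lights
DD   = subtract_channel(D, L)     # darker still: subtract Lights again
DDD  = subtract_channel(DD, L)    # and again
```

Running the same idea in reverse (starting from L and subtracting D) produces the LL and LLL channels.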

Lighter Lights

At this point we have one “Lights” channel, and three “Darks” channels. Now we can go ahead and create two more “Lights” channels, to target lighter and lighter tones.

The process is identical to creating the darker channels, just in reverse.

Lights Channel to Selection

To get started, activate the “Lights” channel as a selection:

Pat David GIMP Luminosity Mask Tutorial L Channel Activate

With the “Lights” channel as a selection, all we have to do now is subtract the “Darks” channel from it, then save that selection as a new channel (which will become our “LL” channel), and so on…

Pat David GIMP Luminosity Mask Tutorial Subtract D Channel
Subtracting the D channel from the L selection

To get an even lighter channel, you can subtract D one more time from the selection so far as well.

Here is what the three channels look like, from L up to LLL:

Pat David GIMP Luminosity Mask Tutorial All Lights Channels
The L, LL, and LLL channels

Mid Tones Channels

By this point, we’ve got 6 new channels now, three each for light and dark tones:

Pat David GIMP Luminosity Mask Tutorial L+D Channels

Now we can generate our mid-tone channels from these.

The concept of generating the mid-tones is relatively simple – we’re just going to intersect the dark and light channels, and what’s left are the midtones.

Intersecting Channels for Midtones

To get started, first select the “L” channel, and set it to the current selection (just like above). Right-Click → Channel to Selection.

Then, Right-Click on the “D” channel, and choose “Intersect with Selection”.

You likely won’t see any selection active on your image, but it’s there, I promise. Now as before, just save the selection to a channel:
Select → Save to Channel

Give it a neat name. Sayyy, “M”? :)

You can repeat for each of the other levels, creating an MM and MMM if you’d like.

Now remember, the mid-tones channels are intended to isolate mid values as a mask, so they can look a little strange at first glance. Here’s what the basic mid-tones mask looks like:

Pat David GIMP Luminosity Mask Tutorial Mid Channel
Basic Mid-tones channel

Remember, black tones in this mask represent full transparency to the layer below, while white represents full opacity of the associated layer.
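The “Intersect with Selection” operation keeps, per pixel, only the coverage common to both selections – the per-pixel minimum. Sketching it the same way as before (again just the math, not GIMP internals):

```python
import numpy as np

L = np.arange(0, 256, dtype=np.uint8)            # Lights channel (gradient stand-in)
D = (255 - L.astype(np.int16)).astype(np.uint8)  # Darks = Select All minus Lights

# "Intersect with Selection" keeps the per-pixel minimum of the two coverages.
M = np.minimum(L, D)
```

Note that M never exceeds 127 (50% selected): a mid-gray pixel is at most half-selected, which is exactly why the mid-tones mask looks dark overall and why adjustments through it come in at reduced strength.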


Using the Masks

The basic idea behind creating these channels is that you can now mask particular tonal ranges in your images, and the mask will be self-feathering (due to how we created them). So we can now isolate specific tones in the image for manipulation.

Previously, I had shown how this could be used to do some simple split-toning of an image. In that case I worked on a B&W image, and tinted it. Here I’ll do the same with our image we’ve been working on so far...

Split Toning

Using the image I’ve been working through so far, we have the base layer to start with:

Pat David GIMP Luminosity Mask Tutorial Split Tone Base

Create Duplicates

We are going to want two duplicates of this base layer. One to tone the lighter values, and another to tone the darker ones. We’ll start by considering the dark tones first. Duplicate the base layer:
Layer → Duplicate Layer

Then rename the copy something descriptive. In my example, I’ll call this layer “Dark” (original, I know):

Pat David GIMP Luminosity Mask Tutorial Split Tone Darks

Add a Mask

Now we can add a layer mask to this layer. You can either Right-Click the layer, and choose “Add Layer Mask”, or you can go through the menus:
Layer → Mask → Add Layer Mask

You’ll then be presented with options about how to initialize the mask. You’ll want to Initialize Layer Mask to: “Channel”, then choose one of your luminosity masks from the drop-down. In my case, I’ll use the DD mask we previously made:

Pat David GIMP Luminosity Mask Tutorial Add Layer Mask Split Tone

Adjust the Layer

Pat David GIMP Luminosity Mask Tutorial Split Tone Activate DD Mask
Now you’ll have a Dark layer with a DD mask that will restrict any modification you do to this layer to only apply to the darker tones.

Make sure you select the layer, and not its mask, by clicking on it (you’ll see a white outline around the active layer). Otherwise any operations you do may accidentally get applied to the mask instead of the layer.


At this point, we now want to modify the colors of this layer in some way. There are literally endless ways to approach this, bounded only by your creativity and imagination. For this example, we are going to tone the image with a cool teal/blue color (just like before), which combined with the DD layer mask, will restrict it to modifying only the darker tones.

So I’ll use the Colorize option to tone the entire layer a new color:
Colors → Colorize

To get a Teal-ish color, I’ll pull the Hue slider over to about 200:

Pat David GIMP Luminosity Mask Tutorial Split Tone Colorize

Now, pay attention to what’s happening on your image canvas at this point. Drag the Hue slider around and see how it changes the colors in your image. Especially note that the color shifts will be restricted to the darker tones thanks to the DD mask being used!

To illustrate, mouseover the different hue values in the caption of the image below to change the Hue, and see how it affects the image with the DD mask active:


Mouseover to change Hue to: 0 - 90 - 180 - 270

So after I choose a new Hue of 200 for my layer, I should be seeing this:

Pat David GIMP Luminosity Mask Tutorial Split Tone Dark Tinted

Repeat for Light Tones

Now just repeat the above steps, but this time for the light tones. So duplicate the base layer again, and add a layer mask, but this time try using the LL channel as a mask.

For the lighter tones, I chose a Hue of around 25 instead (more orange-ish than blue):

Pat David GIMP Luminosity Mask Tutorial Split Tone Light Tinted

In the end, here are the results that I achieved:

Pat David GIMP Luminosity Mask Tutorial Split Tone Result
After a quick split-tone (mouseover to compare to original)

The real power here comes from experimentation. I encourage you to try using a different mask to restrict the changes to different areas (try the LLL for instance). You can also adjust the opacity of the layers to modify how strongly the color tones affect those areas. Play!
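Under the hood, a layer mask simply controls a per-pixel linear blend between the toned layer and whatever is below it. A minimal sketch of that blend (my own illustration with made-up flat “layers”, not GIMP code):

```python
import numpy as np

def composite(base, toned, mask):
    # Per-pixel blend: mask 0 -> base shows through, 255 -> toned layer fully opaque.
    a = mask.astype(np.float32) / 255.0
    return (base * (1.0 - a) + toned * a).astype(np.uint8)

base  = np.full(256, 100, dtype=np.uint8)   # flat gray stand-in for the base layer
toned = np.full(256, 200, dtype=np.uint8)   # the colorized duplicate
mask  = np.arange(0, 256, dtype=np.uint8)   # e.g. a DD luminosity mask

out = composite(base, toned, mask)
```

Lowering the layer’s opacity just scales `a` down further, which is why opacity is such a handy second control on top of the mask.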

Mid-Tones Masks

The mid-tone masks were very interesting to me. In Tony’s original article, he mentioned how much he loved using them to provide a nice boost to contrast and saturation in the image. Well, he’s right. It certainly does do that! (He also feels that it’s similar to shooting the image on Velvia).

Pat David GIMP Luminosity Mask Tutorial Mid Tones Mask
Let’s have a look.

I’ve deleted the layers from my split-toning exercise above, and am back to just the base image layer again.

To try out the mid-tones mask, we only need to duplicate the base layer, and apply a layer mask to it.

This time I’ll choose the basic mid-tones mask M.


What’s interesting about using this mask is that you can apply pretty aggressive curve modifications to the layer and still keep the image from blowing out. We are only targeting the mid-tones.

To illustrate, I’m going to apply a fairly aggressive compression to the curves by using Adjust Color Curves:
Colors → Curves

When I say aggressive, here is what I’m referring to:

Pat David GIMP Luminosity Mask Tutorial Aggresive Curve Mid Tone Mask

Here is the effect it has on the image when using the M mid-tones mask:


Aggressive curve with Mid-Tone layer mask
(mouseover to compare to original)

As you can see, there is an increase in contrast across the image, as well as a nice little boost to saturation. You don’t need to worry about blowing out highlights or losing shadow detail, because the mask will not allow you to modify those values.

More Samples of the Mid-Tone Mask in Use

Pat David GIMP Luminosity Mask Tutorial
Pat David GIMP Luminosity Mask Tutorial
The lede image again, with another aggressive curve applied to a mid-tone masked layer
(mouseover to compare to original)


Pat David GIMP Luminosity Mask Tutorial
Red Tailed Black Cockatoo at f/4 by Debi Dalio on Flickr (used with permission)
(mouseover to compare to original)


Pat David GIMP Luminosity Mask Tutorial
Landscape Ballon by Lennart Tange on Flickr (cb)
(mouseover to compare to original)


Pat David GIMP Luminosity Mask Tutorial
Landscapes by Tom Hannigan on Flickr (cb)
(mouseover to compare to original)



Mixing Films

This is something that I’ve found myself doing quite often. It’s a very powerful method for combining color toning that you may like from different film emulations. Consider what we just walked through.

These masks allow you to target modifications of layers to specific tones of an image. So if you like the saturation of, say, Fuji Velvia in the shadows, but like the upper tones to look similar to Polaroid Polachrome, then these luminosity masks are just what you’re looking for!

Just a little food for thought and experimentation... :)

Stay tuned later in the week where I’ll investigate this idea in a little more depth.

In Conclusion

This is just another tool in our mental toolbox of image manipulation, but it’s a very powerful tool indeed. When considering your images, you can now look at them as a function of luminosity - with a neat and powerful way to isolate and target specific tones for modification.

As always, I encourage you to experiment and play. I’m willing to bet this method finds its way into at least a few people’s workflows in some fashion.

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, make a donation, or even link to/share my content, I would be absolutely grateful (and tickled pink). If you don’t, it’s not going to affect my writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.


Monthly Drawing Challenge

(by jmf)

The new monthly drawing challenge on the Krita forums is now really taking off! The first run in February was mainly a test run. After that a lot of people said they were interested, so I decided to keep going.

stranger_by_tharindad-d8j6s4d

Last month’s winner: “Stranger” by tharindad.

The idea came when I was browsing the Krita forums in search of a drawing challenge and the only thing that came up was on Facebook. Not everybody has or wants Facebook, so we’ll have this challenge on the forum.

It’s not about competition! It’s mostly a way to get rid of the “blank canvas syndrome”, to try something new and get new inspiration. If you want to draw but aren’t inspired, or want to step out of your comfort zone, this is for you!

This month’s topic is “Unusual Dinner”.

To enter, post your picture in this thread. The deadline is March 24, 2015. The winner is decided by a vote on the forums and gets the privilege of choosing next month’s topic.

released darktable 1.6.3

We are happy to announce that darktable 1.6.3 has been released.

The release notes and relevant downloads can be found attached to this git tag:
https://github.com/darktable-org/darktable/releases/tag/release-1.6.3
Please only use our provided packages ("darktable-1.6.3.*" tar.xz and dmg), not the auto-created tarballs from GitHub ("Source code", zip and tar.gz). The latter are just git snapshots and will not work! Here's the direct link to the tar.xz:
https://github.com/darktable-org/darktable/releases/download/release-1.6.3/darktable-1.6.3.tar.xz
and the DMG:
https://github.com/darktable-org/darktable/releases/download/release-1.6.3/darktable-1.6.3.dmg

This is another point release in the stable 1.6.x series.

sha256sum darktable-1.6.3.tar.xz
 852bb3d307b0e2b579d14cc162b347ba1193f7bc9809bb283f0485dfd22ff28d
sha256sum darktable-1.6.3.dmg
 be568ad20bfb75aed703e2e4d0287b27464dfed1e70ef2c17418de7cc631510f

Changes

  • Make camera import window transient
  • Allow soft limits on radius
  • Fix soft boundaries for black in exposure
  • Change order of the profile/intent combo in export dialog
  • Support read/write of chromaticities in EXR
  • Allow to default to :memory: db in config
  • Add mime handler for non-raw image file formats
  • Improved lens model name detection for Sony SAL lenses

Bug fixes

  • Fix buffer overrun in SSE clipping loop for highlight handling
  • Prevent exporting when an invalid export/storage is selected
  • Hopefully last fix for aspect ratios in crop and rotate (#9942)
  • No tooltip when dragging in monochrome (#10319)

RAW support

  • Panasonic LX100 (missing non-standard aspect ratio modes)
  • Panasonic TZ60
  • Panasonic FZ1000
  • KODAK EASYSHARE Z1015 IS
  • Canon 1DX (missing sRAW modes)
  • Canon A630 and SX110IS (CHDK RAW)

white balance presets

  • Panasonic FZ1000
  • Panasonic TZ60
  • Panasonic LX100

standard matrix

  • Canon Rebel T3 (non-european 1100D)

enhanced matrix

  • nikon d750

noise profiles

  • Canon EOS 1DX

March 03, 2015

Updating Firmware on Linux

A few weeks ago Christian asked me to help with the firmware update task that a couple of people at Red Hat have been working on for the last few months. Peter has got fwupdate to the point where we can “upload” sample .cap files onto the flash chips, but this isn’t particularly safe, or easy to do. What we want for Fedora and RHEL is to be able to either install a .rpm file for a BIOS update (if the firmware is re-distributable), or to get notified about it in GNOME Software where it can be downloaded from the upstream vendor. If we’re showing it in a UI, we also want some well written update descriptions, telling the user about what’s fixed in the firmware update and why they should update. Above all else, we want to be able to update firmware safely offline without causing any damage to the system.

So, let’s back up a bit. What do we actually need? A binary firmware blob isn’t so useful on its own, so Microsoft have decided we should all package it up in a .cab file (a bit like a .zip file) along with a .inf file that describes the update in more detail. Parsing .inf files isn’t so hard on Linux, as we can fix them up to be valid and open them as a standard key file. The .inf file gives us the hardware ID the firmware applies to, as well as a vendor and a short (!) update description. So far the update descriptions have been less than awesome (“update firmware”), so we also need some way of fixing them up to be suitable to show the user.
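To sketch what “open them as a standard key file” can look like in practice, here is a toy example parsing a minimal, made-up .inf fragment with Python’s configparser (the section and key names here are illustrative, not a real firmware .inf; note that value interpolation must be disabled because .inf files use bare % markers):

```python
import configparser

# A minimal, made-up .inf fragment (real firmware .inf files carry much more).
inf_text = """\
[Version]
Class=Firmware
ClassGuid={f2e7dd72-6468-4e36-b6f1-6488f42c1b52}
Provider=%Provider%

[Strings]
Provider="Hypothetical Vendor Ltd"
"""

# .inf is close enough to INI that a key-file parser can read it; ';' comments
# are accepted by default, and interpolation=None tolerates the % indirections.
cfg = configparser.ConfigParser(interpolation=None)
cfg.read_string(inf_text)

provider_ref = cfg["Version"]["Provider"]  # "%Provider%" points into [Strings]
provider = cfg["Strings"][provider_ref.strip("%")].strip('"')
```

Real-world .inf files need a little more fixing up (encoding, duplicate keys) before a strict parser accepts them, which is what “fix them up to be valid” refers to.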

AppStream, again, to the rescue. I’m going to ask nice upstreams like Intel and the weird guy who does ColorHug to start shipping a MetaInfo file alongside the .inf file in the firmware .cab file. This means we can have fully localized update descriptions, along with all the usual things you’d expect from an update, e.g. the upstream vendor, the licensing information, etc. Of course, a lot of vendors are not going to care about good descriptions, and won’t be interested in shipping another 16k file in the update just for Linux users. For that, we can actually “inject” a replacement MetaInfo file when we curate the AppStream metadata. This allows us to download all the .cab files we care about, but are not allowed to redistribute, run appstream-builder on them, then package up just the XML metadata, which can be consumed by pretty much any distribution. Ideally vendors would do this long term, but you need git master versions of basically everything to generate the file, so it’s somewhat of a big ask at the moment.

So, we’ve now got a big blob of metadata we can read in GNOME Software, and show to Fedora users. We can show it in the updates panel, just like a normal update, we just can’t do anything with it. We also don’t know if the firmware update we know about is valid for the hardware we’re running on. These are both solved by the new fwupd project that I’ve been hacking on for a few days. This exit-on-idle daemon allows normal users to apply firmware to devices (with appropriate PolicyKit checks, typically the root password) in a safe way. We check the .cab file is valid, is for the right hardware, and then apply the update to be flashed on next reboot.

A lot of people don’t have UEFI hardware that’s capable of using capsule firmware updates, so I’ve also added a ColorHug provider, which predictably also lets you update the firmware on your ColorHug device. It’s a lot lower risk testing all this super-new code with a £20 EEPROM device than your nice shiny expensive prototype hardware from Intel.

At the moment there’s not a lot to test; we still need to connect up the low-level fwupdate code with the fwupd provider, but that will be a lot easier when we get all the prerequisites into Fedora. What’s left to do now is to write a plugin for GNOME Software so it can communicate with fwupd, and to write the required hooks so we can get the firmware upgrade status as a notification for boot+2. I’m also happy to accept patches for other hardware that supports updates, although the internal API isn’t 100% stable yet. This is probably quite interesting for phones and tablets, so I’d be really happy if this gets used on other non-Fedora, or non-desktop, use cases.

Comments welcome. No screenshots yet, but coming soon.

Tue 2015/Mar/03

  • An inlaid GNOME logo, part 3

    Esta parte en español

    (Parts 1, 2)

    The next step is to make a little rice glue for the template. Thoroughly overcook a little rice, with too much water (I think I used something like 1:8 rice:water), and put it in the blender until it is a soft, even goop.

    Rice glue in the blender

    Spread the glue on the wood surfaces. I used a spatula; one can also use a brush.

    Spreading the glue

    I glued the shield onto the dark wood, and the GNOME foot onto the light wood. I put the toes closer to the sole of the foot so that all the pieces would fit. When they are cut, I'll spread the toes again.

    Shield, glued Foot, glued

March 02, 2015

Luminosity Masking in darktable (Ian Hex)

Photographer Ian Hex was kind enough to be a guest writer over on PIXLS.US with a fantastic tutorial on creating and using Luminosity Masks in the raw processing software darktable.


You can find the new tutorial over on PIXLS.US:



I had previously looked at a couple of amazing shots from Ian over on the PIXLS.US blog, when I introduced him as a guest writer. I thought it might be nice to re-post some of his work here...


The Reverence of St. Peter by Ian Hex (cc-by-sa-nc)


Fire of Whitby Abbey by Ian Hex (cc-by-sa-nc)


Wonder of Variety by Ian Hex (cc-by-sa-nc)

Ian has many more amazing images from Britain of breathtaking beauty over on his site, Lightsweep. Be sure to check them out!

PIXLS.US Update

I have also written an update on the status of the site over on the PIXLS.US blog. TL;DR: It's still coming along! :)

Interview with Igor Leskov

apes800

Would you like to tell us something about yourself?

I like cinema and I like to draw motion pictures. I don’t much like drawing static pictures, but I can. I studied traditional painting for eight years at art school, and since then I have continued on my own for 36 years. I like learning to paint even more than painting itself.

Do you paint professionally or as a hobby artist?

I work in the small animation studio as a 2D-3D artist. I draw storyboards and backgrounds in 2D. I make the full 3D film work: modelling, texturing, lighting, rigging and animation. I have very little time to paint personal works, unfortunately.

When and how did you end up trying digital painting for the first time?

It was terrific! I was scanning the black ink drawings on the paper and colouring them in Photoshop in 1996. It was my black-and-white comics for the regional newspaper.

What is it that makes you choose digital over traditional painting?

The choice is simple. No need to buy oil paints and squirrel brushes; it suits my laziness. Laziness is the engine of technological progress.

evolve800

How did you first find out about open source communities? What is your opinion about them?

When I found out about Krita I wrote to Boudewijn Rempt and he answered! It was cool!

Have you worked for any FOSS project or contributed in some way?

I have no such experience yet, and I am not able to at present, but I would like to in the future.

How did you find out about Krita?

My favourite artists are Titian and Moebius (Jean Giraud). When the developers dedicated an edition of Krita to Moebius, I got interested and took a look at Krita.

What was your first impression?

I liked it.

What do you love about Krita?

Krita is my favourite 2D package and I would like to do something for its development.

What do you think needs improvement in Krita? Also, anything that you really hate?

There is nothing to hate in Krita. I hate myself for not being able to convince Boud to do what I want and not what he wants :)

volcano800

In your opinion, what sets Krita apart from the other tools that you use?

I like to write to Mr. Rempt and to Mr. Kazakov and I like how they answer.

If you had to pick one favourite of all your work done in Krita so far, what would it be?

I don’t have any favourites yet.

What is it that you like about it? What brushes did you use in it?

I like Smooth Zoom Tool, Wrap Around Mode and Mirror View. I use the standard brushes: Ink_brush_25, Airbrush_linear, Block_tilt, Basic_circle, Bristles_hairy, Basic_mix_soft. I make animated texture brushes and rotate them during painting manually.

Anything else you’d like to share?

Unfortunately I cannot share all my professional work with the public, because it is owned by the customers of the small Irkutsk animation studio Forsight. I can share a little on some sites: http://megayustas.deviantart.com, http://ascomix.narod.ru and http://igor-leskov.livejournal.com.

February 27, 2015

Reddit IAmA today, 2-4pm EST

DeathKillCycleAMA
Proof? Here’s your proof.

Head on over here between 2 and 4pm EST today, Friday February 27.

UPDATE: Reddit is not allowing me to post. On my own IAMA. Granted, this IAMA was set up by someone else, who said he had duly submitted my handle (Nina_Paley, created a week ago) to the mods. But it didn’t work. I was on the calendar, but I can’t respond to questions. I am not happy about this but mods aren’t responding, so I give up. You can AMA on Twitter instead.

UPDATE 2: after half an hour the problem was corrected, and I went back and answered questions.

 

 


Updated Windows Builds

 

We prepared new Windows builds today. They contain the following updates:

  • Improved brush presets. The existing presets were not optimized for Windows systems, so Scott Petrovic took a look at all of them and optimized where possible
  • The brush editor now opens in the right place even if the screen is too small
  • You can now disable the on-canvas message that pops up when zooming, rotating etc. This might solve some performance issues for some people
  • We increased the amount of memory available for G’Mic even more. This may mean that on big, beefy Windows machines you can now use G’Mic, but on other, less beefy machines filters might still crash. G’Mic is an awesome tool, but keep in mind that it’s a research project and that its Windows support is experimental.

We’ll move the new builds to the official download location as soon as possible, but in the meantime, here are the downloads:

 

February 26, 2015

Another fake flash story

I recently purchased a 64GB mini SD card to slot into my laptop and/or tablet, keeping media separate from my home directory, which is pretty full of kernel sources.

This Samsung card looked fast enough, and at 25€ include shipping, seemed good enough value.


Hmm, no mention of the SD card size?

The packaging looked rather bare, with no mention of the card's size. I opened up the packaging and looked over the card.

Made in Taiwan?

What made it weirder is that it says "made in Taiwan", rather than "Made in Korea" or "Made in China/PRC". Samsung apparently makes some cards in Taiwan, I've learnt, but I didn't know that before getting suspicious.

After modifying gnome-multiwriter's fake flash checker, I tested the card, and sure enough, it's an 8GB card, with its firmware modified to show up as 67GB (67GB!). The device (identified through the serial number) is apparently well-known in swindler realms.

Buyer beware, do not buy from "carte sd" on Amazon.fr, and always check for fake flash memory using F3 or h2testw, until udisks gets support for this.

Amazon were prompt in reimbursing me, but the Comité national anti-contrefaçon and Samsung were completely uninterested in pursuing this further.

In short:

  • Test the storage hardware you receive
  • Don't buy hardware from Damien Racaud from Chaumont, the person behind the "carte sd" seller account
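Tools like F3 and h2testw both work on the same fill-and-verify principle: write known, position-dependent data across the whole device, then read it back and compare. The sketch below illustrates that principle against an ordinary file; the function names are mine, not from either tool, and the real tools add capacity probing and speed measurement on top:

```python
import hashlib

BLOCK = 1024 * 1024  # 1 MiB test blocks

def block_data(i):
    # Deterministic, position-dependent pattern: a misdirected or dropped
    # write lands in the wrong block and no longer matches its index.
    return hashlib.sha256(str(i).encode()).digest() * (BLOCK // 32)

def fill(path, n_blocks):
    """Write n_blocks of known data (what f3write does across a device)."""
    with open(path, "wb") as f:
        for i in range(n_blocks):
            f.write(block_data(i))

def verify(path, n_blocks):
    """Read the data back (as f3read does); return indices of corrupt blocks."""
    with open(path, "rb") as f:
        return [i for i in range(n_blocks) if f.read(BLOCK) != block_data(i)]
```

On a fake card, blocks written past the real capacity fail verification even though the write itself appeared to succeed.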

The Second Plague (Frogs) – rough

Music is from “Frogs” by DJ Zeph featuring Azeem, from the album “Sunset Scavenger.” It’s from 2004, making it the most contemporary song in the film. I almost used Taylor Swift’s 2014 “Bad Blood” for Blood, but I ended up deciding Josh White’s 1933 “Blood Red River Blues” was simply a better song. It wasn’t due to fear of lawsuits; I decided long ago not to allow copyright to determine my artistic choices. If you don’t know my stance on Intellectual Disobedience, you can learn about it here:
youtube.com/watch?v=dfGWQnj6RNA
and here:
blog.ninapaley.com/2013/12/07/make-art-not-law-2/

I’m curious what frogs DJ Zeph and Azeem were originally referring to. Here, of course, the frogs are these:

“3 And the river shall bring forth frogs abundantly, which shall go up and come into thine house, and into thy bedchamber, and upon thy bed, and into the house of thy servants, and upon thy people, and into thine ovens, and into thy kneadingtroughs:

“4 And the frogs shall come up both on thee, and upon thy people, and upon all thy servants.” -Exodus 8, King James Version


February 25, 2015

Krita 2.9

Congratulations to Krita on releasing version 2.9 and a very positive write-up for Krita by Bruce Byfield writing for Linux Pro Magazine.

I'm amused by his comment comparing Krita to "the cockpit of a fighter jet" and although there are some things I'd like to see done differently* I think Krita is remarkably clear for a program as complex as it is and does a good job of balancing depth and breadth. (* As just one example: I'm never going to use "File, Mail..." so it's just there waiting for me to hit it accidentally, but as far as I know I cannot disable or hide it.)

Unfortunately Byfield writes about Krita "versus" other software. I do not accept that premise. Different software does different things, users can mix and match (and if they can't that is a different and bigger problem). Krita is another weapon in the arsenal. Enjoy Krita 2.9.

Mairi Trois


Readers who've been here for a little while might recognize my friend Mairi, who has modeled for me before. This time I had a brief opportunity for her to sit for me again for a few shots before she jet-setted her way over to Italy for a while.

I was specifically looking to produce the lede image you see above, Mairi Troisième. In particular, I was chasing some chiaroscuro portrait lighting that I'd had in mind for a while, and I was quite happy with the final result!

Of course, I also had a large new light modifier, so bigger shots were fun to play with as well:


Mairi Color (in Black)
ƒ/6.3 1/200s ISO200


Mairi B&W
ƒ/8.0 1/200s ISO200

Those two shots were done using a big Photek Softlighter II [amazon] that I treated myself to late last year. (I believe the speedlight was firing @3/4 power for these shots).

It wasn't all serious, there were some funny moments as well...


My Eyes Are Up Here
ƒ/7.1 1/200s ISO200

Of course, I like to work up close to a subject personally. I think it gives a nicer sense of intimacy to an image.


More Mairi Experiments
ƒ/11.0 1/200s ISO200


Mairi Trois
ƒ/8.0 1/200s ISO200

Culminating at one of my favorites from the shoot, this nice chiaroscuro image up close:


Mairi (Closer)
ƒ/10.0 1/200s ISO200

It's always a pleasure to get a chance to shoot with Mairi. She's a natural in front of the camera, and has these huge expressive eyes that are always a draw.

Later this week, an update on PIXLS.US!

Krita 2.9.0: the Kickstarter Release

The culmination of over eight months of work, Krita 2.9 is the biggest Krita release yet! It’s so big, we couldn’t do the release announcement in just one page; we’ve had to split it up into separate pages! Last year, 2014, was a huge year for Krita. We published Krita on Steam, we showed off Krita at SIGGRAPH, we got Krita reviewed in ImagineFX, gaining the Artist’s Choice accolade — and we got our first Kickstarter campaign more than funded, too! This meant that more work went into Krita than ever before.

And it shows: here are the results. Dozens of new features, improved functions, fixed bugs, spit and polish all over the place. The initial port to OS X. Some of the new features took more than two years to implement, and others are a direct result of your support!

Eleven of the twelve Kickstarter-funded features are in, and we’ll be doing the last one for the next 2.9 release — 2.9.1. Krita can now open more than one image in a window, and show an image in more than one view or window. Great perspective drawing assistants. Creative painting in HDR mode is now not just possible, it’s fun. Lots and lots of workflow improvements. So, be prepared… Wolthera and Scott have prepared a big, big overview of all the changes for you:

Overview of New Features and Release Notes
https://krita.org/krita-2-9-the-kickstarter-release/

Without all your support, whether through direct donations, Steam sales, the Kickstarter campaign, the work of testers and bug reporters and documentation, tutorial, translation and code contributors, Krita would never have gotten this far! So, thanks and hugs all around!

Enjoy!

 


Kiki in Spring Time, by Tyson Tan

February 24, 2015

Announcing issue 2.3 of Libre Graphics magazine


We’re very pleased to announce the long-awaited release of Libre Graphics magazine issue 2.3. This issue is guest-edited by Manuel Schmalstieg and addresses a theme we’ve been wanting to tackle for some time: type design. From specimen design to international fonts, constraint-based type to foundry building, this issue shows off the many faces of libre type design.

With the usual cast of columnists, stunning showcases and intriguing features, issue 2.3, The Type Issue, gives an entrée into what’s now and next in F/LOSS fonts.

The Type Issue is the third issue in volume two of Libre Graphics magazine. Libre Graphics magazine is a print publication devoted to showcasing and promoting work created with Free/Libre Open Source Software. We accept work about or including artistic practices which integrate Free, Libre and Open software, standards, culture, methods and licenses.

The theory of everything

The life of Stephen Hawking, based on his ex-wife’s biography. The movie is attractive and romantic, yet not exaggerated or overdramatized. Instead of focusing on Hawking’s life tragedy or listing his contributions to physics, the movie takes a personal angle. The amazing cinematography, clean script and brilliant performances do the story justice and make the movie impressive.

Tips for developing on a web host that offers only FTP

Generally, when I work on a website, I maintain a local copy of all the files. Ideally, I use version control (git, svn or whatever), but failing that, I use rsync over ssh to keep my files in sync with the web server's files.

But I'm helping with a local nonprofit's website, and the cheap web hosting plan they chose doesn't offer ssh, just ftp.

While I have to question the wisdom of an ISP that insists that its customers use insecure ftp rather than a secure encrypted protocol, that's their problem. My problem is how to keep my files in sync with theirs. And the other folks working on the website aren't developers and are very resistant to the idea of using any version control system, so I have to be careful to check for changed files before modifying anything.

In web searches, I haven't found much written about reasonable workflows on an ftp-only web host. I struggled a lot with scripts calling ncftp or lftp. But then I discovered curlftpfs, which makes things much easier.

I put a line in /etc/fstab like this:

curlftpfs#user:password@example.com/ /servername fuse rw,allow_other,noauto,user 0 0

Then all I have to do is type mount /servername and the ftp connection is made automagically. From then on, I can treat it like a (very slow and somewhat limited) filesystem.

For instance, if I want to rsync, I can

rsync -avn --size-only /servername/subdir/ ~/servername/subdir/
for any particular subdirectory I want to check. A few things to know about this:
  1. I have to use --size-only because timestamps aren't reliable. I'm not sure whether this is a problem with the ftp protocol, or whether this particular ISP's server has problems with its dates. I suspect it's a problem inherent in ftp, because if I ls -l, I see things like this:
    -rw-rw---- 1 root root 7651 Feb 23  2015 guide-geo.php
    -rw-rw---- 1 root root 1801 Feb 14 17:16 guide-header.php
    -rw-rw---- 1 root root 8738 Feb 23  2015 guide-table.php
    
    Note that a file modified a week ago shows a modification time, but files modified today show only a day and year, not a time. I'm not sure what to make of this.
  2. Note the -n flag. I don't automatically rsync from the server to my local directory, because if I have any local changes newer than what's on the server they'd be overwritten. So I check the diffs by hand with tkdiff or meld before copying.
  3. It's important to rsync only the specific directories you're working on. You really don't want to see how long it takes to get the full file tree of a web server recursively over ftp.
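The --size-only decision in point 1 is simple enough to restate: a file counts as changed when the sizes differ, because the timestamps can't be trusted. A hypothetical Python helper (not part of rsync or any tool above) making the same decision for one file pair:

```python
import os

def needs_push(local_path, remote_path):
    """Mirror rsync --size-only: a file needs pushing when the remote copy
    is missing or its size differs. Timestamps are ignored on purpose,
    since ftp-reported modification times are unreliable."""
    if not os.path.exists(remote_path):
        return True
    return os.path.getsize(local_path) != os.path.getsize(remote_path)
```

Note the same blind spot rsync --size-only has: an edit that leaves the size unchanged goes undetected, which is another reason to diff by hand before copying.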

How do you change and update files? It is possible to edit the files on the curlftpfs filesystem directly. But at least with emacs, it's incredibly slow: emacs likes to check file modification dates whenever you change anything, and that requires an ftp round-trip so it could be ten or twenty seconds before anything you type actually makes it into the file, with even longer delays any time you save.

So instead, I edit my local copy, and when I'm ready to push to the server, I cp filename /servername/path/to/filename.

Of course, I have aliases and shell functions to make all of this easier to type, especially the long pathnames: I can't rely on autocompletion like I usually would, because autocompleting a file or directory name on /servername requires an ftp round-trip to ls the remote directory.

Oh, and version control? I use a local git repository. Just because the other people working on the website don't want version control is no reason I can't have a record of my own changes.

None of this is as satisfactory as a nice git or svn repository and a good ssh connection. But it's a lot better than struggling with ftp clients every time you need to test a file.

February 23, 2015

Boyhood

Boyhood is stunning. Just like Linklater’s trilogy, the movie deals with the most sophisticated human emotions through a simple, micro storyline. This time Linklater’s narration follows a boy and his life for 12 years (and it took 12 years in the making). The brilliant making, interesting storytelling, and indirect representation of time make the movie awesome. […]

February 20, 2015

SVG Working Group Meeting Report — Sydney

The SVG Working Group had a four day face-to-face meeting in Sydney this month. The first day was a joint meeting with the CSS Working Group.

I would like to thank the Inkscape board for funding my travel. This was an expensive trip as I was traveling from Paris and Sydney is an expensive city… but I think it was well worth it as the SVG WG (and CSS WG, where appropriate) approved all of my proposals and worked through all of the issues I raised. Unfortunately, due to the high cost of this trip, I have exhausted the budgeted funding from Inkscape for SVG WG travel this year and will probably miss the two other planned meetings, one in Sweden in June and one in Japan in October. We target the Sweden meeting for moving the SVG 2 specification from Working Draft to Candidate Recommendation so it would be especially good to be there. If anyone has ideas for alternative funding, please let me know.

Highlights:

A summary of selected topics, grouped by day, follows:

Joint CSS and SVG Meeting

Minutes

  • SVG sizing in HTML.

    We spent some time discussing how SVG should be sized in HTML. For corner cases, the browsers disagree on how large an SVG should be displayed. There is going to be a lot of work required to get this nailed down.

  • CSS Filter Effects:

    We spent a lot of time going through and resolving the remaining issues in the CSS Filter Effects specification. (This is basically SVG 1.1 filters repackaged for use by HTML with some extra syntax sugar coating.) We then agreed to publish the specification as a Candidate Recommendation.

  • CSS Blending:

    We discussed publishing the CSS Blending specification as a Recommendation, the final step in creating a specification. I raised a point that most of the tests assumed HTML content. It was requested that more SVG-specific tests be created. (Part of the requirement for Recommendation status is that there be a test suite and that two independently developed renderers pass each test in the suite.)

  • SVG in OpenType, Color Palettes:

    The new OpenType specification allows for multi-colored SVG glyphs. It would be nice to set those colors through CSS. We discussed several methods for doing so and decided on one method. It will be added to the CSS Fonts Level 4 specification.

  • Text Rendering:

    The ‘text-rendering‘ property gives renderers a hint on what speed/precision trade-offs should be made. It was pointed out that the layout of text flowed into a box will change as one zooms in and out on a page in Firefox due to font-hinting, font-size rounding, etc. The Google docs people would like to prevent this. It was decided that the ‘geometricPrecision’ value should require that font-metrics and text-measurement be independent of device resolution and zoom level. (Note: this property is defined in SVG but both Firefox and Chrome support it on HTML content.)

  • Text Properties:

    Text in SVG 2 relies heavily on CSS specifications that are in various states of readiness. I asked the CSS/SVG groups what is the policy for referencing these specs. In particular, SVG 2 needs to reference the CSS Shapes Level 2 specification in order to implement text wrapping inside of SVG shapes. The CSS WG agreed to publish CSS Shapes Level 2 as a Working Draft so we can reference it. We also discussed various technical issues in defining how text wraps around excluded areas and in flowing text into more than one shape.

SVG Day 1

Minutes

  • CamelCase Names

    The SVG WG decided some time ago to avoid new CamelCase names like ‘LinearGradient’ which cause problems with integration in HTML (HTML is case insensitive and CamelCase SVG names must be added by hand to HTML parsers). We went through the list of new CamelCase names in SVG 2 and decided which ones could be changed, weighing arguments for consistency against the desire to not introduce new CamelCase names. It was decided that <meshGradient> should be changed to <mesh>. This was mostly motivated by the ability to use a mesh as a standalone entity (and not only as a paint server). Other changes include: <hatchPath> to <hatchpath>, <solidColor> to <solidcolor>, …

  • Requiring <foreignObject> HTML to be rendered.

    There was a proposal to require any HTML content in a <foreignObject> element to be rendered. I pointed out that not all SVG renderers are HTML renderers (Inkscape as an example). It was decided to have separate conformance classes, one requiring HTML content to be rendered and one not.

  • Requiring Style Sheets Support:

    It was decided to require style sheet support. We discussed what kind of style sheets to require. We decided to require basic style sheet support at the CSS 1 or CSS 2.1 level (that part of the discussion was not minuted).

  • Open Issues:

    We spent considerable time going through the specification chapter by chapter looking at open issues that would block publishing the specification as a Candidate Recommendation. This was a long multi-day process.

SVG Day 2

Minutes

Note: Day 2 and Day 3 minutes are merged.

  • Superpaths:

    Superpaths is the name for the ability to reuse path segment data. This is useful, for example, to define the boundary between two shapes just once, reusing the path segment for both shapes. SVG renderers might be able to exploit this information to provide better anti-aliasing between two shapes knowing they share a common border. The SVG WG endorses this proposal but it probably won’t be ready in time for SVG 2. Instead, it will be developed in a separate Path enhancement module.

  • Line-Join: Miter Clipped:

    It was proposed on the SVG mailing list that there be a new behavior for the miter ‘line-join’ value in regards to the ‘miter-limit’ property. At the moment, if a miter produces a line join that extends farther than the ‘miter-limit’ value then the miter type is changed to bevel. This causes abrupt jumps when the angle between the joined lines changes such that the miter length crosses over the ‘miter-limit’ value (see demo). A better solution is to clip the line join at the ‘miter-limit’. This is done by some rendering libraries including the one used on Windows. We decided to create a new value for ‘line-join’ with this behavior.

  • Auto-Path Closing:

    The ‘z’ path command closes paths by drawing a line segment to the first point in the path. This is fine if the path is made up of straight lines but becomes problematic if the path is made up of curves. For example, it can cause rendering problems for markers as there will be an extra line segment between the start and end of the path. If the last point is exactly on top of the first point, one can remove this closing line segment but this isn’t always possible, especially if one is using the relative path commands with rounding errors. A more detailed discussion can be found here. We decided to allow a ‘z’ command to fill in missing point data using the first point in the path. For example in: d=”m 100,125 c 0,-75 100,-75 100,0 c 0,75 -100,75 z” the missing point of the second Bezier curve is filled in by the first point in the path.

  • Text on a Shape:

    An Inkscape developer has been working on putting text on a shape by converting shapes to paths while storing the original shape in the <defs> section. It would be much easier if SVG just allowed text on a shape. I proposed that we include this in SVG 2. This is actually quite easy to specify as we have already defined how shapes are converted to paths (needed by markers on shapes and putting dash patterns on shapes). A couple minor points needed to be decided: Do we allow negative path offsets? (Yes) How do we decide which side of a path the text should be put? (A new attribute) The SVG WG approved adding text on a shape to SVG 2.

  • Marker knockouts, mid-markers, etc:

    A number of new marker features still need some work. To facilitate finishing SVG 2 we decided to move them to a separate specification. There is some hesitation to do so as there is fear that once removed from the main SVG specification they will be forgotten about. This will be a trial of how well separating parts of SVG 2 into separates specifications works. The marker knockout feature, very useful for arrowheads is one feature moved into the new specification. On day 3 we approved publishing the new Markers Level 1 specification as a First Public Working Draft.

  • Text properties:

    With our new reliance on CSS for text layout, just what CSS properties should SVG 2 support? We don’t want to necessarily list them all in the SVG 2 specification as the list could change as CSS adds new properties. We decided that we should support all paragraph level properties (‘text-indent’, ‘text-justification’, etc.). We’ll ask the CSS working group to create a definition for CSS paragraph properties that we can then reference.

  • Text ‘dx’, ‘dy’, and ‘rotate’ attributes:

    SVG 1.1 has the ‘dx’, ‘dy’, and ‘rotate’ attributes that allow individual glyphs to be shifted and rotated. While not difficult to support on auto-wrapped text (they would be applied after CSS text layout), we decided that they weren’t really needed. They can still be used on SVG 1.1 style text (which is still part of SVG 2).

SVG Day 3

Minutes

Note: Day 3 minutes are at end of Day 2 minutes.

  • Stroking Enhancements:

    As part of trying to push SVG 2 quickly, we decided to move some of the stroking enhancements that still need work into a separate specification. This includes better dashing algorithms (such as controlling dash position at intersections) and variable width strokes. We agreed to the publication of SVG Strokes as a First Public Working Draft.

  • Smoothing in Mesh Gradients:

    Coons-Patch mesh gradients have one problem: the color profile at the boundary between patches is not always smooth. This leads to visible artifacts which are enhanced by Mach Banding. I’ve discussed this in more detail here. I proposed to the SVG WG that we include the option of auto-smoothing meshes using monotonic-bicubic interpolation. (There is an experimental implementation in Inkscape trunk which I demonstrated to the group.) The SVG WG accepted my proposal.

  • Motion Path:

    SVG has the ability to animate a graphical object along a path. This ability is desired for HTML. The SVG and CSS working groups have produced a new specification, Motion Path Module Level 1, for this purpose. We agreed to publish the specification as a First Public Working Draft.

February 19, 2015

Finding core dump files

Someone on the SVLUG list posted about a shell script he'd written to find core dumps.

It sounded like a simple task -- just locate core | grep -w core, right? I mean, any sensible packager avoids naming files or directories "core" for just that reason, don't they?

But not so: turns out in the modern world, insane numbers of software projects include directories called "core", including projects that are developed primarily on Linux so you'd think they would avoid it ... even the kernel. On my system, locate core | grep -w core | wc -l returned 13641 filenames.

Okay, so clearly that isn't working. I had to agree with the SVLUG poster that using "file" to find out which files were actual core dumps is now the only reliable way to do it. The output looks like this:

$ file core
core: ELF 32-bit LSB core file Intel 80386, version 1 (SYSV), too many program headers (375)

The poster was using a shell script, but I was fairly sure it could be done in a single shell pipeline. Let's see: you need to run locate to find any files with "core" in the name.

Then you pipe it through grep to make sure the filename is actually core: since locate gives you a full pathname, like /lib/modules/3.14-2-686-pae/kernel/drivers/edac/edac_core.ko or /lib/modules/3.14-2-686-pae/kernel/drivers/memstick/core, you want lines where only the final component is core -- so core has a slash before it and an end-of-line (in grep that's denoted by a dollar sign, $) after it. So grep '/core$' should do it.

Then take the output of that locate | grep and run file on it, and pipe the output of that file command through grep to find the lines that include the phrase 'core file'.

That gives you lines like

/home/akkana/geology/NorCal/pinnaclesGIS/core: ELF 32-bit LSB core file Intel 80386, version 1 (SYSV), too many program headers (523)

But those lines are long and all you really need are the filenames; so pass it through sed to get rid of anything to the right of "core" followed by a colon.

Here's the final command:

file `locate core | grep '/core$'` | grep 'core file' | sed 's/core:.*//'

On my system that gave me 11 files, and they were all really core dumps. I deleted them all.
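The key filtering step in that pipeline, grep '/core$', keeps only paths whose final component is exactly core. For clarity, the same filter expressed in Python (a hypothetical helper, not part of the original post):

```python
import re

def core_candidates(paths):
    """Keep only paths whose last component is exactly 'core',
    matching grep '/core$' in the shell pipeline. Names such as
    edac_core.ko or directories merely containing 'core' are dropped."""
    return [p for p in paths if re.search(r"/core$", p)]
```

These survivors are the candidates still worth handing to file(1) for the real core-dump check.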

February 18, 2015

OpenRaster Python Plugin


Early in 2014, version 0.0.2 of the OpenRaster specification added a requirement that each file should include a full size pre-rendered image (mergedimage.png) so that other programs could more easily view OpenRaster files. [Developers: if your program can open a zip file and show a PNG you could add support for viewing OpenRaster files.*]

The GNU Image Manipulation Program includes a python plugin for OpenRaster support, but it did not yet include mergedimage.png, so I made the changes myself. You do not need to wait for the next release, or for your distribution to eventually package that release; you can benefit from this change immediately. If you are using the GNU Image Manipulation Program version 2.6 you will need to make sure you have support for python plugins included in your version (if you are using Windows, you won't), and if you are using version 2.8 it should already be included.

It was only a small change, but working with Python and not having to wait for code to compile makes it so much easier.

* Although it would probably be best if viewer support was added at the toolkit level, so that many applications could benefit.
[Edit: Updated link]
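To underline how low the bar is for the viewer support suggested above: an OpenRaster file is an ordinary zip archive with the pre-rendered mergedimage.png at its root, so extracting it for display takes only a few lines. A minimal sketch (the function name is mine, not from the GIMP plugin):

```python
import zipfile

def read_merged_image(ora_path):
    """Return the raw PNG bytes of the pre-rendered image that
    OpenRaster spec 0.0.2 requires at the root of the archive."""
    with zipfile.ZipFile(ora_path) as ora:
        return ora.read("mergedimage.png")
```

Those bytes can be handed straight to whatever PNG decoder the application already uses.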

Wed 2015/Feb/18

  • Integer overflow in librsvg

    Another bug that showed up through fuzz-testing in librsvg was due to an overflow during integer multiplication.

    SVG supports using a convolution matrix for its pixel-based filters. Within the feConvolveMatrix element, one can use the order attribute to specify the size of the convolution matrix. This is usually a small value, like 3 or 5. But what did fuzz-testing generate?

    <feConvolveMatrix order="65536">

    That would be an evil, slow convolution matrix in itself, but in librsvg it caused trouble not because of its size, but because C sucks.

    The code had something like this:

    struct _RsvgFilterPrimitiveConvolveMatrix {
        ...
        double *KernelMatrix;
        ...
        gint orderx, ordery;
        ...
    };
    	      

    The values for the convolution matrix are stored in KernelMatrix, which is just a flattened rectangular array of orderx × ordery elements.

    The code tries to be careful in ensuring that the array with the convolution matrix is of the correct size. In the code below, filter->orderx and filter->ordery have both been set to the dimensions of the array, in this case, both 65536:

    guint listlen = 0;
    
    ...
    
    if ((value = rsvg_property_bag_lookup (atts, "kernelMatrix")))
        filter->KernelMatrix = rsvg_css_parse_number_list (value, &listlen);
    
    ...
    
    if ((gint) listlen != filter->orderx * filter->ordery)
        filter->orderx = filter->ordery = 0;
    	    

    Here, the code first parses the kernelMatrix number list and stores its length in listlen. Later, it compares listlen to orderx * ordery to see if the KernelMatrix array has the correct length. Both filter->orderx and filter->ordery are of type int. Later, the code iterates through the values in filter->KernelMatrix when doing the convolution, and doesn't touch anything if orderx or ordery are zero. Effectively, when those values are zero it means that the array is not to be touched at all — maybe because the SVG is invalid, as in this case.

    But in the bug, orderx and ordery are not being sanitized to zero; they remain at 65536, and the KernelMatrix gets accessed incorrectly as a result. Let's see what happens when you multiply 65536 by itself with ints.

    (gdb) p (int) 65536 * (int) 65536
    $1 = 0
    	    

    Well, of course — the result doesn't fit in 32-bit ints. Let's use 64-bit ints instead:

    (gdb) p (long long) 65536 * 65536
    $2 = 4294967296
    	    

    Which is what one expects.

    What is happening with C? We'll go back to the faulty code and get a disassembly (I recompiled this without optimizations so the code is easy):

    $ objdump --disassemble --source .libs/librsvg_2_la-rsvg-filter.o
    ...
        if ((gint) listlen != filter->orderx * filter->ordery)
        4018:       8b 45 cc                mov    -0x34(%rbp),%eax    
        401b:       89 c2                   mov    %eax,%edx           %edx = listlen
        401d:       48 8b 45 d8             mov    -0x28(%rbp),%rax
        4021:       8b 88 a8 00 00 00       mov    0xa8(%rax),%ecx     %ecx = filter->orderx
        4027:       48 8b 45 d8             mov    -0x28(%rbp),%rax
        402b:       8b 80 ac 00 00 00       mov    0xac(%rax),%eax     %eax = filter->ordery
        4031:       0f af c1                imul   %ecx,%eax
        4034:       39 c2                   cmp    %eax,%edx
        4036:       74 22                   je     405a <rsvg_filter_primitive_convolve_matrix_set_atts+0x4c6>
            filter->orderx = filter->ordery = 0;
        4038:       48 8b 45 d8             mov    -0x28(%rbp),%rax
        403c:       c7 80 ac 00 00 00 00    movl   $0x0,0xac(%rax)
        4043:       00 00 00 
        4046:       48 8b 45 d8             mov    -0x28(%rbp),%rax
        404a:       8b 90 ac 00 00 00       mov    0xac(%rax),%edx
        4050:       48 8b 45 d8             mov    -0x28(%rbp),%rax
        4054:       89 90 a8 00 00 00       mov    %edx,0xa8(%rax)
    	    

    The highlighted lines do the multiplication of filter->orderx * filter->ordery and the comparison against listlen. The imul operation overflows and gives us 0 as a result, which is of course wrong.

    Let's look at the overflow in slow motion. We'll set a breakpoint in the offending line, disassemble, and look at each instruction.

    Breakpoint 3, rsvg_filter_primitive_convolve_matrix_set_atts (self=0x69dc50, ctx=0x7b80d0, atts=0x83f980) at rsvg-filter.c:1276
    1276        if ((gint) listlen != filter->orderx * filter->ordery)
    (gdb) set disassemble-next-line 1
    (gdb) stepi
    
    ...
    
    (gdb) stepi
    0x00007ffff7baf055      1276        if ((gint) listlen != filter->orderx * filter->ordery)
       0x00007ffff7baf03c <rsvg_filter_primitive_convolve_matrix_set_atts+1156>:    8b 45 cc        mov    -0x34(%rbp),%eax
       0x00007ffff7baf03f <rsvg_filter_primitive_convolve_matrix_set_atts+1159>:    89 c2   mov    %eax,%edx
       0x00007ffff7baf041 <rsvg_filter_primitive_convolve_matrix_set_atts+1161>:    48 8b 45 d8     mov    -0x28(%rbp),%rax
       0x00007ffff7baf045 <rsvg_filter_primitive_convolve_matrix_set_atts+1165>:    8b 88 a8 00 00 00       mov    0xa8(%rax),%ecx
       0x00007ffff7baf04b <rsvg_filter_primitive_convolve_matrix_set_atts+1171>:    48 8b 45 d8     mov    -0x28(%rbp),%rax
       0x00007ffff7baf04f <rsvg_filter_primitive_convolve_matrix_set_atts+1175>:    8b 80 ac 00 00 00       mov    0xac(%rax),%eax
    => 0x00007ffff7baf055 <rsvg_filter_primitive_convolve_matrix_set_atts+1181>:    0f af c1        imul   %ecx,%eax
       0x00007ffff7baf058 <rsvg_filter_primitive_convolve_matrix_set_atts+1184>:    39 c2   cmp    %eax,%edx
       0x00007ffff7baf05a <rsvg_filter_primitive_convolve_matrix_set_atts+1186>:    74 22   je     0x7ffff7baf07e <rsvg_filter_primitive_convolve_matrix_set_atts+1222>
    (gdb) info registers
    rax            0x10000  65536
    rbx            0x69dc50 6937680
    rcx            0x10000  65536
    rdx            0x0      0
    ...
    eflags         0x206    [ PF IF ]
    	    

    Okay! So, right there, the code is about to do the multiplication. Both eax and ecx, which are 32-bit registers, have 65536 in them — you can see the 64-bit "big" registers that contain them in rax and rcx.

    Type "stepi" and the multiplication gets executed:

    (gdb) stepi
    0x00007ffff7baf058      1276        if ((gint) listlen != filter->orderx * filter->ordery)
       0x00007ffff7baf03c <rsvg_filter_primitive_convolve_matrix_set_atts+1156>:    8b 45 cc        mov    -0x34(%rbp),%eax
       0x00007ffff7baf03f <rsvg_filter_primitive_convolve_matrix_set_atts+1159>:    89 c2   mov    %eax,%edx
       0x00007ffff7baf041 <rsvg_filter_primitive_convolve_matrix_set_atts+1161>:    48 8b 45 d8     mov    -0x28(%rbp),%rax
       0x00007ffff7baf045 <rsvg_filter_primitive_convolve_matrix_set_atts+1165>:    8b 88 a8 00 00 00       mov    0xa8(%rax),%ecx
       0x00007ffff7baf04b <rsvg_filter_primitive_convolve_matrix_set_atts+1171>:    48 8b 45 d8     mov    -0x28(%rbp),%rax
       0x00007ffff7baf04f <rsvg_filter_primitive_convolve_matrix_set_atts+1175>:    8b 80 ac 00 00 00       mov    0xac(%rax),%eax
       0x00007ffff7baf055 <rsvg_filter_primitive_convolve_matrix_set_atts+1181>:    0f af c1        imul   %ecx,%eax
    => 0x00007ffff7baf058 <rsvg_filter_primitive_convolve_matrix_set_atts+1184>:    39 c2   cmp    %eax,%edx
       0x00007ffff7baf05a <rsvg_filter_primitive_convolve_matrix_set_atts+1186>:    74 22   je     0x7ffff7baf07e <rsvg_filter_primitive_convolve_matrix_set_atts+1222>
    (gdb) info registers
    rax            0x0      0
    rbx            0x69dc50 6937680
    rcx            0x10000  65536
    rdx            0x0      0
    eflags         0xa07    [ CF PF IF OF ]
    	    

    Kaboom. The register eax (inside rax) is now 0, which is the (wrong) result of the multiplication. But look at the flags! There is a big fat OF flag, the overflow flag! The processor knows! And it tries to tell us... with a single bit... that the C language doesn't bother to check!

    Handover

    (The solution in the code, at least for now, is simple enough — use gint64 for the actual operations so the values fit. It should probably set a reasonable limit for the size of convolution matrices, too.)
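    The widening fix can be sketched like this. This is an illustration only, not the actual librsvg patch; the function name is made up, and the variables just mirror the ones in the session above:

```c
#include <stdint.h>
#include <assert.h>

/* Illustrative sketch of the fix, not the actual librsvg patch: do the
 * multiplication in 64 bits so orderx * ordery cannot wrap around. */
static int matrix_size_matches(uint32_t listlen, int32_t orderx, int32_t ordery)
{
    int64_t expected = (int64_t) orderx * (int64_t) ordery;
    return (int64_t) listlen == expected;
}
```

    With 32-bit math, 65536 × 65536 wraps to 0, so the original comparison wrongly passed for an empty list; with the 64-bit product, 0 != 4294967296 and the check correctly fails.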

    So, could anything do better?

    Scheme uses exact arithmetic if possible, so (* MAXLONG MAXLONG) doesn't overflow, but gives you a bignum without you doing anything special. Subsequent code may go into the slow case for bignums when it happens to use that value, but at least you won't get garbage.

    I think Python does the same, at least for integer values (Scheme goes further and uses exact arithmetic for all rational numbers, not just integers).

    C# lets you use checked operations, which will throw an exception if something overflows. This is not the default — the default is "everything gets clipped to the operand size", like in C. I'm not sure if this is a mistake or not. The rest of the language has very nice safety properties, and it lets you "go fast" if you know what you are doing. Operations that overflow by default, with opt-in safety, seem contrary to this philosophy. On the other hand, the language will protect you if you try to do something stupid like accessing an array element with a negative index (... that you got from an overflowed operation), so maybe it's not that bad in the end.
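    For what it's worth, C compilers have grown a similar opt-in facility: GCC and Clang provide checked-arithmetic builtins (a compiler extension, not standard C), so a C sketch of the "checked" idea might look like this:

```c
#include <stdbool.h>

/* GCC/Clang extension, not standard C: __builtin_mul_overflow reports
 * overflow instead of silently wrapping, much like C#'s "checked". */
static bool checked_mul_int(int a, int b, int *result)
{
    return !__builtin_mul_overflow(a, b, result);
}
```

    The caller decides what to do on overflow (error out, clamp, retry in a wider type), instead of silently computing with garbage.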

February 17, 2015

Reanimation of MacBook Air

For some months our MacBook Air was broken. Finally a good time to replace it, I thought. On the other hand, the old notebook was still quite useful even six years after purchase. Coding on the road, web surfing, SVG/PDF presentations and so on worked fine on the Core2Duo device from 2008. The first symptoms of breakage were video errors on a DVI-connected WUXGA/HDTV+ sized display. The error looked like unstable frequency handling: the upper scan lines were visually fine, while the lower ones wobbled to the right. A black desktop background with a small window was sometimes a workaround. This notebook type uses an Nvidia 9400M on the logic board. Another, non-portable computer of mine with Nvidia 9300 Go on-board graphics runs without such issues, so I saw no reason to suspect the type of graphics chip. Later on, the notebook stopped working completely, even without an attached external display. It gave the well-known single beep every 5 seconds during startup. On MacBook Pros and Airs this symptom usually means broken RAM.

The RAM is soldered directly onto the logic board, and replacement at Apple appeared prohibitively expensive. As I began looking around to sell the broken hardware to hobbyists, I found an article about these early MacBook Airs. This specific one is a 2.1 rev A, 2.13 GHz. The article mentioned that early devices suffered from lead-free soldering, which is somewhat worse in ductility than leaded solder. As a result, many of these devices developed electrical disconnections in their circuitry over repeated cycles of warming and cooling and the associated thermal expansion and contraction, showing the one-beep symptom on startup without booting. An Apple engineer was unofficially cited as suggesting that putting the logic board in an oven at around 100 °C for a few minutes could be enough to solve the issue. That sounded worth a try to me. Since I love opening up devices to look inside and eventually repair them, taking my time to dismount the logic board myself rather than bringing it to a repair service was fine with me. But be warned: doing so can be difficult for beginners. I placed the board on some wool in the oven at 120 °C, and after 10 minutes (plus some more for reassembly) the laptop started to work again. I am not sure whether the soldering is really fixed now or whether the symptoms will come back; I suspect some memory chips on the board were merely reset and stopped reporting broken RAM. But my device works again and will keep us happy for a while, I hope.

February 16, 2015

Old projects, new images

We make 3D images of old projects for some of our clients, to give their websites a bit of a refresh, so why don't we do it for ourselves? No sir, no more! Here is a bit of a revamp of two oldies but goodies among our own projects, Casa GL and the PACE ONG. ...

KMZ Zorki 4 (Soviet Rangefinder)

The Leica rangefinder

Rangefinder-type cameras predate modern single-lens reflex cameras. People still use them; it's just a different way of shooting. Since they're no longer a mainstream type of camera, most manufacturers stopped making them long ago. Except Leica: Leica still makes digital and film rangefinders, and as you might guess, they come at significant cost. Even old Leica film rangefinders easily cost upwards of €1000. While Leica certainly wasn't the only brand to manufacture rangefinders throughout photographic history, it was (and still is) certainly the most iconic rangefinder brand.

The Zorki rangefinder

Now, the Soviets essentially tried to copy Leica's cameras; the result, the Zorki series of cameras, was produced at KMZ. Many different versions exist, and with nearly 2 million cameras produced across more than 15 years, the Zorki-4 was without a doubt its most popular incarnation. Many consider the Zorki-4 to be the one where the Soviets got it (mostly) right.

That said, the Zorki-4 vaguely looks like a Leica M with its single coupled viewfinder/rangefinder window. In most other ways it's more like a pre-M Leica, with its 39mm LTM lens screw mount. Earlier Zorki-4s have a body finished with vulcanite, which is tough as nails but very difficult to fix or replace if damaged. Later Zorki-4s have a body finished with relatively cheap leatherette, which is much more easily damaged and commonly starts peeling off, but should be relatively easy to make better than new. Most Zorkis come with either a Jupiter-8 50mm f/2.0 lens (a Zeiss Sonnar-inspired design) or an Industar-50 50mm f/3.5 (a Zeiss Tessar-inspired design). I'd highly recommend getting a Zorki-4 with a Jupiter-8 if you can find one.

Buying a Zorki rangefinder with a Jupiter lens

If you're looking to buy a Zorki, there are a few things to be aware of. Zorkis were produced in the Soviet Union during the fifties, sixties and seventies, often favoring quantity over quality, presumably to be able to meet quotas. The same is likely true for most Soviet optics as well. So they are both old and may not have met the highest quality standards to begin with. When buying a Zorki, then, you need to keep in mind that it might need repairs and a CLA (clean, lube, adjust). My particular Zorki had a dim viewfinder because of dirt both inside and out, the shutter speed dial was completely stuck at 1/60th of a second, and the film take-up spool was missing. I sent my Zorki-4 and Jupiter-8 to Oleg Khalyavin for repairs, shutter curtain replacement and a CLA. Oleg was also able to provide me with a replacement film take-up spool or two as well. All in all, having work done on your Zorki will easily set you back about €100, including significant shipping expenses. Keep this in mind before buying. And even if you get your Zorki in a usable state, you'll probably have to have it serviced at some point. You may very well want to consider having it serviced sooner rather than later, giving yourself the benefit of enjoying a newly serviced camera.

Complementary accessories

Zorkis usually come without a lens hood, and the Jupiter-8's glass elements are said to be only single-coated, so a lens hood isn't exactly a luxury. A suitable aftermarket lens hood isn't hard to find, though.

While my Zorki did come with its original clumsy (and in my case stinky) leather carrying case, it doesn't come with a regular camera strap. Matin's Deneb-12LN leather strap can be an affordable but stylish companion for the Zorki. The strap is relatively short, but long enough to wear around your neck or arm. It's also fairly stiff when brand new, but it loosens up after a few days of use. The strap does seem to show signs of wear fairly quickly, though.

To some it might seem as if the Zorki has a hot shoe, but it doesn't: it's actually a cold shoe, merely intended as an accessory mount. Since it's all metal, even a flash connected via PC sync is likely to be permanently shorted. To mount a regular hot-shoe flash you will need a hot shoe adapter, both for isolation and for PC sync connectivity.

Choosing a film stock

So now you have a nice Zorki-4, waiting for film to be loaded into it. As of this writing (2015) there is a smörgåsbord of film available. I like shooting black & white, and I often shoot Ilford XP2 Super 400. Ilford's XP2 is the only B&W film left that's meant to be processed along with color print film in regular C41 chemicals (so it can be processed by a one-hour photo service, if you're lucky enough to still have one of those around). Like most color print film, XP2 has a big exposure latitude, remaining usable between ISO 50 and 800, which is no luxury since the Zorki-4 is not equipped with a built-in light meter. While Ilford recommends shooting it at ISO 400, I'd suggest shooting it as if it were ISO 200 film, giving you two stops of leeway in both the underexposure and overexposure directions.

With regard to color print film, I've only shot Kodak Gold 200 thus far, with pretty decent results. Kodak New Portra 400 quickly comes to mind as another good option. An inexpensive alternative could be Fuji Superia X-TRA 400, which can be found very cheaply as most store-brand 400 speed color print film.

Shooting with a Zorki rangefinder

Once you have a Zorki, there are still some caveats to be aware of. Most importantly, don't change shutter speeds while the shutter isn't cocked (cocking the shutter is done by advancing the film); not heeding this warning may damage the camera's internal mechanisms. Other issues of lesser importance are minding the viewfinder's parallax error (particularly when shooting at short distances) and making sure you load the film straight; I've managed to load film at a slight angle a couple of times already.

As I've mentioned, the Zorki-4 does not have a built-in light meter, which means the camera won't be helping you get the exposure right: you are on your own. You could use a pricey dedicated light meter (or a less pricey smartphone app, which may or may not work well on your particular phone), either of which is fairly cumbersome. XP2's wide exposure latitude, however, makes an educated-guess approach feasible. There's a rule of thumb called Sunny 16 for making educated guesses at exposure outdoors. Sunny 16 states that if you set your shutter speed to the closest reciprocal of your film speed, bright sunny daylight requires an aperture of f/16 to get a decent exposure. Other weather conditions require opening up the aperture according to this table:


Sunny                f/16
Slightly overcast    f/11
Overcast             f/8
Heavy overcast       f/5.6
Open shade           f/4

If you have doubts when classifying shooting conditions, you may want to err on the side of overexposure, as color print film tends to prefer overexposure over underexposure. If you're shooting slide film you should probably avoid using Sunny 16 altogether, as slide film can be very unforgiving if improperly exposed. Additionally, you can manually read a film canister's DX CAS code to see what a film's minimum exposure tolerance is.

A quick example: when shooting XP2 on an overcast day, assuming an alternate base ISO of 200 (as suggested earlier), the shutter speed should be set to 1/250th of a second and the aperture to f/8, giving a fairly large depth of field. If we want to reduce the depth of field, we can trade two stops of aperture for two stops of shutter speed, ending up shooting at 1/1000th of a second at f/4.
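The Sunny 16 bookkeeping above can be condensed into a tiny helper. This is just a sketch of the rule of thumb; the condition indices and the stop tables are my own, not anything from a camera or a standard:

```c
#include <stdlib.h>

/* Sunny 16 sketch. condition: 0 = sunny, 1 = slightly overcast,
 * 2 = overcast, 3 = heavy overcast, 4 = open shade. */
static const double f_stops[]  = { 16.0, 11.0, 8.0, 5.6, 4.0, 2.8, 2.0, 1.4 };
static const int    shutters[] = { 30, 60, 125, 250, 500, 1000, 2000 };

typedef struct { double aperture; int shutter_denom; } exposure;

/* Closest standard shutter reciprocal to the film speed. */
static int base_shutter(int iso)
{
    int best = shutters[0];
    for (unsigned i = 1; i < sizeof shutters / sizeof *shutters; i++)
        if (abs(shutters[i] - iso) < abs(best - iso))
            best = shutters[i];
    return best;
}

/* trade_stops: open the aperture this many extra stops while speeding the
 * shutter up by the same amount, keeping the overall exposure constant. */
static exposure sunny16(int iso, int condition, int trade_stops)
{
    exposure e;
    e.aperture = f_stops[condition + trade_stops];
    e.shutter_denom = base_shutter(iso) << trade_stops;
    return e;
}
```

For the worked example above, sunny16(200, 2, 0) yields f/8 at 1/250, and sunny16(200, 2, 2) yields f/4 at 1/1000.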

Having film processed

After shooting a roll of XP2 (or any roll of color print film) you need to take it to a local photo shop, chemist or supermarket to have it processed, scanned and printed. Usually you'll be able to have your film processed in C41 chemicals, scanned to CD and get a set of small prints for about €15 or so. Keep in mind that most shops, if left to their own devices, will cut your film roll into strips of 4, 5 or 6 negatives, depending on the type of protective sleeves they use. Some shops might not offer scanning services without ordering prints, since scanning may be considered a byproduct of the printmaking process. The resulting JPEG scans are usually about 2 megapixels (1800×1200), or sometimes slightly less (1536×1024).

A particular note when using XP2: since it's processed as if it were color print film, it's usually also scanned as if it were color print film, so the resulting should-be-monochrome scans (and prints, for that matter) can often have a slight color cast. This color cast varies; my local lab usually does a fairly decent job, with scans that carry only a subtle, not too unpleasant cast, but I've heard about nastier, heavier color casts as well. Regardless, keep in mind that you might need to convert the scans to proper monochrome manually, which can easily be done with any photo editing software in a heartbeat. The same goes for rotating the images: aside from the usual 90-degree turns, I occasionally get my images scanned upside down, so they need 180-degree or 270-degree turns, and you'll likely need to do that yourself as well.

Post-processing the scans

Generally speaking, I personally like preprocessing my scanned images with some scripted command-line tools before importing them into an image management program such as Shotwell.

First I remove all useless data from the source JPEG. In particular for black and white film like XP2, I also remove the JPEG's chroma channels, to losslessly remove any color cast (avoiding generational loss):

$ jpegtran -copy none -grayscale -optimize -perfect ORIGINAL.JPG > OUTPUT.JPG

Using the clean image we previously created as a base, we can then add basic EXIF metadata:

$ exiv2 \
   -M"set Exif.Image.Artist John Doe" \
   -M"set Exif.Image.Make KMZ" \
   -M"set Exif.Image.Model Zorki-4" \
   -M"set Exif.Image.ImageNumber \
      $(echo ORIGINAL.JPG | tr -cd '0-9' | sed 's#^0*##g')" \
   -M"set Exif.Image.Orientation 3" \
   -M"set Exif.Image.XResolution 300/1" \
   -M"set Exif.Image.YResolution 300/1" \
   -M"set Exif.Image.ResolutionUnit 2" \
   -M"set Exif.Photo.DateTimeDigitized \
      $(stat --format="%y" ORIGINAL.JPG | awk -F '.' '{print $1}' | tr '-' ':')" \
   -M"set Exif.Photo.UserComment Ilford XP2 Super" \
   -M"set Exif.Photo.ExposureProgram 1" \
   -M"set Exif.Photo.ISOSpeedRatings 400" \
   -M"set Exif.Photo.FocalLength 50/1" \
   -M"set Exif.Image.MaxApertureValue 20/10" \
   -M"set Exif.Photo.LensMake KMZ" \
   -M"set Exif.Photo.LensModel Jupiter-8" \
   -M"set Exif.Photo.FileSource 1" \
   -M"set Exif.Photo.ColorSpace 1" \
   OUTPUT.JPG

As I previously mentioned, I tend to get my scans back upside down, which is why I'm usually setting the Orientation tag to 3 (180-degree turn). Other useful values are 1 (do nothing), 6 (rotate 90 degrees clockwise) and 8 (rotate 270 degrees clockwise).

Keeping track

When you’re going to shoot a lot of film it can become a bit of a challenge keeping track of the various rolls of film you may have at an arbitrary point in your workflow. FilmTrackr has you covered.

Manual

You can find a scanned manual for the Zorki-4 rangefinder camera on Mike Butkus’ website.

Moar

If you want to read more about film photography you may want to consider adding Film Is Not Dead and Hot Shots to your bookshelf. You may also want to browse through istillshootfilm.org which seems to be a pretty good resource as well. And for your viewing pleasure, the [FRAMED] Film Show on YouTube.

Interview with Chris Jones

exogenesis by Chris Jones
Would you like to tell us something about yourself?

I live in Melbourne, Australia, and have worked as an illustrator, concept artist, matte painter and 3D artist on a variety of print, game, film and TV projects. I’m probably best known for my short animated film The Passenger, and my on-going 3D human project.

Do you paint professionally or as a hobby artist?

Mostly professionally, but I’m hoping to work up some more personal pieces soon.

When and how did you end up trying digital painting for the first time?

I dabbled with Logo and Mouse Paint when I was a kid in the 1980s, but it wasn’t until 1996 that I was able to properly migrate my drawing and painting skills to the digital domain when I bought a Wacom tablet and Painter 4. I’ve barely touched a pencil or paintbrush ever since.

What is it that makes you choose digital over traditional painting?

Undo, redo and being able to revert to earlier versions; the freedom to experiment as much as I want without wasting expensive art materials; being able to use and create tools that don’t exist in reality; not needing any physical storage space (other than a hard drive); being able to back-up the originals without any loss of quality; no waiting for paint to dry, dealing with a clogged airbrush, wrestling with Frisket and getting paint fumes up my nose … need I go on? :)

I must admit though, I do miss perusing all the nice tools and materials in the art shop.

How did you first find out about open source communities? What is your opinion about them?

I don’t remember how I first found out about them, but it must have been sometime soon after I started using the internet in 1996. I was a bit puzzled as to what would possess people to give their commercially viable software away for free, with no strings attached.

Now that I’m using such software, I find that I have a more direct influence on the shape and direction of the tools I use, which provides me with incentive to contribute, and probably helps explain some of the driving force behind these communities.

Have you worked for any FOSS project or contributed in some way?

Krita is the only one I’ve been involved with in any way so far, other than Blender, which I’ve only skimmed the surface of.

How did you find out about Krita?

I first came across it a few years ago when I was looking for a replacement for my aging copy of Painter 8, but at the time it was either too uncooked or simply unavailable on Windows. In early 2013 I saw it mentioned in a forum discussion about Photoshop alternatives, so I thought I’d take another look.

What was your first impression?

It was still early on in its Windows development at the time so it was full of bugs and highly unstable, but despite this I was pleasantly surprised to find that feature-wise it compared favourably with Painter 8 (which itself was pretty buggy anyway), and even gave Painter 12 a run for its money. It was like a version of Painter with all the bloat stripped out, and some long-standing fundamental issues and omissions finally addressed.

What do you love about Krita?

The pop-up menu; flexible UI; transform and assistant tools; a plethora of colour blending modes; mirror modes; being able to flip the image instantaneously using the “m” key; being able to convert the currently selected brush into an eraser using the “e” key; undo history; layers that behave predictably; responsive developers who engage frequently and openly with users; the rapid pace of development; and of course an ongoing stream of free upgrades!

What do you think needs improvement in Krita? Also, anything that you really hate?

Nothing I’m particularly hateful about – mainly I’d like to see speed improvements, particularly when using large brushes and large images. I think I heard some murmurings about progress in that area though. Changing the layer order can also be quite sluggish, amongst other things. Stability is getting pretty good now, although there’s still room for improvement.

I’ve accumulated a list of niggles and requests that I’ll get around to verifying/reporting one of these days…

In your opinion, what sets Krita apart from the other tools that you use?

Most apps feel like they’re designed for someone else, and I have to try and adapt to their workflow. Krita feels more like it was built with me in mind, and whenever I feel something should behave differently, someone is usually already on the case before I even make mention of it. As far as 2D software goes, Krita fits my needs better than any of the alternatives.

Anything else you’d like to share?

Krita has infiltrated my 3D work as well (which can be found at www.chrisj.com.au), and it’s proven to be well suited to editing textures, as well as painting them from scratch. I look forward to using it more extensively in this area.

February 14, 2015

The Sangre de Cristos wish you a Happy Valentine's Day

[Snow hearts on the Sangre de Cristo mountains]

The snow is melting fast in the lovely sunny weather we've been having; but there's still enough snow on the Sangre de Cristos to see the dual snow hearts on the slopes of Thompson Peak above Santa Fe, wishing everyone for miles around a happy Valentine's Day.

Dave and I are celebrating for a different reason: yesterday was our 1-year anniversary of moving to New Mexico. No regrets yet! Even after a tough dirty work session clearing dead sage from the yard.

So Happy Valentine's Day, everyone! Even if you don't put much stock in commercial Hallmark holidays. As I heard someone say yesterday, "Valentine's day is coming up, and you know what that means. That's right: absolutely nothing!"

But never mind what you may think about the holiday -- you just go ahead and have a happy day anyway, y'hear? Look at whatever pretty scenery you have near you; and be sure to enjoy some good chocolate.