October 21, 2014

A GNOME Kernel wishlist

GNOME has long had relationships with Linux kernel development, in that we would have some developers do our bidding, helping us solve hard problems. Features like inotify, memfd and kdbus were all originally driven by the desktop.

I've posted a wishlist of kernel features we'd like to see implemented on the GNOME Wiki, and referenced it on the kernel mailing-list.

I hope it sparks healthy discussions about alternative (and possibly existing) features, allowing us to make instant progress.

October 20, 2014

KMZ Zorki 4 (Soviet Rangefinder)

Leica rangefinders

Rangefinder type cameras predate modern single lens reflex cameras. People still use them; it’s just a different way of shooting. Since they’re no longer a mainstream type of camera, most manufacturers stopped making them a long time ago. Except Leica: Leica still makes digital and film rangefinders, and as you might guess, they come at significant cost. Even old Leica film rangefinders easily cost upwards of 1000 EUR. While Leica wasn’t the only brand to manufacture rangefinders throughout photographic history, it was (and still is) certainly the most iconic one.

Zorki rangefinders

Now the Soviets essentially tried to copy Leica’s cameras; the result, the Zorki camera, was produced at KMZ. Many different versions exist, and with nearly 2 million cameras produced across more than 15 years, the Zorki 4 was without a doubt its most popular incarnation. Many consider the Zorki 4 to be the camera where the Soviets got it right.

That said, the Zorki 4 more or less looks like a Leica M with its single coupled viewfinder/rangefinder window. In most other ways it’s more like a pre-M Leica, with its M39 lens screw mount. Earlier Zorki 4s have a body finished with vulcanite, which is tough as nails but, if damaged, nearly impossible to fix or replace. Later Zorkis have a body finished with relatively cheap leatherette, which is much more easily damaged and commonly starts to peel off, but which should be relatively easy to make better than new. Most Zorkis come with either a Jupiter-8 50mm f/2.0 lens (a Zeiss Sonnar inspired design) or an Industar-50 50mm f/3.5 (a Zeiss Tessar inspired design). I’d highly recommend getting a Jupiter-8 if you can find one.

Buying a Zorki with a Jupiter

If you’re looking to buy a Zorki there are a few things to be aware of. Zorkis were produced during the fifties, sixties and seventies in Soviet Russia, often favoring quantity over quality, presumably to meet quotas. The same is likely true for most Soviet lenses as well. So they are both old and may not have met high quality standards to begin with. When buying a Zorki you need to keep in mind that it might need repairs and a CLA (clean / lube / adjust). My particular Zorki had a dim viewfinder because of dirt both inside and out, the shutter speed dial was completely stuck at 1/60s, and the film takeup spool was missing. I sent my Zorki and Jupiter-8 to Oleg Khalyavin for repairs, shutter curtain replacement and a CLA. Oleg was also able to provide me with a replacement film takeup spool or two. All in all, having work done on your Zorki will easily set you back about 100 EUR including shipping. Keep this in mind before buying. And even if you get your Zorki in a usable state, you’ll probably have to have it serviced at some point. You may very well want to consider having it serviced sooner rather than later, allowing yourself the benefit of enjoying a newly serviced camera.

Zorkis come without a lens hood, and the Jupiter-8’s glass elements are typically only single coated, so a hood isn’t exactly a luxury. A suitable aftermarket lens hood isn’t hard to find though.

Choosing a film stock

So now you have a nice Zorki 4, waiting for film to be loaded into it. As of this writing (2014) there is a smörgåsbord of film available. I like shooting black & white, and I often shoot Ilford XP2 Super 400. Ilford’s XP2 is the only B&W film left that’s meant to be processed along with regular color negative film in regular C41 chemicals (so it can be processed by a one-hour photo service). Like most color negative film, XP2 has a big exposure latitude, remaining usable between ISO 50 and 800, which isn’t a luxury since the Zorki does not come with a lightmeter. While Ilford recommends shooting it at ISO 400, I’d suggest shooting it as if it were ISO 200 film, giving you two stops of both underexposure and overexposure leeway.

I haven’t shot any real color negative film in the Zorki yet, but Kodak New Portra 400 quickly comes to mind. An inexpensive alternative could be Fuji Superia X-TRA 400, which can be found very cheaply, as it is sold as most stores’ house-brand 400 speed film.

Shooting a Zorki

Once you have a Zorki, there are still some caveats you need to be aware of. Most importantly, don’t change shutter speed while the shutter isn’t cocked (cocking the shutter is done by advancing the film); not heeding this warning may result in internal damage to the camera mechanism. Other notable issues of lesser importance are minding the viewfinder’s parallax error (particularly when shooting at short distances) and making sure you load the film straight.

As I’ve already mentioned, the Zorki 4 does not come with a lightmeter, which means the camera won’t help you get the exposure right; you are on your own. You could use a pricy dedicated light meter (or a less pricy smartphone app), either of which is fairly cumbersome. But XP2’s exposure latitude makes an educated-guesswork approach feasible. There’s a rule-of-thumb system called Sunny 16 for making educated guesstimates of exposure. Sunny 16 states that if you set your shutter speed to the closest reciprocal of your film speed, bright sunny daylight requires an aperture of f/16 to get a decent exposure. Other weather conditions require opening up the aperture according to this table:

Sunny f/16
Slightly Overcast f/11
Overcast f/8
Heavy Overcast f/5.6
Open Shade f/4

If you have doubts when classifying shooting conditions, you may want to err on the side of overexposure as color negative film tends to prefer overexposure over underexposure. If you’re shooting slide film you should probably avoid using Sunny 16 altogether, as slide film can be very unforgiving if improperly exposed.

Quick example: when shooting XP2 on an overcast day, assuming an alternate base ISO of 200 (as suggested earlier), the shutter speed should be set at 1/250th of a second and the aperture at f/8, giving a fairly large depth of field. If we want to reduce the depth of field, we can trade +2 stops of aperture for -2 stops of shutter speed, ending up shooting at 1/1000th of a second at f/4.
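For the scripting-inclined, the same Sunny 16 arithmetic can be sketched in shell. The weather table and the "rate XP2 at ISO 200" advice come from the text above; the list of shutter speeds is an assumption about a typical Zorki 4 dial, so adjust it for your own camera:

```shell
iso=200
condition=overcast

# Aperture from the Sunny 16 weather table above.
case "$condition" in
  sunny)             aperture=16 ;;
  slightly-overcast) aperture=11 ;;
  overcast)          aperture=8 ;;
  heavy-overcast)    aperture=5.6 ;;
  open-shade)        aperture=4 ;;
esac

# Shutter speed: the closest reciprocal of the film speed that the
# camera actually offers (assumed Zorki 4 speeds listed here).
shutter=$(printf '%s\n' 30 60 125 250 500 1000 |
  awk -v iso="$iso" '{d=$1-iso; if (d<0) d=-d; if (NR==1 || d<min) {min=d; best=$1}} END {print best}')

echo "Shoot at 1/${shutter}s at f/${aperture}"   # -> Shoot at 1/250s at f/8
```

Changing `condition` to open-shade would open the aperture up to f/4, matching the table.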

Having film processed

After shooting a roll of XP2 (or any roll of color negative film) you need to take it to a local photo shop, chemist or supermarket to have it processed, scanned and printed. Usually you’ll be able to have your film processed in C41 chemicals, scanned to CD and get a set of prints for about 15 EUR or so. Keep in mind that most shops cut your film roll into strips of 4, 5 or 6 negatives depending on the sleeves they use. Also, some shops might not offer scanning services without ordering prints, since scanning is an integral part of the printmaking process. Resulting JPEG scans are usually about 2 megapixel (1800×1200) equivalent, or sometimes slightly lower (1536×1024).

A particular note when using XP2: since it’s processed as if it were color negative film, it’s usually scanned as if it were color negative film too, so the resulting should-be-monochrome scans (and prints for that matter) can often have a slight color cast. This color cast varies; my particular lab usually does a fairly decent job, where the scans have a subtle warm color cast, which isn’t unpleasant at all. But I’ve heard about nasty purplish color casts as well. Regardless, keep in mind that you might need to convert the scans to proper monochrome manually, which can easily be done with any photo editing software in a heartbeat. The same goes for rotating the images: aside from the usual 90 degree turns, occasionally I get my images scanned upside down, where they need either 180 or 270 degree turns, and you’ll need to do that yourself as well.

Post-processing the images

First remove all useless data from the source JPEG, and in particular for XP2, remove the JPEG’s chroma (UV) channels to get rid of any color cast:

$ jpegtran -copy none -grayscale -optimize -perfect 0001.JPG > ZRK_0001.JPG

Then add basic EXIF metadata:

$ exiv2 \
   -M"set Exif.Image.Artist Pascal de Bruijn" \
   -M"set Exif.Image.Make KMZ" \
   -M"set Exif.Image.Model Zorki 4" \
   -M"set Exif.Image.ImageNumber $(echo 0001.JPG | tr -cd '0-9' | sed 's#^0*##g')" \
   -M"set Exif.Image.Orientation 3" \
   -M"set Exif.Image.XResolution 300/1" \
   -M"set Exif.Image.YResolution 300/1" \
   -M"set Exif.Image.ResolutionUnit 2" \
   -M"set Exif.Image.YCbCrPositioning 1" \
   -M"set Exif.Photo.ComponentsConfiguration 1 2 3 0" \
   -M"set Exif.Photo.FlashpixVersion 48 49 48 48" \
   -M"set Exif.Photo.ExifVersion 48 50 51 48" \
   -M"set Exif.Photo.DateTimeDigitized $(stat --format="%y" 0001.JPG | awk -F '.' '{print $1}' | tr '-' ':')" \
   -M"set Exif.Photo.UserComment Ilford XP2 Super" \
   -M"set Exif.Photo.ExposureProgram 1" \
   -M"set Exif.Photo.ISOSpeedRatings 400" \
   -M"set Exif.Photo.FocalLength 50/1" \
   -M"set Exif.Photo.LensMake KMZ" \
   -M"set Exif.Photo.LensModel Jupiter-8 50/2" \
   -M"set Exif.Photo.FileSource 1" \
   -M"set Exif.Photo.ColorSpace 1" \
   ZRK_0001.JPG


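The two command substitutions in the exiv2 invocation above are plain shell text transforms. As a quick illustration (using a freshly created placeholder file, and assuming GNU stat), they behave like this:

```shell
touch 0001.JPG   # placeholder file so the examples below have something to inspect

# Exif.Image.ImageNumber: keep only the digits of the filename and strip
# leading zeros, so 0001.JPG yields frame number 1.
frame=$(echo 0001.JPG | tr -cd '0-9' | sed 's#^0*##g')
echo "$frame"   # -> 1

# Exif.Photo.DateTimeDigitized: take the file's modification time, drop the
# fractional seconds and timezone, and turn the date's dashes into colons,
# producing EXIF's "YYYY:MM:DD HH:MM:SS" format.
digitized=$(stat --format="%y" 0001.JPG | awk -F '.' '{print $1}' | tr '-' ':')
echo "$digitized"
```

On a real scan, the file’s modification time is only a stand-in for the true digitization time, but it is usually close enough for film scans delivered on CD.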
If you want to read more about film photography you may want to consider adding Film Is Not Dead to your shelf.

October 19, 2014

What's next

Some thoughts on further steps in Synfig development....

TNT Drama Series ‘Legends’ Teaser


Loica, a production studio based in Santiago, Chile, have used a novel combination of photography, 3D scanning and Blender to produce a stunning promo for TNT’s recent drama series ‘Legends’, starring British actor Sean Bean.

Working with their office in Santa Monica, Hollywood, they collaborated with the Turner Creative team for a month, in which they took still photography and 3D scans to the next level, developing post-production techniques to create a sequence that showcases a hyper-real cinematic feel, fitting the high production values of the show and expressing its psychological mystery.

First steps

Starting with detailed photographs and 3D scans of the cast and props, they refined the models with sculpting tools, later adding layers such as hair, shaders and lighting to emphasize the realism and life of the scenes. They built Sean Bean as a low poly mesh object and using a multires modifier, they sculpted fine detail. The model was rigged to enable the team to easily pose and adapt the character into multiple positions to match the basis photographs, which combined with the scans gave them the ability to fully control the look and feel of each shot.


Shading was added in the form of several passes including diffuse layers enhanced with cloning and stencil work while reflection layers were used to add life to the eyes and a feeling of texture to the various materials of character’s clothes and props. Movement of light sources in the scenes was used to bring motion to the otherwise completely still characters giving a sense of time frozen in a moment.


Blender’s hair system was used in order to produce simulations of Sean’s beard hair and the team even went as far as adding smaller details such as skin hair on the nose and eyelashes. In motion, these touches bring a sense of true depth and realism to the shots.

‘Memory Loss’

The team has made good use of the Cycles renderer, using a modified glass shader to produce a heavy depth of field effect that they have named ‘Memory Loss’. They explored this route after finding that simple post-process depth of field effects weren’t producing what they envisioned. Passing a 3D plane with this shader through the character allowed them to finely control the spatial and focal elements of the shot.


The studio has reported that the team had a great experience with Blender especially in terms of the single program workflow. To be able to model, texture, sculpt, preview and render in the same package helped streamline the workflow which accelerated production.


Loica’s show reel exhibits a broad range of styles without watering down the quality and beauty of the work they do. From the artsy UNICEF promo and cheeky use of visual effects for Volkswagen to the refined quality of ABC’s ‘Once Upon a Time’ promo, they have proved themselves adept in multiple fields of graphic production. The promo can be viewed here and their website is http://loica.tv/


Stellarium 0.13.1 has been released!

After 3 months of development, the Stellarium development team is proud to announce the first bugfix release in the 0.13.x series: version 0.13.1.

This release brings a few new features and fixes:
- Added: Light layer for old_style landscapes
- Added: Auto-detect location via network lookup.
- Added: Seasonal rules for displaying constellations
- Added: Coordinates can be displayed as decimal degrees (LP: #1106743)
- Added: Support of multi-touch gestures on Windows 8 (LP: #1165754)
- Added: FOV on bottom bar can be displayed in DMS rather than fractional degrees (LP: #1361582)
- Added: Oculars plugins support eyepieces with permanent crosshairs (LP: #1364139)
- Added: Pointer Coordinates plugin can display coordinates in systems other than RA/Dec (J2000.0) (LP: #1365784, #1377995)
- Added: Angle Measure Plugin can measure positional angles to the horizon now (LP: #1208143)
- Added: Search tool can search by positions in systems other than RA/Dec (J2000.0) (LP: #1358706)
- Fixed: Galactic plane renamed to correct: Galactic equator (LP: #1367744)
- Fixed: Speed issues when computing lots of comets (LP: #1350418)
- Fixed: Spherical mirror distortion work correctly now (LP: #676260, #1338252)
- Fixed: Location coordinates on the bottom bar displayed correctly now (LP: #1357799)
- Fixed: Ecliptic coordinates for J2000.0 and grids displayed correctly now (LP: #1366567, #1369166)
- Fixed: Rule for selecting celestial objects (LP: #1357917)
- Fixed: Loading extra star catalogs (LP: #1329500, #1379241)
- Fixed: Creates spurious directory on startup (LP: #1357758)
- Fixed: Various GUI/rendering improvements (LP: #1380502, #1320065, #1338252, #1096050, #1376550, #1382689)
- Fixed: "missing disk in drive <whatever>" (LP: #1371183)

A huge thanks to our community whose contributions help to make Stellarium better!

October 18, 2014

Synfig Studio 0.64.2

The new stable version of Synfig Studio is released!...

October 16, 2014

Aspens are turning the mountains gold

Last week both of the local mountain ranges turned gold simultaneously as the aspens turned. Here are the Sangre de Cristos on a stormy day:

[Sangre de Cristos gold with aspens]

And then over the weekend, a windstorm blew a lot of those leaves away, and a lot of the gold is gone now. But the aspen groves are still beautiful up close ... here's one from Pajarito Mountain yesterday.

[Sangre de Cristos gold with aspens]

October 15, 2014

Quick update

Hi all

Just today I realized how long it's been since my last post; sadly, being in Cuba prevents me from posting regular updates.
These last months I've been working on polishing the FillHoles tools a lot, for purposes way beyond regular sculpting, to the point of making them a very powerful tool in their own right for mesh healing. I will probably make a short video soon featuring a complete workflow for those tools, but honestly, this offline situation, which has now lasted more than a year, is demotivating me a lot.
I've started working on a quadrangulation tool that is proving challenging and interesting enough to drive me through this situation.
I hope in my next post I will be a little happier :)

Cheers to all

GNOME Software and Fonts

A few people have asked me now “How do I make my font show up in GNOME Software” and until today my answer has been something along the lines of “mrrr, it’s complicated“.

What we used to do is treat each font file in a package as an application, and then try to merge them together using some metrics found in the font and 444 semi-automatically generated AppData files built from a manually updated .csv file. This wasn’t ideal, as fonts were being renamed, added and removed, which quickly made the .csv file obsolete. The summaries and descriptions were not translated and were hard to modify. We used the pre-0.6 format AppData files, as the MetaInfo specification did not exist when this stuff was hacked up just in time for Fedora 20.

I’ve spent the better part of today making this a lot more sane, but in the process I’m going to need a bit of help from packagers in Fedora, and maybe even helpful upstreams. These are the notes of what I’ve got so far:

Font components are supersets of font faces, so we’d include fonts together that make a cohesive set; for instance, “SourceCode” would consist of “SourceCodePro”, “SourceSansPro-Regular” and “SourceSansPro-ExtraLight”. This is so the user can press one button and get a set of fonts, rather than having to install something new when they’re in an application designing something. Font components need a one-line summary for GNOME Software and optionally a long description. The icon and screenshots are automatically generated.

So, what do you need to do if you maintain a package with a single font, or where all the fonts are shipped in the same (sub)package? Simply ship a file like this as /usr/share/appdata/Liberation.metainfo.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name <you@domain> -->
<component type="font">
  <id>Liberation</id>
  <name>Liberation</name>
  <summary>Open source versions of several commercial fonts</summary>
  <description>
    <p>
      The Liberation Fonts are intended to be replacements for Times New Roman,
      Arial, and Courier New.
    </p>
  </description>
  <url type="homepage">http://fedorahosted.org/liberation-fonts/</url>
</component>

There can be up to 3 paragraphs of description, and the summary has to be just one line. Try to avoid too much technical content here; this is designed to be shown to end users, who probably don’t know what TTF means or what MSCoreFonts are.

It’s a little more tricky when there are multiple source tarballs for a font component, or when the font is split into subpackages by a packager. In this case, each subpackage needs to ship something like this as /usr/share/appdata/LiberationSerif.metainfo.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name <you@domain> -->
<component type="font">
  <id>LiberationSerif</id>
  <extends>Liberation</extends>
</component>

This won’t end up in the final metadata (or be visible) in the software center, but it will tell the metadata extractor that LiberationSerif should be merged into the Liberation component. All the automatically generated screenshots will be moved to the right place too.

Moving the metadata to font packages makes the process much more transparent, letting packagers write their own descriptions and actually influence how things show up in the software center. I’m happy to push some of my existing content from the .csv file upstream.

These MetaInfo files are not supposed to replace the existing fontconfig files, nor do I think they should be merged into one file or format. If your package just contains one font used internally, or where there is only partial coverage of the alphabet, I don’t think we want to show this in GNOME Software, and thus it doesn’t need any new MetaInfo files.

October 14, 2014

Blenderart Mag Issue #45 now available

Welcome to Issue #45, “Cycles Circus”

Come jump on the Cycles Circus merry-go-round with us as we not only explore some fun features of Cycles, but also play on a comet and meet the geniuses behind Ray and Clovis.

So grab your copy today. Also be sure to check out our gallery of wonderful images submitted by very talented members of our community.

Table of Contents: 

  • Quick Comet Animation
  • Book Review: Cycles Materials and Textures
  • Baby Elephant
  • Ray and Clovis

And Lots More…

October 12, 2014

Synfig Studio 0.64.2 - Release Candidate #2

The second release candidate of upcoming Synfig Studio 0.64.2 is available for download now....

October 11, 2014

Railroading exponentially

or: Smart communities can still be stupid

I attended my first Los Alamos County Council meeting yesterday. What a railroad job!

The controversial issue of the day was the town's "branding". Currently, as you drive into Los Alamos on highway 502, you pass a tasteful rock sign proclaiming "LOS ALAMOS: WHERE DISCOVERIES ARE MADE". But back in May, the county council announced the unanimous approval of a new slogan, for which they'd paid an ad agency some $55,000: "LIVE EXPONENTIALLY".

As you might expect in a town full of scientists, the announcement was greeted with much dismay. What is it supposed to mean, anyway? Is it a reference to exponential population growth? Malignant tumor growth? Gaining lots of weight as we age?

The local online daily, tired of printing the flood of letters protesting the stupid new slogan, ran a survey about the "Live Exponentially" slogan. The results were that 8.24% liked it, 72.61% didn't, and 19.16% didn't like it and offered alternatives or comments. My favorites were Dave's suggestion of "It's Da Bomb!", and a suggestion from another reader, "Discover Our Secrets"; but many of the alternate suggestions were excellent, or hilarious, or both -- follow the link to read them all.

For further giggles, try a web search on the term. If you search without quotes, Ebola tops the list. With quotes, you get mostly religious tracts and motivational speakers.

The Council Meeting

(The rest of this is probably only of interest to Los Alamos folk.)

Dave read somewhere -- it wasn't widely announced -- that Friday's council meeting included an agenda item to approve spending $225,000 -- yes, nearly a quarter of a million dollars -- on "brand implementation". Of course, we had to go.

In the council discussion leading up to the call for public comment, everyone spoke vaguely of "branding" without mentioning the slogan. Maybe they hoped no one would realize what they were really voting for. But in the call for public comment, Dave raised the issue and urged them to reconsider the slogan.

Kristin Henderson seemed to have quite a speech prepared. She acknowledged that "people who work with math" universally thought the slogan was stupid, but she said that people from a liberal arts background, like herself, use the term to mean hiking, living close to nature, listening to great music, having smart friends and all the other things that make this such a great place to live. (I confess to being skeptical -- I can't say I've ever heard "exponential" used in that way.)

Henderson also stressed the research and effort that had already gone into choosing the current slogan, and dismissed the idea that spending another $50,000 on top of the $55k already spent would be "throwing money after bad." She added that showing the community some images to go with the slogan might change people's minds.

David Izraelevitz admitted that being an engineer, he initially didn't like "Live Exponentially". But he compared it to Apple's "Think Different": though some might think it ungrammatical, it turned out to be a highly successful brand because it was coupled with pictures of Gandhi and Einstein. (Hmm, maybe that slogan should be "Live Exponential".)

Izraelevitz described how he convinced a local business owner by showing him the ad agency's full presentation, with pictures as well as the slogan, and said that we wouldn't know how effective the slogan was until we'd spent the $50k for logo design and an implementation plan. If the council didn't like the results they could choose not to go forward with the remaining $175,000 for "brand implementation". (Councilor Fran Berting had previously gotten clarification that those two parts of the proposal were separate.)

Rick Reiss said that what really mattered was getting business owners to approve the new branding -- "the people who would have to use it." It wasn't so important what people in the community thought, since they didn't have logos or ads that might incorporate the new branding.

Pete Sheehey spoke up as the sole dissenter. He pointed out that most of the community input on the slogan has been negative, and that should be taken into account. The proposed slogan might have a positive impact on some people but it would have a negative impact on others, and he couldn't support the proposal.

Fran Berting said she was "not all that taken" with the slogan, but agreed with Izraelevitz that we wouldn't know if it was any good without spending the $50k. She echoed the "so much work has already gone into it" argument. Reiss also echoed "so much work", and that he liked the slogan because he saw it in print with a picture.

But further discussion was cut off. It was 1:30, the fixed end time for the meeting, and chairman Geoff Rodgers (who had pretty much stayed out of the discussion to this point) called for a vote. When the roll call got to Sheehey, he objected to the forced vote while they were still in the middle of a discussion. But after a brief consultation on Robert's Rules of Order, chairman Rodgers declared the discussion over and said the vote would continue. The motion was approved 5-1.

The Exponential Railroad

Quite a railroading. One could almost think it had been planned that way.

First, the item was listed as one of two in the "Consent Agenda" -- items which were expected to be approved all together in one vote with no discussion or public comment. It was moved at the last minute into "Business"; but that put it last on the agenda.

Normally that wouldn't have mattered. But although the council more often meets in the evenings and goes as long as it needs to, Friday's meeting had a fixed time of noon to 1:30. Even I could see that wasn't much time for all the items on the agenda.

And that mid-day timing meant that working folk weren't likely to be able to listen or comment. Further, the branding issue didn't come up until 1 pm, after some of the audience had already left to go back to work. As a result, there were only two public comments.

Logic deficit

I heard three main arguments repeated by every council member who spoke in favor:

  1. the slogan makes much more sense when viewed with pictures -- they all voted for it because they'd seen it presented with visuals;
  2. a lot of time, effort and money has already gone into this slogan, so it didn't make sense to drop it now; and
  3. if they didn't like the logo after spending the first $50k, they didn't have to approve the other $175k.

The first argument doesn't make any sense. If the pictures the council saw were so convincing, why weren't they showing those images to the public? Why spend an additional $50,000 for different pictures? I guess $50k is just pocket change, and anyone who thinks it's a lot of money is just being silly.

As for the second and third, they contradict each other. If most of the board thinks now that the initial $50k contract was so much work that we have to go forward with the next $50k, what are the chances that they'll decide not to continue after they've already invested $100k?

Exponentially low, I'd say.

I was glad of one thing, though. As a newcomer to the area faced with a ballot next month, it was good to see the council members in action, seeing their attitudes toward spending and how much they care about community input. That will be helpful come ballot time.

If you're in the same boat but couldn't make the meeting, catch the October 10, 2014 County Council Meeting video.

And now for some hardware (Onda v975w)

Prodded by Adam Williamson's fedlet work, and by my inability to get an Android phone to display anything, I bought an x86 tablet.

At first, I was more interested in buying a brand-name one, such as the Dell Venue 8 Pro Adam has, or the Lenovo Miix 2 that Benjamin Tissoires doesn't seem to get enough time to hack on. But all those tablets are around 300€ at most retailers, and have a smaller 7 or 8-inch screen.

So I bought a "not exported out of China" tablet, the 10" Onda v975w. The prospect of getting a no-name tablet scared me a little. Would it be as "good" (read bad) as a PadMini or an Action Pad?


Well, the hardware's pretty decent, and feels rather solid. There's a small amount of light leakage on the side of the touchscreen, but not something too noticeable. I wish it had a button on the bezel to mimick the Windows button on some other tablets, but the edge gestures should replace it nicely.

The screen is pretty gorgeous and its high DPI triggers the eponymous mode in GNOME.

With help of various folks (Larry Finger, and the aforementioned Benjamin and Adam), I got the tablet to a state where I could use it to replace my force-obsoleted iPad 1 to read comic books.

I've put up a wiki page with the status of hardware/kernel support. It's doesn't contain all my notes just yet (sound is working, touchscreen will work very very soon, and various "basic" features are being worked on).

I'll be putting up the fixed-up Wi-Fi driver and more instructions about installation on the Wiki page.

And if you want to make the jump, the tablets are available at $150 plus postage from Aliexpress.

Update: On Google+ and in comments of this blog, it was pointed out that the seller on Aliexpress was trying to scam people. All my apologies, I just selected the cheapest from this website. I personally bought it on Amazon.fr using NewTec24 FR as the vendor.

October 09, 2014

A bit about taking pictures

Though I like going out and taking pictures at the places I visit, I haven’t actually blogged about taking pictures before. I thought I should share some tips and experiences.

This is not a “What’s in my bag” kind of post. I won’t, and can’t, tell you what the best cameras or lenses are. I simply don’t know. These are some things I’ve learnt and that have worked for me and my style of taking pictures, and wish I knew earlier on.


Keep gear light and compact, and focus on what you have. You will often bring more than you need. If you get the basics sorted out, you don’t need much to take a good picture. Identify a couple of lenses you like using and get to know their qualities and limits.

Your big lenses aren’t going to do you any good if you’re reluctant to take them with you. Accept that your stuff is going to take a beating. I used to obsess over scratches on my gear, I don’t anymore.

I don’t keep a special bag. I wrap my camera in a hat or hoody and lenses in thick socks and toss them into my rucksack. (Actually, this is one tip you might want to ignore.)

Watch out for gear creep. It’s tempting to wait until that new lens comes out and get it. Ask yourself: will this make me go out and shoot more? The answer usually is probably not, and the money is often better spent on that trip to take those nice shots with the stuff you already have.


Try some old manual lenses to learn with. Not only are these cheap and able to produce excellent image quality, it’s a great way to learn how aperture, shutter speed, and sensitivity affect exposure. Essential for getting the results you want.

I only started understanding this after having inherited some old lenses and started playing around with them. The fact they’re all manual makes you realise quicker how things physically change inside the camera when you modify a setting, compared to looking at abstract numbers on the back of the screen. I find them much more engaging and fun to use compared to full automatic lenses.

You can get M42 lens adapters for almost any camera type, but they work especially well with mirrorless cameras. Here’s a list of the Asahi Takumar (old Pentax) series of lenses, which has some gems. You can pick them up off eBay for just a few tenners.

My favourites are the SMC 55mm f/1.8 and SMC 50mm f/1.4. They produce lovely creamy bokeh and great in-focus sharpness at the same time.


A nice side effect of having a camera on you is that you look at the world differently. Crouch. Climb on things. Lean against walls. Get unique points of view (but be careful!). Annoy your friends because you need to take a bit more time photographing that beetle.

Some shots you take might be considered dumb luck. However, it’s up to you to increase your chances of “being lucky”. You might get lucky wandering around through that park, but you know you certainly won’t be when you just sit at home reading the web about camera performance.

Don’t worry about the execution too much. The important bit is that your picture conveys a feeling. Some things can be fixed in post-production. You can’t fix things like focus or motion blur afterwards, but even these are details and not getting them exactly right won’t mean your picture will be bad.

Don’t compare

Even professional photographers take bad pictures. You never see the shots that didn’t make it. Being a good photographer is as much about being a good editor. The very best still take crappy shots sometimes, and alright shots most of the time. You just don’t see the bad ones.

Ask people you think are great photographers to point out something they're unhappy about in that amazing picture they took. Chances are they will point out several flaws that you weren't even aware of.


Don’t forget to have a place to actually post your images. Flickr or Instagram are fine for this. We want to see your work! Even if it’s not perfect in your eyes. Do your own thing. You have your own style.


I hope that was helpful. Now stop reading and don’t worry too much. Get out there and have fun. Shoot!

What's nesting in our truck's engine?

We park the Rav4 outside, under an overhang. A few weeks ago, we raised the hood to check the oil before heading out on an adventure, and discovered a nest of sticks and grass wedged in above the valve cover. (Sorry, no photos -- we were in a hurry to be off and I didn't think to grab the camera.)

Pack rats were the obvious culprits, of course. There are lots of them around, and we've caught quite a few pack rats in our live traps. Knowing that rodents can be a problem since they like to chew through hoses and wiring, we decided we'd better keep an eye on the Rav and maybe investigate some sort of rodent-repelling technology.

Sunday, we got back from another adventure, parked the Rav in its usual place, went inside to unload before heading out for an evening walk, and when we came back out, there was a small flock of birds hanging around under the Rav. Towhees! Not only hanging around under the still-warm engine, but several times we actually saw one fly between the tires and disappear.

Could towhees really be our engine nest builders? And why would they be nesting in fall, with the days getting shorter and colder?

I'm keeping an eye on that engine compartment now, checking every few days. There are still a few sticks and juniper sprigs in there, but no real nest has reappeared so far. If it does, I'll post a photo.

October 08, 2014

Wed 2014/Oct/08

  • Growstuff's Crowdfunding Campaign for an API for Open Food Data

    During GUADEC 2012, Alex Skud Bailey gave a keynote titled What's Next? From Open Source to Open Everything. It was about how principles like de-centralization, piecemeal growth, and shared knowledge are being applied in many areas, not just software development. I was delighted to listen to such a keynote, which validated my own talk from that year, GNOME and the Systems of Free Infrastructure.

    During the hallway track I had the chance to talk to Skud. She is an avid knitter and was telling me about Ravelry, a web site for people who knit/crochet. They have an excellent database of knitting patterns, a yarn database, and all sorts of deep knowledge on the craft gathered over the years.

    At that time I was starting my vegetable garden at home. It turned out that Skud is also an avid gardener. We ended up talking about how it would be nice to have a site like Ravelry, but for small-scale food gardeners. You would be able to track your own crops, but also consult about the best times to plant and harvest certain species. You would be able to say how well a certain variety did in your location and climate. Over time, by aggregating people's data, we would be able to compile a free database of crop data, local varieties, and climate information.

    Growstuff begins


    Skud started coding Growstuff from scratch. I had never seen a project start from zero-lines-of-code, and be run in an agile fashion, for absolutely everything, and I must say: I am very impressed!

    Every single feature runs through the same process: definition of a story, pair programming, integration. Newbies are encouraged to participate. They pair up with a more experienced developer, and they get mentored.

    They did that even for the very basic skeleton of the web site: in the beginning there were stories for "the web site should display a footer with links to About and the FAQ", and "the web site should have a login form". I used to think that in order to have a collaboratively-developed project, one had to start with at least a basic skeleton, or a working prototype — Growstuff proved me wrong. By having a friendly, mentoring environment with a well-defined process, you can start from zero-lines-of-code and get excellent results quickly. The site has been fully operational for a couple of years now, and it is a great place to be.

    Growstuff is about the friendliest project I have seen.

    Local crop data

    Tomato heirloom varieties

    I learned the basics of gardening from a couple of "classic" books: the 1970s books by John Seymour which my mom had kept around, and How to Grow More Vegetables, by John Jeavons. These are nominally excellent — they teach you how to double-dig to loosen the soil and keep the topsoil, how to transplant fragile seedlings so you don't damage them, how to do crop rotation.

    However, their recommendations on garden layouts or crop rotations are biased towards the author's location. John Seymour's books are beautifully illustrated, but are about the United Kingdom, where apples and rhubarb may do well, but would be scorched where I live in Mexico. Jeavons's book is biased towards California, which is somewhat closer climate-wise to where I live, but some of the species/varieties he mentions are practically impossible to get here — and, of course, species which are everyday fare here are completely missing in his book. Pity the people outside the tropics, for whom mangoes are a legend from faraway lands.

    The problem is that the books lack knowledge of good crops for wherever you may live. This is the kind of thing that is easily crowdsourced, where "easily" means a Simple Matter Of Programming.

    An API for Open Food Data

    Growstuff has been gathering crop data from people's use of the site. Someone plants spinach. Someone harvests tomatoes. Someone puts out seeds for trade. The next steps are to populate the site with fine-grained varieties of major crops (e.g. the zillions of varieties of peppers or tomatoes), and to provide an API to access planting information in a convenient way for analysis.

    Right now, Growstuff is running a fundraising campaign to implement this API — letting developers work on it full-time, instead of squeezing it out of their "free time".

    I encourage you to give money to Growstuff's campaign. These are good people.

    To give you a taste of the non-trivialness of implementing this, I invite you to read Skud's post on interop and unique IDs for food data. This campaign is not just about adding some features to Growstuff; it is about making it possible for open food projects to interoperate. Right now there are various free-culture projects around food production, but little communication between them. This fundraising campaign attempts to solve part of that problem.

    I hope you can contribute to Growstuff's campaign. If you are into local food production, local economies, crowdsourced databases, and that sort of thing — these are your people; help them out.


October 07, 2014

Synfig Studio 0.64.2 - Release Candidate

The Release Candidate of the new bugfix release is available for download!...

October 03, 2014

Thu 2014/Oct/02

  • Announcing the safety-list

    I'm happy to announce that we now have a safety-list mailing list. This is for discussions around safety, privacy, and security.

    This is some introductory material which you may have already read:

    Everyone is welcome to join! The list's web page is here: https://mail.gnome.org/mailman/listinfo/safety-list

    Thanks to the sysadmin team for their quick response in creating this list!

  • Talleres Libres gets a Sewing workshop

    Since a month ago, when I broke my collarbone after flying over the handlebars, I've been incapacitated in the bicycling and woodworking departments. So, I've been learning to sew. Oralia introduced me to her sewing machine, and I've been looking at leatherworking videos.

    The project: bicycle luggage — bike panniers, which are hard to get in my town.

    First prototype of bike panniers

    Those are a work-in-progress of a pair of small panniers for Luciana's small bike. I still have to add strips of reinforcing leather on all seams, flaps to close the bags, and belts for mounting on the bike's luggage rack.

    I'm still at the "I have no idea what I'm doing" stage. When I get to the point of knowing what I'm doing, I'll post patterns/instructions.

October 02, 2014

Photographing a double rainbow

[double rainbow]

The wonderful summer thunderstorm season here seems to have died down. But while it lasted, we had some spectacular double rainbows. And I kept feeling frustrated when I took the SLR outside only to find that my 18-55mm kit lens was nowhere near wide enough to capture it. I could try stitching it together as a panorama, but panoramas of rainbows turn out to be quite difficult -- there are no clean edges in the photo to tell you where to join one image to the next, and automated programs like Hugin won't even try.

There are plenty of other beautiful vistas here too -- cloudscapes, mesas, stars. Clearly, it was time to invest in a wide-angle lens. But how wide would it need to be to capture a double rainbow?

All over the web you can find out that a rainbow has a radius of 42 degrees, so you need a lens that covers 84 degrees to get the whole thing.

But what about a double rainbow? My web searches came to naught. Lots of pages talk about double rainbows, but Google wasn't finding anything that would tell me the angle.

I eventually gave up on the web and went to my physical bookshelf, where Color and Light in Nature gave me a nice table of primary and secondary rainbow angles for various wavelengths of light. It turns out the 42 degrees everybody quotes is for light of 600 nm wavelength, in the orange part of the spectrum. At that wavelength, the primary angle is 42.0° and the secondary angle is 51.0°.

Armed with that information, I went back to Google and searched for double rainbow 51 OR 102 angle and found a nice Slate article on a Double rainbow and lightning photo. The photo in the article, while lovely (lightning and a double rainbow in the South Dakota badlands), only shows a tiny piece of the rainbow, not the whole one I'm hoping to capture; but the article does mention the 51-degree angle.

Okay, so 51°×2 captures both bows at 600 nm. But what about other wavelengths? A typical eye can see from about 400 nm (deep violet) to about 760 nm (deep red). From the table in the book:

Wavelength   Primary   Secondary
400 nm       40.5°     53.7°
600 nm       42.0°     51.0°
700 nm       42.4°     50.3°

Notice that while the primary angles get smaller with shorter wavelengths, the secondary angles go the other way. That makes sense if you remember that the outer rainbow has its colors reversed from the inner one: red is on the outside of the primary bow, but the inside of the secondary one.

So if I want to photograph a complete double rainbow in one shot, I need a lens that can cover at least 108 degrees.

What focal length lens does that translate to? Howard's Astronomical Adventures has a nice focal length calculator. If I look up my Rebel XSi on Wikipedia to find out that other countries call it a 450D, and plug that into the calculator, then try various focal lengths (the calculator offers a chart, but it didn't work for me), it turns out that I need an 8mm lens, which will give me a 108° 26′ 46″ field of view -- just about right.
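As a sanity check on those numbers, the rectilinear field-of-view formula can be computed directly. This is my own sketch, assuming the 450D's 22.2 mm sensor width and a simple rectilinear lens model (which, as noted below, fisheyes don't actually follow):

```javascript
// Horizontal field of view of a rectilinear lens:
// FOV = 2 * atan(sensorWidth / (2 * focalLength)), in degrees.
function fovDegrees(sensorWidthMm, focalLengthMm) {
  return 2 * Math.atan(sensorWidthMm / (2 * focalLengthMm)) * 180 / Math.PI;
}

// A full double rainbow needs 2 x 53.7 degrees (the 400 nm secondary bow).
const needed = 2 * 53.7;                  // 107.4 degrees
console.log(fovDegrees(22.2, 8).toFixed(1), needed);   // ~108.4 vs 107.4

// For comparison, the 18-55mm kit lens at its widest is well short of
// even the 84 degrees a single rainbow needs:
console.log(fovDegrees(22.2, 18).toFixed(1));          // ~63 degrees
```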

[Double rainbow with the Rokinon 8mm fisheye] So that's what I ordered -- a Rokinon 8mm fisheye. And it turns out to be far wider than I need -- apparently the actual field of view in fisheyes varies widely from lens to lens, and this one claims to have a 180° field. So the focal length calculator isn't all that useful. At any rate, this lens is plenty wide enough to capture those double rainbows, as you can see.

About those books

By the way, that book I linked to earlier is apparently out of print and has become ridiculously expensive. Another excellent book on atmospheric phenomena is Light and Color in the Outdoors by Marcel Minnaert (I actually have his earlier version, titled The Nature of Light and Color in the Open Air). Minnaert doesn't give the useful table of frequencies and angles, but he has lots of other fun and useful information on rainbows and related phenomena, including detailed instructions for making rainbows indoors if you want to measure angles or other quantities yourself.

Bullet used in NASA Tensegrity Robotics Toolkit, book Multithreading for Visual Effects



NASA is using Bullet in their new open source Tensegrity Robotics Toolkit. You can find more information and video link here: http://bulletphysics.org/Bullet/phpBB3/viewtopic.php?f=17&t=9978

The new book Multithreading for Visual Effects includes a chapter on the OpenCL optimizations for upcoming Bullet 3.x. Other chapters include multithreading development experiences from OpenSubDiv, Houdini, Pixar Presto and Dreamworks Fluids and LibEE. You can get it at the publisher AK Peters/CRC Press or at Amazon.

Development on upcoming Bullet 2.83 and Bullet 3.x is making good progress, hopefully an update follows soon.

Chinese version of Synfig Training Package

Synfig Training Package is available in Chinese language now!...

October 01, 2014

Ivan Mahonin is back

We are happy to announce that our full time developer Ivan Mahonin is back to Synfig development!...

September 30, 2014

GTK+ widget templates now in Javascript

Let's get the features in early!

If you're working on a Javascript application for GNOME, you'll be interested to know that you can now write GTK+ widget templates in gjs.

Many thanks to Giovanni for writing the original patches. And now to a small example:

const Gtk = imports.gi.Gtk;
const Lang = imports.lang;

const MyComplexGtkSubclass = new Lang.Class({
    Name: 'MyComplexGtkSubclass',
    Extends: Gtk.Grid,
    Template: 'resource:///org/gnome/myapp/widget.xml',
    Children: ['label-child'],

    _init: function(params) {
        this.parent(params);

        this._internalLabel = this.get_template_child(MyComplexGtkSubclass,
                                                      'label-child');
    }
});

And now you just need to create your widget:

let content = new MyComplexGtkSubclass();
content._internalLabel.set_label("My updated label");

You'll need gjs from git master to use this feature. And if you see anything that breaks, don't hesitate to file a bug against gjs in the GNOME Bugzilla.

September 29, 2014

New video encoding features

Synfig Studio just got improvements for video encoding. Check out the demonstration video and development snapshots inside....

Shipping larger application icons in Fedora 22

In GNOME 3.14 we show any valid application in the software center with an application icon of 32×32 or larger. Currently a 32×32 icon has to be padded with 16 pixels of whitespace on all 4 edges, and also has to be scaled ×2 to match other UI elements on HiDPI screens. This looks very fuzzy and out of place and lowers the quality of an otherwise beautiful installation experience.

For GNOME 3.16 (Fedora 22) we are planning to increase the minimum icon size to 48×48, with recommended installed sizes of 16×16, 24×24, 32×32, 48×48 and 256×256 (or SVG in some cases). Modern desktop applications typically ship multiple sizes of icons in known locations, and it’s very much the minority of applications that only ship one small icon.

Soon I’m going to start nagging upstream maintainers to install larger icons than 32×32. If you’re re-doing the icon, please generate a 256×256 or 64×64 icon with alpha channel, as the latter will probably be the minimum size for F23 and beyond.

At the end of November I’ll change the minimum icon size in the AppStream generator used for Fedora so that applications not fixed will be dropped from the metadata. You can of course install the applications manually on the command line, but they won’t be visible in the software center until they are installed.

If you’re unclear on what needs to be done in order to be listed in the AppStream metadata, refer to the guidelines or send me email.

September 28, 2014

London Zoo photos

Visited the London Zoo for the first time and took a few photos.

Trip to Nuremberg and Munich

This month I visited my friend and colleague Garrett in Germany. We visited the Christmas markets there. Lots of fun. Here are some pictures.

Petroglyphs, ancient and modern

In the canyons below White Rock there are many wonderful petroglyphs, some dating back many centuries, like this jaguar: [jaguar petroglyph in White Rock Canyon]

as well as collections like these:
[pictographs] [petroglyph collection]

Of course, to see them you have to negotiate a trail down the basalt cliff face. [Red Dot trail]

Up the hill in Los Alamos there are petroglyphs too, on trails that are a bit more accessible ... but I suspect they're not nearly so old. [petroglyph face]

Getting Around in GIMP - Luminosity Masks Revisited

Brorfelde landscape by Stig Nygaard (cb)
After adding an aggressive curve along with a mid-tone luminosity mask.

I had previously written about adapting Tony Kuyper’s Luminosity Masks for GIMP. I won’t re-hash all of the details and theory here (just head back over to that post and brush up on them there), but rather I’d like to re-visit them using channels. Specifically to have another look at using the mid-tones mask to give a little pop to images.

The rest of my GIMP tutorials can be found here:
Getting Around in GIMP
Original tutorial on Luminosity Masks:
Getting Around in GIMP - Luminosity Masks

Let’s Build Some Luminosity Masks!

The way I approached building the luminosity masks previously was to create them as a function of layer blending modes. In this re-visit, I’d like to build them from selection sets in the Channels tab of GIMP.

For the Impatient:
I’ve also written a Script-Fu that automates the creation of these channels mimicking the steps below.

Download from: Google Drive

Download from: GIMP Registry (registry.gimp.org)

Once installed, you’ll find it under:
Filters → Generic → Luminosity Masks (patdavid)
Yet another reason to love open-source - Saul Goode over at this post on GimpChat updated my script to run faster and cleaner.
You can get a copy of his version at the same Registry link above.
(Saul’s a bit of a Script-Fu guru, so it’s always worth seeing what he’s up to!)

We’ll start off in a similar way as we did previously.

Duplicate your base image

Either through the menus, or by Right-Clicking on the layer in the Layer Dialog:
Layer → Duplicate Layer
Pat David GIMP Luminosity Mask Tutorial Duplicate Layer

Desaturate the Duplicated Layer

Now desaturate the duplicated layer. I use Luminosity to desaturate:
Colors → Desaturate…

Pat David GIMP Luminosity Mask Tutorial Desaturate Layer

This desaturated copy of your color image represents the “Lights” channel. What we want to do is to create a new channel based on this layer.

Create a New Channel “Lights”

The easiest way to do this is to go to your Channels Dialog.

If you don’t see it, you can open it by going to:
Windows → Dockable Dialogs → Channels

Pat David GIMP Luminosity Mask Tutorial Channels Dialog
The Channels dialog

On the top half of this window you’ll see an entry for each channel in your image (Red, Green, Blue, and Alpha). On the bottom will be a list of any channels you have previously defined.

To create a new channel that will become your “Lights” channel, drag any one of the RGB channels down to the lower window (it doesn’t matter which - they all have the same data due to the desaturation operation).

Now rename this channel to something meaningful (like “L” for instance!), by double-clicking on its name (in my case it’s called “Blue Channel Copy”) and entering a new one.

This now gives us our “Lights” channel, L :

Pat David GIMP Luminosity Mask Tutorial L Channel

Now that we have the “Lights” channel created, we can use it to create its inverse, the “Darks” channel...

Create a New Channel “Darks”

To create the “Darks” channel, it helps to realize that it should be the inverse of the “Lights” channel. We can get this selection through a few simple operations.

We are going to basically select the entire image, then subtract the “Lights” channel from it. What is left should be our new “Darks” channel.

Select the Entire Image

First, have the entire image selected:
Select → All

Remember, you should be seeing the “marching ants” around your selection - in this case the entire image.

Subtract the “Lights” Channel

With the entire image selected, now we just have to subtract the “Lights” channel. In the Channels dialog, just Right-Click on the “Lights” channel, and choose “Subtract from Selection”:

Pat David GIMP Luminosity Mask Tutorial L Channel Subtract

You’ll now see a new selection on your image. This selection represents the inverse of the “Lights” channel...

Create a New “Darks” Channel from the Selection

Now we just need to save the current selection to a new channel (which we’ll call... Darks!). To save the current selection to a channel, we can just use:
Select → Save to Channel

This will create a new channel in the Channel dialog (probably named “Selection Mask copy”). To give it a better name, just Double-Click on the name to rename it. Let’s choose something exciting, like “D”!

More Darker!

At this point, you’ll have a “Lights” and a “Darks” channel. If you wanted to create some channels that target darker and darker regions of the image, you can subtract the “Lights” channel again (this time from the current selection, “Darks”, as opposed to the entire image).

Once you’ve subtracted the “Lights” channel again, don’t forget to save the selection to a new channel (and name it appropriately - I like to name subsequent masks things like, “DD”, in this case - if I subtracted again, I’d call the next one “DDD” and so on…).

I’ll usually make 3 levels of “Darks” channels, D, DD, and DDD:

Pat David GIMP Luminosity Mask Tutorial Darks Channels
Three levels of Dark masks created.

Here’s what the final three different channels of darks looks like:

Pat David GIMP Luminosity Mask Tutorial All Darks Channels
The D, DD, and DDD channels

Lighter Lights

At this point we have one “Lights” channel, and three “Darks” channels. Now we can go ahead and create two more “Lights” channels, to target lighter and lighter tones.

The process is identical to creating the darker channels, just in reverse.

Lights Channel to Selection

To get started, activate the “Lights” channel as a selection:

Pat David GIMP Luminosity Mask Tutorial L Channel Activate

With the “Lights” channel as a selection, now all we have to do is subtract the “Darks” channel from it. Then save that selection as a new channel (which will become our “LL” channel), and so on…

Pat David GIMP Luminosity Mask Tutorial Subtract D Channel
Subtracting the D channel from the L selection

To get an even lighter channel, you can subtract D one more time from the selection so far as well.

Here are what the three channels look like, starting with L up to LLL:

Pat David GIMP Luminosity Mask Tutorial All Lights Channels
The L, LL, and LLL channels

Mid Tones Channels

By this point, we’ve got 6 new channels now, three each for light and dark tones:

Pat David GIMP Luminosity Mask Tutorial L+D Channels

Now we can generate our mid-tone channels from these.

The concept of generating the mid-tones is relatively simple - we’re just going to intersect dark and light channels to produce what’s left: mid-tones.

Intersecting Channels for Midtones

To get started, first select the “L” channel, and set it to the current selection (just like above). Right-Click → Channel to Selection.

Then, Right-Click on the “D” channel, and choose “Intersect with Selection”.

You likely won’t see any selection active on your image, but it’s there, I promise. Now as before, just save the selection to a channel:
Select → Save to Channel

Give it a neat name. Sayyy, “M”? :)

You can repeat for each of the other levels, creating an MM and MMM if you’d like.

Now remember, the mid-tones channels are intended to isolate mid values as a mask, so they can look a little strange at first glance. Here’s what the basic mid-tones mask looks like:

Pat David GIMP Luminosity Mask Tutorial Mid Channel
Basic Mid-tones channel

Remember, black tones in this mask represent full transparency to the layer below, while white represents full opacity, from the associated layer.
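For the curious, the whole family of channels above can be modeled numerically. This is my own illustrative sketch of the selection arithmetic as I understand it (subtract clamps at zero, intersect keeps the minimum), not code from the tutorial; values run from 0 (black, deselected) to 1 (white, selected):

```javascript
// Per-pixel model of the selection operations used to build the masks.
const subtract = (a, b) => Math.max(0, a - b);   // "Subtract from Selection"
const intersect = (a, b) => Math.min(a, b);      // "Intersect with Selection"

// For a pixel whose desaturated value is L:
function masksFor(L) {
  const D = subtract(1, L);    // Select All, minus the Lights channel
  const DD = subtract(D, L);   // Darks minus Lights again (darker still)
  const LL = subtract(L, D);   // Lights minus Darks (lighter still)
  const M = intersect(L, D);   // Lights intersected with Darks: mid-tones
  return { L, D, DD, LL, M };
}

// A mid-grey pixel (0.5) sits fully inside the mid-tones mask, while a
// bright pixel (0.9) barely registers in it:
console.log(masksFor(0.5).M, masksFor(0.9).M);
```

This also shows why the masks are self-feathering: every value shades smoothly between 0 and 1 rather than being a hard on/off selection.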

Using the Masks

The basic idea behind creating these channels is that you can now mask particular tonal ranges in your images, and the mask will be self-feathering (due to how we created them). So we can now isolate specific tones in the image for manipulation.

Previously, I had shown how this could be used to do some simple split-toning of an image. In that case I worked on a B&W image, and tinted it. Here I’ll do the same with our image we’ve been working on so far...

Split Toning

Using the image I’ve been working through so far, we have the base layer to start with:

Pat David GIMP Luminosity Mask Tutorial Split Tone Base

Create Duplicates

We are going to want two duplicates of this base layer. One to tone the lighter values, and another to tone the darker ones. We’ll start by considering the dark tones first. Duplicate the base layer:
Layer → Duplicate Layer

Then rename the copy something descriptive. In my example, I’ll call this layer “Dark” (original, I know):

Pat David GIMP Luminosity Mask Tutorial Split Tone Darks

Add a Mask

Now we can add a layer mask to this layer. You can either Right-Click the layer, and choose “Add Layer Mask”, or you can go through the menus:
Layer → Mask → Add Layer Mask

You’ll then be presented with options about how to initialize the mask. You’ll want to Initialize Layer Mask to: “Channel”, then choose one of your luminosity masks from the drop-down. In my case, I’ll use the DD mask we previously made:

Pat David GIMP Luminosity Mask Tutorial Add Layer Mask Split Tone

Adjust the Layer

Pat David GIMP Luminosity Mask Tutorial Split Tone Activate DD Mask
Now you’ll have a Dark layer with a DD mask that will restrict any modification you do to this layer to only apply to the darker tones.

Make sure you select the layer, and not its mask, by clicking on it (you’ll see a white outline around the active layer). Otherwise any operations you do may accidentally get applied to the mask, and not the layer.

At this point, we now want to modify the colors of this layer in some way. There are literally endless ways to approach this, bounded only by your creativity and imagination. For this example, we are going to tone the image with a cool teal/blue color (just like before), which combined with the DD layer mask, will restrict it to modifying only the darker tones.

So I’ll use the Colorize option to tone the entire layer a new color:
Colors → Colorize

To get a Teal-ish color, I’ll pull the Hue slider over to about 200:

Pat David GIMP Luminosity Mask Tutorial Split Tone Colorize
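As a rough numerical check of why a Hue near 200 reads as teal, here's a standard HSL-to-RGB conversion (my own sketch; GIMP's Colorize operates in a similar HSL space, so treat this only as an approximation of what the tool does):

```javascript
// Standard HSL -> RGB conversion (hue in degrees, s and l in 0..1).
function hslToRgb(hDegrees, s, l) {
  const k = n => (n + hDegrees / 30) % 12;
  const a = s * Math.min(l, 1 - l);
  const f = n => l - a * Math.max(-1, Math.min(k(n) - 3, 9 - k(n), 1));
  return [f(0), f(8), f(4)];   // [r, g, b], each in 0..1
}

// Hue 200 at mid saturation/lightness leans blue-green: b > g > r.
const [r, g, b] = hslToRgb(200, 0.5, 0.5);
console.log(r.toFixed(2), g.toFixed(2), b.toFixed(2));
```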

Now, pay attention to what’s happening on your image canvas at this point. Drag the Hue slider around and see how it changes the colors in your image. Especially note that the color shifts will be restricted to the darker tones thanks to the DD mask being used!

To illustrate, mouseover the different hue values in the caption of the image below to change the Hue, and see how it affects the image with the DD mask active:

Mouseover to change Hue to: 0 - 90 - 180 - 270

So after I choose a new Hue of 200 for my layer, I should be seeing this:

Pat David GIMP Luminosity Mask Tutorial Split Tone Dark Tinted

Repeat for Light Tones

Now just repeat the above steps, but this time for the light tones. So duplicate the base layer again, and add a layer mask, but this time try using the LL channel as a mask.

For the lighter tones, I chose a Hue of around 25 instead (more orange-ish than blue):

Pat David GIMP Luminosity Mask Tutorial Split Tone Light Tinted

In the end, here are the results that I achieved:

Pat David GIMP Luminosity Mask Tutorial Split Tone Result
After a quick split-tone (mouseover to compare to original)

The real power here comes from experimentation. I encourage you to try using a different mask to restrict the changes to different areas (try the LLL for instance). You can also adjust the opacity of the layers now to modify how strongly the color tones will affect those areas as well. Play!

Mid-Tones Masks

The mid-tone masks were very interesting to me. In Tony’s original article, he mentioned how much he loved using them to provide a nice boost to contrast and saturation in the image. Well, he’s right. It certainly does do that! (He also feels that it’s similar to shooting the image on Velvia).

Pat David GIMP Luminosity Mask Tutorial Mid Tones Mask
Let’s have a look.

I’ve deleted the layers from my split-toning exercise above, and am back to just the base image layer again.

To try out the mid-tones mask, we only need to duplicate the base layer, and apply a layer mask to it.

This time I’ll choose the basic mid-tones mask M.

What’s interesting about using this mask is that you can apply pretty aggressive curve modifications to it, and still keep the image from blowing out. We are only targeting the mid-tones.

To illustrate, I’m going to apply a fairly aggressive compression to the curves by using Adjust Color Curves:
Colors → Curves

When I say aggressive, here is what I’m referring to:

Pat David GIMP Luminosity Mask Tutorial Aggresive Curve Mid Tone Mask

Here is the effect it has on the image when using the M mid-tones mask:

Aggressive curve with Mid-Tone layer mask
(mouseover to compare to original)

As you can see, there is an increase in contrast across the image, as well as a nice little boost to saturation. You don’t need to worry about blowing out highlights or losing shadow detail, because the mask will not allow you to modify those values.

More Samples of the Mid-Tone Mask in Use

Pat David GIMP Luminosity Mask Tutorial
Pat David GIMP Luminosity Mask Tutorial
The lede image again, with another aggressive curve applied to a mid-tone masked layer
(mouseover to compare to original)

Pat David GIMP Luminosity Mask Tutorial
Red Tailed Black Cockatoo at f/4 by Debi Dalio on Flickr (used with permission)
(mouseover to compare to original)

Pat David GIMP Luminosity Mask Tutorial
Landscape Ballon by Lennart Tange on Flickr (cb)
(mouseover to compare to original)

Pat David GIMP Luminosity Mask Tutorial
Landscapes by Tom Hannigan on Flickr (cb)
(mouseover to compare to original)

Mixing Films

This is something that I’ve found myself doing quite often. It’s a very powerful method for combining color toning that you may like from different film emulations. Consider what we just walked through.

These masks allow you to target modifications of layers to specific tones of an image. So if you like the saturation of, say, Fuji Velvia in the shadows, but like the upper tones to look similar to Polaroid Polachrome, then these luminosity masks are just what you’re looking for!
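The tone-targeted stacking described above can be modeled in a few lines of Python. The velvia() and polachrome() functions here are made-up stand-ins for real film emulation curves; the point is how the darks and lights masks confine each look to its own tonal range.

```python
# Sketch of combining two "film" looks with luminosity masks.
# The toning functions are invented placeholders, not real emulations.

def darks_mask(l):
    return 1.0 - l      # strongest in the shadows

def lights_mask(l):
    return l            # strongest in the highlights

def velvia(l):
    """Pretend shadow treatment: deepen the tones slightly."""
    return l * 0.9

def polachrome(l):
    """Pretend highlight treatment: lift the tones slightly."""
    return min(1.0, l * 1.1)

def mix(l):
    """Stack the shadow look through the darks mask, then the highlight
    look through the lights mask, like stacking masked layers in GIMP."""
    out = l
    d = darks_mask(l)
    out = out * (1.0 - d) + velvia(l) * d
    lm = lights_mask(l)
    out = out * (1.0 - lm) + polachrome(l) * lm
    return out
```

Shadows come out darker (the “velvia” layer dominates where the darks mask is strong) while highlights come out lighter (the “polachrome” layer dominates there), and each look fades away smoothly where its mask approaches zero.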

Just a little food for thought and experimentation... :)

Stay tuned later in the week where I’ll investigate this idea in a little more depth.

In Conclusion

This is just another tool in our mental toolbox of image manipulation, but it’s a very powerful tool indeed. When considering your images, you can now look at them as a function of luminosity - with a neat and powerful way to isolate and target specific tones for modification.

As always, I encourage you to experiment and play. I’m willing to bet this method finds its way into at least a few people’s workflows in some fashion.

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, or make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t, it’s not going to affect me writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.

September 27, 2014

Attending the Vienna GNOME/.NET hackfest

Today I arrived in the always wonderful city of Vienna for the GNOME/.NET Hackfest, and met up and had dinner with the other GNOME and .NET fans.

SparkleShare has been stuck on GTK+2 for a while. Now that the C# bindings for GTK+3 are starting to get ready, and Bindinator is handling any other dependencies that need updating (like WebKit), it is finally time to take the plunge.

My goal this week is to make some good progress on the following things:

  1. Port SparkleShare's user interface to GTK+3.
  2. Integrate SparkleShare seamlessly with the GNOME 3 experience.

SparkleShare 1.2

Yesterday I made a new release of SparkleShare. It addresses several issues that may have been bugging you, so it's worth upgrading. Depending on how well things go this week it may be the last release based on GNOME 2 technologies. Yay for the future!

SparkleShare 1.0

I’m delighted to announce the availability of SparkleShare 1.0!

What is SparkleShare?

SparkleShare is an Open Source, self-hosted file synchronisation and collaboration tool, available for Linux distributions, Mac, and Windows.

SparkleShare creates a special folder on your computer in which projects are kept. All projects are automatically synced to their respective hosts (you can have multiple projects connected to different hosts) and to your team’s SparkleShare folders when someone adds, removes or edits a file.

The idea for SparkleShare sprouted about three years ago at the GNOME Usability Hackfest in London (for more background on this read The one where the designers ask for a pony).

SparkleShare uses the version control system Git under the hood, so people collaborating on projects can make use of existing infrastructure, and setting up a host yourself will be easy enough. Using your own host gives you more privacy and control, as well as lots of cheap storage space and higher transfer speeds.

Like every piece of software it’s not bug free, even though it has hit 1.0. But it’s been tested for a long time now and all reproducible and known major issues have been fixed. It works reliably and the issue tracker is mostly filled with feature requests now.

The biggest sign that it was time for a 1.0 release was the fact that Lapo hasn’t reported brokenness for a while now. This can either mean that SparkleShare has been blessed by a unicorn or that the world will end soon. I think it’s the former.


For those of you that are not (that) familiar with SparkleShare, I’ll sum up its most important features:

The SparkleShare folder

This is where all of your projects are kept. Everything in this folder will be automatically synced to the remote host(s), as well as to your other computers and everyone else connected to the same projects. Are you done with a project? Simply delete it from your SparkleShare folder.

The status icon

The status icon gives you quick access to all of your projects and shows you what’s going on regarding the synchronisation process. From here you can connect to existing remote projects and open the recent changes window.

The setup dialog

Here you can link to a remote project. SparkleShare ships with a couple of presets. You can have multiple projects syncing to different hosts at the same time. For example, I use this to sync some public projects with GitHub, some personal documents with my own private VPS, and work stuff with a host on the intranet.

Recent changes window

The recent changes window shows you everything that has recently changed and by whom.


The history view lets you see who has edited a particular file before and allows you to restore deleted files or revert to a previous version.

Conflict handling

When a file has been changed by two people at the same time and causes a conflict, SparkleShare will create a copy of the conflicting file and add a timestamp. This way changes won’t get accidentally lost and you can either keep one of the files or cherry-pick the changes you want.
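For illustration, here’s roughly what generating such a timestamped conflict-copy name could look like; the exact naming scheme SparkleShare uses may well differ from this sketch.

```python
import os
from datetime import datetime

def conflict_copy_name(path, timestamp=None):
    """Build a timestamped name for the copy of a conflicting file.
    (Illustrative only; SparkleShare's real naming scheme may differ.)"""
    timestamp = timestamp or datetime.now()
    base, ext = os.path.splitext(path)
    return "%s (conflict copy %s)%s" % (
        base, timestamp.strftime("%Y-%m-%d %H%M%S"), ext)

print(conflict_copy_name("notes/plan.txt", datetime(2013, 9, 27, 14, 30, 0)))
# → notes/plan (conflict copy 2013-09-27 143000).txt
```

Both copies then sync as ordinary files, so nothing is lost and either side can simply delete the copy they don’t want.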


If someone makes a change to a file a notification will pop up saying what changed and by whom.

Client side encryption

Optionally you can protect a project with a password. When you do, all files in it will be encrypted locally using AES-256-CBC before being transferred to the host. The password is only stored locally, so if someone cracked their way into your server it will be very hard (if not impossible) to get the files’ contents. This is on top of the file transfer mechanism, which is already encrypted and secure. You can set up an encrypted project easily with Dazzle.

Dazzle, the host setup script

I’ve created a script called Dazzle that helps you set up a Linux host to which you have SSH access. It installs Git, adds a user account and configures the right permissions. With it, you should be able to get up and running by executing just three simple commands.

Plans for the future

Something that comes up a lot is the fact that Git doesn’t handle large (binary) files well. Git also stores a database of all the files including history on every client, causing it to use a lot of space pretty quickly. Now this may or may not be a problem depending on your use case. Nevertheless I want SparkleShare to be better at the “large backups of bulks of data” use case.

I’ve stumbled upon a nice little project called git-bin in some obscure corner of GitHub. It seems like a perfect match for SparkleShare. Some work needs to be done to integrate it and to make sure it works over SSH. This will be the goal for SparkleShare 2.0, which can follow pretty soon (hopefully in months, rather than years).

I really hope contributors can help me out in this area. The GitHub network graph is feeling a bit lonely. Your help can make a big difference!

Some other fun things to work on may be:

  1. Saving the modification times of files
  2. Creating a binary Linux bundle
  3. SparkleShare folder location selection
  4. GNOME 3 integration
  5. …other things that you may find useful.

If you want to get started on contributing, feel free to visit the IRC channel: #sparkleshare on irc.gnome.org so I can answer any questions you may have and give support.


I’d like to thank everyone who has helped testing and submitted patches so far. SparkleShare wouldn’t be nearly as far as it is now without you. Cheers!

Vienna GNOME/.NET hackfest report

I had a great time attending the GNOME/.NET hackfest last month in Vienna. My goal for the week was to port SparkleShare's user interface to GTK+3 and integrate with GNOME 3.

A lot of work got done. Many thanks to David and Stefan for enabling this by the smooth organisation of the space, food, and internet. Bertrand, Stephan, and Mirco helped me get set up to build a GTK+3-enabled SparkleShare pretty quickly. The porting work itself was done shortly after that, and I had time left to do a lot of visual polish and behavioural tweaks to the interface. Details matter!

Last week I released SparkleShare 1.3, a Linux-only release that includes all the work done at the hackfest. We're still waiting for the dependencies to be included in the distributions, so the only way you can use it is to build from source yourself for now. Hopefully this will change soon.

One thing that's left to do is to create a gnome-shell extension to integrate SparkleShare into GNOME 3 more seamlessly. Right now it still has to use the message tray area, which is far from optimal. So if you're interested in helping out with that, please let me know.

Tomboy Notes

The rest of the time I helped others with design work: I helped Mirco with the Smuxi preference dialogues, putting my love for the Human Interface Guidelines to use, and started a redesign of Tomboy Notes. Today I sent the new design to their mailing list with the work done so far.

Sadly there wasn't enough time for me to help out with all of the other applications… I guess that's something for next year.


I had a fun week in Vienna (which is always lovely no matter the time of year) and met many great new people. Special thanks to the many sponsors that helped make this event possible: Norkart, Collabora, Novacoast IT, University of Vienna and The GNOME Foundation.

September 26, 2014

Fanart by Anastasia Majzhegisheva – 16

Young Marya Morevna. Artwork by Anastasia Majzhegisheva.


September 25, 2014

AppStream Progress in September

Last time I blogged about AppStream I announced that over 25% of applications in Fedora 21 were shipping the AppData files we needed. I’m pleased to say in the last two months we’ve gone up to 45% of applications in Fedora 22. This is thanks to a lot of work from Ryan and his friends, writing descriptions, taking screenshots and then including them in the fedora-appstream staging repo.

So fedora-appstream doesn’t sound very upstream or awesome. This week I’ve sent another 39 emails, and opened another 42 bugs (requiring 17 new Bugzilla/Trac/random-forum accounts to be opened). Every single file in the fedora-appstream staging repo has been sent upstream in one form or another, and I’ve been adding an XML comment to each one for a rough audit log of what happened where.

Some have already been accepted upstream and we’re waiting for a new tarball release; when that happens we’ll delete the file from fedora-appstream. Some upstreams are really dead, and have no upstream maintainer, so they’ll probably languish in fedora-appstream until for some reason the package FTBFS and gets removed from the distribution. If the package gets removed, the AppData file will also be deleted from fedora-appstream.

Also, in the process I’ve found lots of applications which are shipping AppData files upstream, but for one reason or another are not being installed in the binary rpm file. If you had to tell me I was talking nonsense in an email this week, I apologize. For my sins I’ve updated over a dozen packages to the latest versions so the AppData file is included, and fixed quite a few more.

Fedora 22 is on track to be the first release that mandates AppData files for applications. If upstream doesn’t ship one, we can either add it in the Fedora package, or in fedora-appstream.

September 24, 2014

DevAssistant Heuristic Review Part 2: Inventory of Issues

This is Part 2 of a 3-part blog series; this post builds on materials featured in an earlier post called DevAssistant Heuristic Review Part 1: Use Case Walkthroughs.

In this part of the DevAssistant heuristic review, we’ll walk through an itemized list of the issues uncovered by the use case-based walkthrough we did in part 1.

Since this is essentially just a list of issues, let me preface it by explaining how I came up with this list. Basically, I combed through the walkthrough and noted any issues that were encountered and mentioned in it, large and small. The result of this was a flat list of issues. Next, I went through the list and tried to determine a set of categories to organize them under by grouping together issues that seemed related. (You could do this in a group setting via a technique called “affinity mapping” – another fancy UX term that in essence just means writing everything out on post-its and sticking related post-it notes together. A fancy name for playing with sticky pieces of paper :) )

Breaking the issues into categories serves a few purposes:

  • It makes the list easier to read through and understand, since related issues are adjacent to each other.
  • It breaks up the list so you can review it in chunks – if you’re only interested in a certain area of the UI, for example, you can home in on just the items relevant to you.
  • It tends to expose weak areas in great need of attention – e.g., if you have 5 categories and one of them is attached to a particular screen and the other 4 are more generic, you know that one screen needs attention (and should probably be a high priority for redesign.)

All right, so here is the list!

Base UI Design

These issues apply not to specific dialogs or screens, but more to the basic mechanics of the UI and the mental model it creates for users.

  1. Takes time to get situated: When I first look at this UI, I’m not sure where to get started. Because the first tab says “Create Project” and the second says “Modify Project,” I get the impression that you might progress through the tabs in order from left-to-right. From poking around, though, this doesn’t appear to be the case. So at initial glance, it’s hard for me to understand the overall flow or direction I’m meant to go in through the interface.
  2. Projects created/imported feel very disconnected from DevAssistant: It feels like there is no follow-up once you create or import a project via DevAssistant. I expected to be able to browse a list of the projects I’d imported/created using DevAssistant after doing so, but there doesn’t appear to be any link to them. DevAssistant seems to forget about them or not recognize them afterwards. Sure, they live on the file system, but they may live in all different places, and I may need some reminders / instruction about how to work with them – the filesystem doesn’t provide any context on that front.
  3. Little user guidance after project creation / import: After the user creates a project, all they really get is a green “Done” notification on the setup wizard window. I think there’s a lost opportunity here to guide the user through everything you set up for them so they know how to take advantage of it. Maybe have a little guide (easily dismissed or optional) that walks the user through the options they selected? For example, if they chose the vim option, have a section that activates on the post-project creation screen that talks about how DevAssistant customizes vim and how they can make use of it in their workflow. Basically, nudge the users towards what they need to do next! Offer to open up Eclipse or vim for them? Offer to open up the project in their file manager? Etc. etc.

Clarity / Context

These are issues where the UI isn’t clear about what information it needs or what is happening or why the user would pick a specific option. The cop-out fix for these types of issues is to write a lot of documentation; the right way to fix them is to redesign! If an option is confusing to explain and benefits all users, just turn it on by default if it’s not harmful instead of putting the burden of selecting it on the user. If the pros/cons of a config option aren’t clear, explain them – add some contextual documentation right into the app via tool tips or more clear text in the UI.

  1. Tab names unclear: The names of the tabs across the top don’t give me a good idea of what I can actually do. What does “prepare environment” mean? Looking at the interface, I think that going through one of the wizards under “Create Project,” would prepare an environment for the selected type of project, so why would I click on “Prepare Environment?”
  2. Prepare Environment options confusing: When I look at the options under “Prepare Environment,” I see “Custom Project” (what does this mean vs. the “Custom Task” tab?) and “DevAssistant.” These options don’t help me understand what “Prepare Environment” means. :-/
  3. DevAssistant button under Prepare Environment confusing: Why would I want to set up the environment for DevAssistant and checkout sources? Is the “Dev Assistant” button under “Prepare Environment” meant specifically for developers who work on DevAssistant itself?
  4. Some options could use more explanation / optimization: Some of the options in the dialogs could use more explanation but they don’t have any tooltips or anything. For example, why would I choose Python 2 vs. Python 3 when creating a new project? What are the pros/cons? How do I take advantage of the customizations offered for vim so I can determine if they’re worth setting up? (Or why wouldn’t I want to set them up? If it doesn’t take up much disk space and it’s a good value add, why not just do it if I have a vimrc?)
  5. Not sure what “deps only” means: This could be my ignorance not being a real developer, but most if not all of the config dialogs I ran into had a ‘Deps-Only’ option and it’s still unclear to me what that actually means. I get dependencies in the context of yum/RPM, and I get them in the context of specific stacks, but I’m not sure how DevAssistant would generically determine them. Also, what happens if I don’t check off ‘Only install dependencies’ and check off everything else? Do the dependencies get installed or not? If I check off ‘Only install dependencies’ and also check off everything else, does that mean none of the other things I checked off happen? The grammar of the string makes it ambiguous and the option itself could use some wordsmithing to be a bit clearer.
  6. What happens if you choose an option that requires something not installed? It’s not clear to me what happens if you pick vim or Eclipse, for example, in one of the options dialogs on a system that does not have them installed. Does the project setup fail, or does it pull in those apps? Maybe DevAssistant could check which of the development environments it supports are already installed and gray out the ones that aren’t, with a way to select them while explicitly choosing to install the development environment as well?
  7. Users need appropriate context for the information you’re asking them: There were a few cases, particularly related to connecting to GitHub accounts, where the user is asked for their name and email address. It isn’t clear why this information is being asked for, and how it’s going to be used. For example, when you ask for my name, are you looking for my full name (Máirín Duffy,) just my first name (Máirín,) or my nick (mizmo?) (And can you support fadas? Or should I type “Mairin” instead of “Máirín”?)

Human Factors

This is a bucket for issues that put unnecessary burden / inconvenience on the user. A good example of this in web application design is a 3 levels deep javascript tiered dropdown menu that disappears if you mouse off of it. :) It makes the user physically have to take action or go through more steps than necessary to complete a task.

  1. Hover help text not easily discovered / pain to access: After a while of poking around, I notice that there are nice explanations of what each button under each tab means in a hover. My initial thought – putting this valuable information under a hover makes it more challenging to access, especially since you can only read the description for one item at a time. (This makes it hard to compare items when you’re trying to figure out which one is the right one for you.)
    Hover tips for create project buttons.


  2. Window Jumps – This happens when clicking on buttons in the main setup wizard window and new windows are launched. For example, go to the “Modify Project” tab. Click on Eclipse Import or Vim Setup. It moves the DevAssistant window up and to the right. Click back. The window remains up and to the right. Why did it move the window? It should remember where the user placed the window and stay there, I think.
  3. Project directory creation defaults to user’s homedir: I think a lot of users try to keep their home directory neat and orderly – defaulting to creating / importing projects to users’ home directories seems the wrong approach. One thing to try would be to make a devassistant directory during the first run of the application, and defaulting to creating and importing projects to there. Another option, which could be done in conjunction with defaulting to a ~/devassistant directory, could be to ask the user during first run or have a configuration item somewhere so that the user can set their preferred repository directory in one place, rather than every time they create/import a project.
  4. No way to create new project directory in file chooser for project directory selection: In a lot of the specific project creation dialogs, there’s an option to specify a project directory other than the user’s home. However, the file chooser that pops up when you click on “Browse” doesn’t allow you to create a new directory. This makes it more of a hassle (you have to go outside the DevAssistant application) to create a fresh directory to store your projects in.
  5. Holy modal dialogs, Batman! I encountered a few modal dialogs during the process that made interactions with the application a bit awkward. Some examples:
    • There was a very large Java error dialog that was hidden under another window on my desktop, and it made buttons on the main progress/setup window unclickable so I couldn’t dismiss the main window without dismissing the Java error window. (And the Java error window didn’t have any button, not even an ‘X’ in the upper right corner, to dismiss it.) (See Use Case 2 for more details on this specific scenario.)
      This was too long to display fully on my 2560x1440 monitor... the button to close it wasn't accessible. Luckily I know how to Alt+F4.


    • During the Django setup process (Use Case 1) and during the C project use case (Use Case 2), there was a small modal dialog that popped up a little after the setup process began, asking for permission to install 20 RPM packages. It halted the progress being made – similar to how old Anaconda would pop up little dialogs during the install process. It’s better to ask questions like this up-front.
    • Another time during the Django setup process (Use Case 1), there was a tiny modal dialog that aligned to the upper left corner of the screen. I completely missed it, and this halted the project creation process. (It was a dialog asking for my name, related to GitHub account connection.)


  6. If the user fills something out incorrectly, it’s not possible to recover in some cases. This is just another vote for asking users needed information up front and avoiding modal dialogs – I filled out the wrong email address in one of the pop-up modals during the project creation process and only realized it too late.
  7. Git name / email address very sticky and not easily fixed: During Use Case 1, I was prompted for my name and didn’t realize it was the name that would be attached to my git commits. Once it’s input though, there’s no way to update it. It’s unclear where it’s stored – I blew away the project I was creating when I input it, thinking it was stored in the .git/config file, but DevAssistant still remembered it. Configuration items like this that apply to all projects should be editable within the UI if they can be input in the UI.
  8. Text fields for inputting long paths are unnecessarily short: This is pointed out specifically in Use Case 2, but I think all the setup dialogs were affected by it. The text field for entering the path to your project, for example, was only long enough in my case for “/home/duffy/Repo” to be visible. The field should be longer than that by default.

Status / Transactional

These are issues that revolve around the current status of the task the user is trying to complete. An example of a common issue related to this in UIs in general is lack of progress bars for long-running tasks.

  1. When main window is waiting on user input, it should be made clear: During the Django setup process in Use Case 1, I had opted to use GitHub for the new project I was creating. After filling out the config screen and pressing ‘Run,’ it looked like the project creation process was going to be a while so I multi-tasked and worked on something else. I checked on the main window a few times – at a certain point, it said “In progress” but it wasn’t actually doing anything: a tiny little window had popped up in the upper-left corner, halting the whole process. It would have been better to ask me for that information up front, so as not to halt the process. But it also would have been good, if the main window is waiting on something, for it to let the user know it’s waiting and isn’t “In progress.” (Maybe it could say, “Paused, waiting on user input”?)

  2. Ask users all of the information you need up front, so they can walk away from the computer while you set things up: Speaking of that last point – this was an issue we had in the old Anaconda, too. During the installation process, sometimes error messages or dialogs would pop up during the install process and they would halt install progress. So the user may have gone to get a coffee, come back, and everything wasn’t done because anaconda sat there a lot of the time asking if it was okay to import some GPG key. When you have a long-running process (a git repo sync, for example,) I think it’s better to ask the user for everything you need up front rather than as the application needs it. It’s akin to someone coming up to your desk to ask a question, going away for a couple of minutes, then tapping you on the shoulder and asking you another question, then coming back 3 minutes later to ask another one – people like that are annoying, right?! (Well, small children get away with this though, being as cute as they are. :) )
  3. Transaction status unclear after errors or even after completion: When I canceled the Django project creation in Use Case 1 because I input the wrong email address, I wasn’t sure of the state of the project and how to proceed cleanly. What directories had been created on my system? What packages had been installed? Etc. I would have liked to clean up if possible, but it wasn’t clear to me what had happened and it didn’t seem like there was an easy way to undo it. Ideally, after hitting cancel, the application would have explained to me something about the state of the system and recommended a forward course of action (is it better to blow everything away and start over? Or re-run the command using the same parameters?)
  4. Little/Vague progress indication: There’s a yellow “in progress” string on the wizard screen after you hit run, and the cursor goes into spinner mode if you focus on that dialog, but there could be better progress indication. A spinner built into the window (here’s an example) is a good option if it’s not possible to do a progress bar. Progress bars are a little better in that they give the user an indication of how much time it might take to complete.

Layout / Aesthetics

These are issues around the actual layout, typography, padding, arrangement of widgets, widget choices in a given screen or dialog. They tend to be surface issues, and usually it’s a better use of time to rethink and redesign a dialog completely rather than fix these kinds of issues only on the surface (which could be akin to putting a different shade of lipstick on.)

  1. Package install confirmation dialog layout issues: So this is pretty surface-level critique – but there’s a few issues with the layout of the dialog asking the user if it’s okay to install packages via Yum. Here’s what it looks like initially:

    Usually the button that moves you forward is the farthest to the right, and the cancel is to the left. Here, the ‘Show packages’ button is on the right. I think maybe ‘Show packages’ should not be on the same line as ‘Ok’ and ‘Cancel;’ instead it could be a link above the buttons that expands into the full list (limited to a specific max vertical height, of course, so as not to push the buttons out of reach and make them unclickable!) The list itself has numbers, ‘1’ and ‘2,’ before some of the package names – it’s unclear to me why they are there. Also, the list is very long but there’s no way to filter or search through it, so that might be a nice thing to offer. What are people concerned about when evaluating a list of packages to be installed on their system? The number of packages and total size might be useful – the size isn’t listed so that could be a good piece of information to add. I could go on and on about this dialog, but hopefully this demonstrates that it could use some more iteration.
  2. Layout of options via linear checkbox list can cause confusion about relationship between options: In several cases during the walkthrough, I became unsure of how to fill out the setup wizard for various project types because I wasn’t sure how the different checkbox options would work. In one case, it seemed as if particular selections of checkboxes could never result in success (e.g., in use case 4 when I tried to create a custom project with only the ‘deps only’ checkbox selected.) In other cases, some of the options seemed to be vaguely related to each other (Eclipse or vim) and others seemed less related (github vs python 3 or 2.) I think probably each screen needs to be reviewed and potentially rearranged a bit to better represent the dependencies between the checkboxes, base requirements, etc. For example, categorizing the options – put Eclipse and VIM under a “Development Environment” category, put git / github / etc. options under a “Version Control” category, etc.

Feature Set

These issues are around the features that are available (whether or not they are appropriate / useful or accessible / placed correctly) as well as features that might be needed that are missing.

  1. No way to import pre-existing project that wasn’t created with DevAssistant? This one seems like a bit of a show stopper. I tried to import a project that wasn’t created using DevAssistant (which is the majority of upstream projects at this point,) and it didn’t work. It bailed out after detecting there’s no .devassistant in the repo. If there is a way to do this, it’s not clear to me having gone through the UI. It would be nice if it could import an existing project and create a .devassistant for it and help manage it.
  2. The button to create a github project is under the ‘Modify Project’ tab, not the ‘Create Project’ tab: This is a bit of an oddity – the create project tab is more focused on languages / frameworks… creating a new project in GitHub is definitely creating a new project though, so it doesn’t make sense for it to be under “Modify Project.”


Bugs

These are just outright bugs that don’t have so much to do with the UI specifically.

  1. Screen went black during root authentication during Django setup: I think this was some kind of bug, and wasn’t necessarily DevAssistant’s fault.
  2. Could not create a Django project on development version of Fedora: My Django project creation failed. The error message I got was “Package python-django-bash-completion-1.6.6-1.fc21.noarch.rpm is not signed. Failed to install dependencies, exiting.” Now, while this is a little unfair to point out since I was using F21 Alpha TC4 – DevAssistant should probably be able to fail more gracefully when the repos are messed up. When errors happen, the best error messages explain what happened and what the user could try to do to move forward. There’s no suggestion here for what to do. I tried both the “Back” and “Main window” buttons. Probably, at the least, it should offer to report the bug for me, and give me explicit suggestions, e.g., “You can click ‘Back’ to try again in case this is a temporary issue, or you may click ‘Main Window’ to work on a different project.” It probably could offer a link to the documentation or maybe some other help places (ask.fedoraproject.org questions tagged DevAssistant maybe?)
  3. Unable to pull from or push to github: During the Use Case 1 walkthrough, I was left with a repo that I couldn’t pull from or push to github. It looks like DevAssistant created a new RSA key, successfully hooked it up to my Github account, but for some reason the system was left in a state that couldn’t interact with github.
  4. C project / Eclipse project creation didn’t work: There was a Java/Eclipse error message pop up and an error with simple_threads.c. Seems like a bug? (Full error log)
  5. Tooltip for Eclipse import talks about running it in the projects directory: This seems like a bug – the string is written specifically for the command line interface. It should be more generic.

Up Next

Next, we’ll talk about some ways to address some of these issues, and hopefully walk through some sketchy mockups. This one might take a bit longer because I haven’t sketched anything out yet. If I’m not able to post Part 3 this week, expect it sometime next week.

DevAssistant Heuristic Review Part 1: Use Case Walkthroughs

You might be asking yourself, “What the heck is a heuristic review?”

It’s just a fancy term; I learned it from reading Jakob Nielsen’s writings. It’s a simple process of walking through a user interface (or product, or whatever,) and comparing how it works to a set of general principles of good design, AKA ‘heuristics.’

To be honest, the way I do these generally is to walk through the interface and document the experience, giving particular attention to things that jump out to me as ‘not quite right’ (comparing them to the heuristics in my head. :) ) This is maybe more accurately termed an ‘expert evaluation,’ then, but I find that term kind of pompous (I don’t think UX folks are any better than the folks whose software they test,) so ‘heuristic review’ it shall be!

Anyway, Sheldon from the DevAssistant team was interested in what UX issues might pop out to me as I kicked the tires on it. So here’s what we’re going to do:

  • Here in Part 1, I’ll first map out all the various pieces of the UI so we can get a feel for everything that is available. Then, I’ll walk through four use cases for working with the tool, detailing all the issues I run into and various thoughts around the experience.
  • In Part 2, I’ll analyze the walkthrough results and create a categorized master list of all the issues I encountered.
  • In Part 3, I’ll suggest some fixes / redesigns to address the issues catalogued in Part 2.

Okay – ready for this? :) Let’s go!

Setup Wizard Mapping

(This is the initial dialog that appears when you start DevAssistant.)

Screenshot of the initial DevAssistant screen

I’m starting this review of DevAssistant’s GUI by walking through each tab and mapping out a hierarchy of how it is arranged at a high level. This helps me get an overall feel for how the application is laid out.

  • Create Project
    • C
    • C++
    • Java
      • Simple Apache Maven Project
      • Simple Java ServerFaces Projects
    • Node.js
      • Express.JS Web Framework
      • Node.JS application
    • Perl
      • Basic class
      • Dancer
    • PHP
      • Apache, MySQL, and PHP Helper
    • Python
      • Django
      • Flask
      • Lib
      • Python GTK+ 3
    • Ruby
      • Ruby on Rails
  • Modify Project
    • C/C++ projects
      • Adding header
      • Adding library
    • Docker
      • develop
    • Eclipse Import
    • Github
      • Create github repository
    • Vim Setup
  • Prepare Environment
    • Custom Project
    • DevAssistant
  • Custom Task
    • Make Coffee

Use Case Testing

I’m going to come up with some use cases based on what I know about DevAssistant and try to complete them using the UI.

Use Cases for Testing

  1. Create a new website using Django.
  2. Create a new C project, using Eclipse as a code editor.
  3. Import a project I already have on my system that I have cloned from Github, and import it into Eclipse.
  4. Begin working on an upstream project by locally cloning that project and creating a development environment around it.

Use Case 1: Create a new website using Django

I’m not much of a Django expert, so this may end up being hilarious. So I know Django is Python-based, and this is a new project, so I click on the “Create Project” tab, then I click on “Python.” I select “Django” from the little grey menu that pops up. The little grey menu looks a little bit weird and isn’t the type of widget I was expecting, but it works I guess, and I successfully click on “Django.” Note: the items in the submenu under Python are organized alphabetically.

An example of the little gray menu – this one appears when you click on the Python button. Not all of the buttons have a gray menu.

A new screen pops up, and the old one disappears. I had the old screen (the main DevAssistant window) placed in the lower right of my screen. The new screen that appears (Project: Create Project -> Python -> Django) jumps up and to the right – it’s centered perfectly on my left monitor. It looks like I’m meant to feel that this is a second page of the same window (for example, the way the subitems of GNOME control center work.) Instead, though, it feels like a separate window because it’s a little bit larger than the first window and it jumped across the screen so dramatically.

Django project setup window

This new window is a bit overwhelming for me. First it asks for a project name. I like ponies, and Django does too, so I call my project “Ponies.”

Next, it wants to know where to create the project. It suggests /home/duffy, but man is my home pretty messy. I click on “Browse” to pick somewhere else, thinking I might create a “Projects” subdirectory under /home/duffy to keep things nice and clean. There isn’t a way to create a directory in this file chooser, so I drop down to a terminal and create the folder I want, then fill out the field to say, “/home/duffy/Projects” and move on.

Now, it’s time to look through available options. Hm. This is definitely the most overwhelming part of the screen. Looking through the options… two seem to be related to coding environments – there’s a checkbox for eclipse, and there’s a checkbox for vim. There’s an option to use Python 3 instead of Python 2. There’s an option to add a Dockerfile and create a Docker image. There’s a virtualenv option, and a deps-only option. I think I understand all of these options except for “Deps-only,” which is labeled, “Only install dependencies.” If I don’t only install dependencies, then what happens? What is the alternative to clicking that box? I’m not sure.

Anyway, back to the editors. I like vim, but this is a fresh desktop spin installation and I know that doesn’t come with vim preinstalled. I wonder what will happen if I pick vim. I decide to do it.

Oh, and there’s a Github option. It will create a GitHub repo and push the sources there. That is pretty slick; I click that checkbox too and provide my github username. Then I click “Run” in the lower right corner. (Note that a lot of new GNOME 3 apps have the button to progress forward in the upper right.)

Next, a screen pops up with a log that spits out some log-style spew, looking like it’s installing some RPMs. Quickly, a modal dialog pops up that says:

Installing 20 RPM packages by Yum. Is this ok?

[ No ] [ Yes ] [ Show packages ]

The modal dialog has the same problem of being centered to the whole desktop rather than centered along where the parent window was. I like that it offers to show me which packages it’s going to install. I click on “Show packages.” I get a very nice scrollable display in the same window, neat and clean. I click “hide packages” to hide the list. Then I click “Yes” to move forward.

Now things got a bit weird. My whole screen went black. A gnome-shell style black dialog is in the center of this black screen and it is asking for my root password. I don’t think the screen behind the dialog should be black. It feels a little weird. (turns out this was a F21 TC4 issue only.) I type in my root password and click to continue.

And it seems the process failed. (To be fair, I am doing this on an alpha test candidate – F21 TC4 – so the issue may be with the repos and not DevAssistant’s fault.) It says:

Resolving RPM dependencies ...
Failed to install dependencies, exiting.

I like the option to copy the error message to the clipboard, and to view and copy to clipboard the debug logs. It errored out because of a packaging issue, it looks like:

Package python-django-bash-completion-1.6.6-1.fc21.noarch.rpm is not signed
Failed to install dependencies, exiting.
Failed to install dependencies, exiting.

There is also a “Back” button and a “Main window” button; I’m not sure which to click. I try “Back” first. That brings me back to the screen I filled all the details in for my project; however I know it won’t work now when I click run.

So at this point, I emailed Sheldon to let him know that I ran into some breakage, and he told me that it wasn’t necessary to test on F21 TC4 – what’s in F20 at this point is reasonably recent and worth doing a heuristic review on. So let’s continue from this point, using F20. :)

This time, I pick the same options on the create new Django project screen, and when I press forward, it says it’s installing 21 packages. Okay. It seems to be going, and I realize after wasting precious minutes of life reading crap on Twitter that it has been quite some time. I check back on the DevAssistant window – it looks like it’s still working, but it’s kind of not clear what it’s really doing.

Then I notice the tiny little dialog peering down at me from the extreme upper left corner of my laptop screen (it is easy to find in this screenshot; harder when other windows are open):

Hey, little guy! Whatcha doing up there?

So this is another window positioning issue. I drag that little guy (who has some padding and alignment issues himself, but nothing earth-shattering) closer to the center of the screen so I can fill him out. The problem is, I’m not really sure of the context – why does it want my name? Does it just want my nickname, my first name, my full name, my IRC handle…? I end up typing ‘mairin’ and hit enter.

This little dialog is centered with the main window, thankfully.

And then, something clicks. “I bet it wants my name and email address for the git config.” Well, crap. I already typed in “mairin,” and that’s not the name I want on my commits. I hit “Cancel” on the email dialog shown above, and try to “start over” by going back to the main window and creating the “Ponies” project again. But… ugh:

I changed the path from ~/Projects to ~/Code just because.

So there are a few problems here:

  • The form field for my name lacked enough context for me to understand what information the software really wanted from me.
  • I figured out what the software wanted from me too late – and there isn’t any way for me to go back and fix it via the user interface, as far as I can tell.
  • There’s a transactional issue: in order to completely finish creating the project as I requested, DevAssistant needed some additional information. I bailed out of providing that information, leaving the project in an unknown state. (Will it work, and just miss the features that required information I didn’t provide? Since I bailed out early, which features will be missing? Is there a way to fix it by filling them in afterwards? Should I just delete it from the filesystem and start over again?)

The latter is what I did – I went into nautilus, nuked my ~/Code/Ponies directory, and ran through the Django project creation process (same options) from the main DevAssistant window one more time.

Unfortunately, it remembered the name I had given it. Normally this is a wonderful thing – interfaces that ask the same question of a user over and over again are annoying. In this instance, however, the politeness of remembering my name was a bit unforgiving – how could I correct my name now? Will all projects I create in the future using DevAssistant have “mairin” as my name instead of the “Máirín Duffy” of my vain desires??
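The name DevAssistant asked for most likely ends up in git’s configuration, so – assuming that’s where it went, which I haven’t verified – it can be corrected after the fact from a terminal. The project path below is the one from this walkthrough, and the email address is made up:

```shell
# A hedged sketch: fix the name git will put on commits, assuming
# DevAssistant stored it in the repository's git config.
cd ~/Code/Ponies                            # project path from this walkthrough
git config user.name "Máirín Duffy"         # per-repository setting
git config user.email "duffy@example.com"   # hypothetical address
# or, to fix it for every repository on this machine at once:
git config --global user.name "Máirín Duffy"
git config user.name                        # prints the effective value
```

Of course, expecting a user to know this is exactly the kind of thing a GUI like DevAssistant is supposed to save people from.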

Well, a rose by any other name would smell as sweet – whatever. Let’s carry on. So I am asked for my GitHub password after my email address, which I provide, and soon afterwards I am greeted with a completed progress screen, a link to the project I created on GitHub, and a perusable log of everything DevAssistant just did:

That was definitely an easy way to create a project on GitHub.

So I think I’m done at this point? Maybe? I’m not 100% clear where to go from here. Some potential issues I’ll note at this point:

  • The project I created on GitHub through this process is completely empty. I was expecting some Django-specific boilerplate content to be present in the repo by default, and maybe some of the files suggested by GitHub (README, LICENSE, .gitignore.) But maybe that part happens later?
  • There’s an ssh issue in the logs. Ah. Now we see why my repo on GitHub is empty:

    Problem pushing source code: ssh_askpass: exec(/usr/libexec/openssh/ssh-askpass): No such file or directory
    Host key verification failed.
    fatal: Could not read from remote repository.

I didn’t see a seahorse dialog pop up asking me to unlock my ssh key. I open up seahorse – it looks like DevAssistant made an RSA key for me. I’m not sure what’s going on here, then. It never asked me for a passphrase to create a new key.
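One hedged guess at what went wrong here: “Host key verification failed” usually means github.com wasn’t yet in ~/.ssh/known_hosts, and with no ssh-askpass binary installed, ssh had no way to pop up a prompt asking me to accept the host key. Pre-seeding the host key and retrying the push by hand might look like this (a sketch, not necessarily what DevAssistant should do; the project path is the example from this walkthrough):

```shell
# Accept github.com's host key ahead of time so ssh never needs to
# prompt, then retry the push that failed.
mkdir -p ~/.ssh
ssh-keyscan github.com >> ~/.ssh/known_hosts
cd ~/Code/Ponies        # example path from this walkthrough
git push -u origin master
```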

I have an interesting test case in that I have a new laptop that I didn’t copy my real ssh key over to yet. I wonder how this would have gone if I did have my real ssh key on this system…

Then, I get an email from GitHub:

The following SSH key was added to your account:


If you believe this key was added in error, you can remove the key and disable
access at the following location:


If the new ssh key was added to my account, then why didn’t this work? :-/

My big question now is: what do I do next? Here is what I have:

  • A new boilerplate Django project in my home directory.
  • An empty GitHub project.
  • Some stuff that got added to vim (how do I use it?)

What I don’t have that I was expecting:

  • Some kind of button or link or something to the boilerplate code that was created locally with some tips / hints / tricks for how to work with it. (Links to tutorials? Open up some of the key files you start working with in that environment in tabs in Geany or Eclipse or some IDE? Okay so I selected vim – tell me how to open this up in vim?)
  • Some acknowledgement of the ‘Ponies’ project I just created in the DevAssistant UI. I feel that my ponies have been forgotten. There isn’t any tab or space in the interface where I can view a list of projects I created using DevAssistant and manage them (e.g., like changing the ssh key or changing my name / email address associated with the project.)

I’m feeling a bit lost. Like when the lady in my GPS is telling me how to get to Manhattan and she stops talking to me somewhere in the Bronx.

Use Case 2: Create a new C project, using Eclipse as a code editor.

Back to the main window in DevAssistant! I click on the “C” button and right away am greeted with the “Create Project -> C” screen, which I dutifully fill out to indicate a desire to use Eclipse and to upload to GitHub:

Screenshot from 2014-08-30 23:14:44

A modal alert dialog asks me if it’s okay to install 139 packages (and again, helpfully offers a list of them if I want it.)

(The alignment within the dialog is a bit off; there’s a lot of extra padding on the bottom and the buttons are a bit high up. The OK button should probably be the right-most one, and a different widget used for ‘show packages’ – like a disclosure triangle, maybe.)

The dialog that asks for permission to install required dependencies.

First, I click to show the list of packages. Now I see why there is so much padding on the bottom of the dialog. :) But it’s not enough space to comfortably skim the list of dependencies:


I drag out the window size to make it a bit bigger to more comfortably view the list. Some package names have a “1:” in front of them, some have “2:” in front of them, some have nothing in the front. I’m not sure why.


Anyway, enough playing around. I agree it’s okay to install the dependencies.

I watch the dialog. 139 packages is a lot of packages. While they are downloaded, there’s no progress bar or animation or anything to let me know that it’s still actively working and not crashed or otherwise unstable. The only indications I have are the cursor getting set to spinner mode when I go to the DevAssistant window, and the text, “Downloading Packages:” at the bottom of the visible log in the DevAssistant window:

Screenshot from 2014-08-30 23:20:00

After a little while, unfortunately, things didn’t go so well:

DevAssistant setup wizard_104

Here’s the full error log.

So now I’m not sure what state I’m left in. The “DevAssistant setup wizard” window has grayed out “Main window” and “Debug logs” buttons – the only live button is “Copy to clipboard.” I click on “x” in the upper right corner and it tries to quit but it doesn’t seem to do anything. Then I notice the large Java error popup window hidden behind my browser window:

This was too long to display fully on my 2560×1440 monitor… the button to close it wasn’t accessible. Luckily I know how to Alt+F4.

Once I closed that window, the main DevAssistant wizard window changed, and I was able to get access to the main window and back buttons.

On to the next use case!

Use Case 3: Import a project I already have on my system that I have cloned from Github, and import it into Eclipse

All right, so what project should I import? I’m going to import my selinux-coloring-book repo. :) This is a git repo I created on github and have synced locally. Let’s see if I can import it and open it in eclipse.

So I go back to the main DevAssistant setup wizard window and I click on the ‘Modify Project’ tab along the top (is this the right one to use? I’m not sure):

The “Modify Project” tab

I’m not sure whether I should do “Eclipse Import” or “Github.” If I hover over the Github button, it says:

Subassistants of Github assistant provide various ways to work with Github repos. Available subassistants: Create Github repository.

While the first sentence of the description makes this seem like the right choice, the last sentence gives me the sense that the only thing this button can do is create a new github repo since that seems to be the only available subassistant (whatever a subassistant is.)

The Eclipse Import hover message is:

This assistant can import already created project into Eclipse. Just run it in the projects directory.

This seems like what I want, except the last line has me confused. I’m running a UI, so why is it telling me to run something in a directory? (I’m assuming this is maybe a shared help text with a command-line client, so it wasn’t written with the GUI in mind?) Anyway, I’m going to go with the “Eclipse Import” button.

Again, the main DevAssistant window disappears and a new window pops up, ignoring my window placement of the first window and centering itself on top of my windows in the middle of my active screen. Here’s what that new window looks like:

Window shown after the Eclipse Import process is started.

So I notice a few issues on this screen (although note some of them may be because I’m not a real developer and I haven’t used Eclipse in years):

  • There are two text fields where the user can specify a path on the file system. While such paths are usually pretty long, the fields aren’t wide enough to show much beyond the portion of my path that points to my home directory – /home/duffy. So these fields should probably be wider, given the length of these kinds of paths.
  • I’m not sure what Deps-Only is going to do – what kind of project is it assuming I have? Is it going to somehow detect the dependencies (from make files?) and install them without importing the project? Why would I want to do that?
  • The options are listed out with checkboxes – and you can click them all at once. Does that make sense to do? I guess it does – I could specify the Eclipse workspace directory (although is that in ~/workspace or $PROJECT-PATH/eclipse?), and the path to the project, and that I only want deps-only. It seems like maybe ‘deps-only’ is an option that is subject to path though – if I don’t specify a path, how is it going to detect deps?
  • In fact, the process fails completely if I only select the “Deps-Only” checkbox and nothing else. So this selection shouldn’t be possible.

I end up just specifying my path (which points to /home/duffy/Repositories/selinux-coloring-book,) checking nothing else off, and clicking “run.” This doesn’t work – I get a blank “Log from current process” window that says “Failed” on the bottom:

DevAssistant setup wizard_110

On a whim, I check to make sure Eclipse is installed – yeah, it is. That’s not the issue. I go back and check off the “Eclipse” checkbox in addition to the “Path” checkbox – it looks like the failed C project creation made a ~/workspace directory, so I use that one. I hit run again…

DevAssistant setup wizard_110

I’m not sure where to go from here, so I’ll move on to the next use case.

Use Case 4: Begin working on an upstream project by locally cloning that project and creating a development environment around it

What upstream project should I work on? Let’s try something I don’t already have synced locally that isn’t too huge. I’ll choose fedora-tagger.

Okay back to the main DevAssistant window:

Screenshot of the initial DevAssistant screen

Where do I start? Not “Create Project,” because that’s for a new project. I look under “Prepare Environment:”

Prepare environment tab

I’m not going to be working on DevAssistant or OpenStack, so I examine the hover text for “Custom Project:”

Only use this with projects whose upstream you trust, since the project can specify arbitrary custom commands that will run on your machine. The custom assistant sets up environment for developing custom project previously created with DevAssistant.

Hm. So two things here:

  • This won’t work, because fedora-tagger wasn’t created with DevAssistant.
  • I don’t think I would trust any project with this option, unless I could at least examine the commands it specified to run before running them. It would be nice to have a way to do that. Looking at the dialog linked to this button, it doesn’t look like there is.

Okay, so now what will I do? It doesn’t seem like there is a way to complete this use case under “Prepare Environment” unless my upstream is OpenStack or DevAssistant. The “Custom Task” tab won’t work, because the only option there is “Make Coffee.” So I’ll try the “Modify Project” tab:

The “Modify Project” tab

Even though I know from an earlier use case that the hover text for the “Github” button seems to indicate that it can only create new Github projects, I try it. Nope, it just lets you create a new project. Hm. Well fedora-tagger is a python project. There’s a C/C++ projects button – maybe I should pick a C project instead.

So I’ll try a C project. What is written in C? I know a lot of GNOME stuff is written in C; I’m sure I could find something there. So I look at the tooltip on the C/C++ projects button:

This assistant will help you ito [sic] modify your C/C++ projects already created by devassistant. Avalaible [sic] subassistants: Adding header, Adding library

Well, I don’t know any GNOME C projects that were created with DevAssistant. :-/ At this point, I’m not sure how to move forward. I click on the “Get help…” link in the upper right:

Problem loading page - Mozilla Firefox_122

It looks like doc.devassistant.org doesn’t work at all. Nor does devassistant.org… it must be down. It’s still up on readthedocs.org, though – okay, let’s see:

Preparing: Custom – checkout a custom previously created project from SCM (git only so far) and install needed dependencies

This seems to be what I need? But the button for “Custom Project” said that the project should have been created with DevAssistant. Let’s try it anyway. :) I’ll use gnome-hello, which is a small C demo project.

First screen for “Custom Project”

A couple things on this screen:

  • All of the items are checkboxes, except for URL which has a red asterisk (‘*’) – why? Is that one required? Then it should probably have a checked and grayed checkbox?
  • Again, the text fields for typically long path strings are pretty narrow.

I paste in the gnome-hello git url (git://git.gnome.org/gnome-hello) and hit “Run.”

Custom Project completion dialog.

Hmm, okay. So it didn’t find a .devassistant and bailed out. What did it do on my filesystem?

Project directory created by Custom Project wizard

I would have preferred it not dump the git repo in my home dir – but I suppose the path was an option I could have specified on the screen before. It might be better if I could specify that I keep my git repos in ~/Repositories, so that DevAssistant uses that by default and I don’t have to input it every time.
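Roughly what the wizard appears to have done on disk, sketched as shell – this is my reconstruction from the observed behavior, not DevAssistant’s actual code:

```shell
# Reconstruction of the observed behavior: clone into the home
# directory, then look for DevAssistant's metadata and bail if absent.
cd ~
git clone git://git.gnome.org/gnome-hello
if [ ! -e gnome-hello/.devassistant ]; then
    echo "no .devassistant found; bailing out"
fi
```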

I don’t think there’s anything else I can do here, so I’ll finish here.

On to analysis!

In Part 2, we’ll go through the walkthrough and pull out all of the issues encountered, then sort them into different categories. Look for that post soon. :)

Fanart by Anastasia Majzhegisheva – 15

Maya Morevna by Anastasia Majzhegisheva

Winter is coming here in Siberia. So, Anastasia brings a paper-and-pencil artwork of Maya Morevna, with a winter spirit. And yes – there is no typo in the character name here.

September 22, 2014

Pi Crittercam vs. Bushnell Trophycam

I had the opportunity to borrow a commercial crittercam for a week from the local wildlife center. [Bushnell Trophycam vs. Raspberry Pi Crittercam] Having grown frustrated with the high number of false positives on my Raspberry Pi based crittercam, I was looking forward to see how a commercial camera compared.

The Bushnell Trophycam I borrowed is a nicely compact, waterproof unit, meant to strap to a tree or similar object. It has an 8-megapixel camera that records photos to the SD card -- no wi-fi. (I believe there are more expensive models that offer wi-fi.) The camera captures IR as well as visible light, like the PiCam NoIR, and there's an IR LED illuminator (quite a bit stronger than the cheap one I bought for my crittercam) as well as what looks like a passive IR sensor.

I know the TrophyCam isn't immune to false positives; I've heard complaints along those lines from a student who's using them to do wildlife monitoring for LANL. But how would it compare with my homebuilt crittercam?

I put out the TrophyCam the first night, with bait (sunflower seeds) in front of the camera. In the morning I had ... nothing. No false positives, but no critters either. I did have some shots of myself, walking away from it after setting it up, walking up to it to adjust it after it got dark, and some sideways shots while I fiddled with the latches trying to turn it off in the morning, so I know it was working. But no woodrats -- and I always catch a woodrat or two in PiCritterCam runs. Besides, the seeds I'd put out were gone, so somebody had definitely been by during the night. Obviously I needed a more sensitive setting.

I fiddled with the options, changed the sensitivity from automatic to the most sensitive setting, and set it out for a second night, side by side with my Pi Crittercam. This time it did a little better, though not by much: one nighttime shot with something in it, plus one shot of someone's furry back and two shots of a mourning dove after sunrise.

[blown-out image from Bushnell Trophycam] What few nighttime shots there were were mostly so blown out you couldn't see any detail to be sure. Doesn't this camera know how to adjust its exposure? The shot here has a creature in it. See it? I didn't either, at first. It's just to the right of the bush. You can just see the curve of its back and the beginning of a tail.

Meanwhile, the Pi cam sitting next to it caught eight reasonably exposed nocturnal woodrat shots and two dove shots after dawn. And 369 false positives where a leaf had moved in the wind or a dawn shadow was marching across the ground. The TrophyCam only shot 47 photos total: 24 were of me, fiddling with the camera setup to get them both pointing in the right direction, leaving 20 false positives.

So the Bushnell, clearly, gives you fewer false positives to hunt through -- but you're also a lot less likely to catch an actual critter. It also doesn't deal well with exposures in small areas and close distances: its IR light source seems to be too bright for the camera to cope with. I'm guessing, based on the name, that it's designed for shooting deer walking by fifty feet away, not woodrats at a two-foot distance.

Okay, so let's see what the camera can do in a larger space. The next two nights I set it up in large open areas to see what walked by. The first night it caught four rabbit shots, with only five false positives. The quality wasn't great, though: all long exposures of blurred bunnies. The second night it caught nothing at all overnight, but three rabbit shots the next morning. No false positives.

[coyote caught on the TrophyCam] The final night, I strapped it to a piñon tree facing a little clearing in the woods. Only two morning rabbits, but during the night it caught a coyote. And only 5 false positives. I've never caught a coyote (or anything else larger than a rabbit) with the PiCam.

So I'm not sure what to think. It's certainly a lot more relaxing to go through the minimal output of the TrophyCam to see what I caught. And it's certainly a lot easier to set up, and more waterproof, than my jury-rigged milk carton setup with its two AC cords, one for the Pi and one for the IR sensor. Being self-contained and battery operated makes it easy to set up anywhere, not just near a power plug.

But it's made me rethink my pessimistic notion that I should give up on this homemade PiCam setup and buy a commercial camera. Even on its most sensitive setting, I can't make the TrophyCam sensitive enough to catch small animals. And the PiCam gets better picture quality than the Bushnell, not to mention the option of hooking up a separate camera with flash.

So I guess I can't give up on the Pi setup yet. I just have to come up with a sensible way of taming the false positives. I've been doing a lot of experimenting with SimpleCV image processing, but alas, it's no better at detecting actual critters than my simple pixel-counting script was. But maybe I'll find the answer, one of these days. Meanwhile, I may look into battery power.
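For anyone curious, the pixel-counting idea can be sketched in a few lines of Python. This is a minimal illustration of thresholded frame differencing, not my actual crittercam script; the frame sizes and thresholds here are made up for the example:

```python
# Minimal sketch of pixel-counting motion detection: compare two
# grayscale frames and trigger when enough pixels changed.
# (An illustration of the idea only -- not the real crittercam code.)

def changed_pixels(frame1, frame2, threshold=30):
    """Count pixels whose grayscale value (0-255) changed by more
    than threshold. Frames are 2-D lists of equal dimensions."""
    count = 0
    for row1, row2 in zip(frame1, frame2):
        for p1, p2 in zip(row1, row2):
            if abs(p1 - p2) > threshold:
                count += 1
    return count

def motion_detected(frame1, frame2, threshold=30, min_changed=50):
    """Trigger if enough pixels changed between two frames."""
    return changed_pixels(frame1, frame2, threshold) >= min_changed

if __name__ == '__main__':
    still = [[100] * 100 for _ in range(100)]
    moved = [row[:] for row in still]
    for y in range(40, 60):            # a 20x20 "critter" appears
        for x in range(40, 60):
            moved[y][x] = 200
    print(motion_detected(still, moved))   # True: 400 pixels changed
```

The false-positive problem is visible right in the parameters: a waving leaf can easily flip more than min_changed pixels, which is why tuning the two thresholds only gets you so far.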

We'll celebrate the GNOME 3.14 release on Tuesday evening in Lyon

In French, for a change :)

On Tuesday evening, September 23rd, a few of us will meet around 6:30 pm at the Smoking Dog for drinks, followed by an Indian dinner near the St-Jean metro station.

Feel free to sign up on the Wiki, whether you're a GNOME user, a developer, or simply a friend of free software.

See you Tuesday!

September 21, 2014

Fresh software from the 3.14 menu

Here is a small recap of the GNOME 3.14 features I worked on. Some are already well publicised through blogs:
And obviously loads of bug fixes, and patch reviews. And I do mean loads :)

To look forward to

If all goes according to plan, I'll be able to merge the aforementioned automatic rotation support into systemd/udev. The kernel API is pretty bad, which makes the user-space code look bad...

The first parts of ebooks support in gnome-documents have already been written, scheduled for 3.16.

And my favourites

Note: With links that will open up like a Christmas present when GNOME 3.14 is released.

There are a lot of big, new features in GNOME 3.14. The Adwaita rewrite made it possible to polish the theme greatly. The captive portal support is very useful; those of you who travel will enjoy it (I certainly have!).

But my favourite new feature has to be the gestures support in gnome-shell. I'll make good use of that :)

September 20, 2014

Concept of Baba Yaga Character

Here is a concept for another character – Baba Yaga. You can find a lot of references to this name in Russian folklore, but in our story this old lady is an outstanding cybernetics scientist. That strong and uncompromising person bears a dark past, directly related to the birth of the main antagonist of the story – Koshchei The Deathless. Many thanks to Anastasia Majzhegisheva for the artwork.

Baba Yaga concept by Anastasia Majzhegisheva.

Fri 2014/Sep/19

  • I finally got off my ass and posted my presentation from GUADEC: GPG, SSH, and Identity for Beginners (PDF) (ODP). Enjoy!

  • Growstuff.org, the awesome gardening website that Alex Bailey started after GUADEC 2012, is running a fundraising campaign for the development of an API for open food data. If you are a free-culture minded gardener, or if you care about local food production, please support the campaign!

September 19, 2014

Fanart by Anastasia Majzhegisheva – 14

Marya Morevna. Pencil artwork by Anastasia Majzhegisheva.

Mirror, mirror

A female hummingbird -- probably a black-chinned -- hanging out at our window feeder on a cool cloudy morning.

[female hummingbird at the window feeder]

September 18, 2014

Woodcut/Hedcut(ish) Effect

Rolf as a woodcut/hedcut

I was working on the About page over on PIXLS.US the other night. I was including some headshots of myself and one of Rolf Steinort when I got pulled off onto yet another tangent (this happens often to me).

This time I was thinking of those awesome hand-painted(!) portraits by the artist Randy Glass, used by the Wall Street Journal.

Of course, the problem was that I had neither the time nor the skill to hand paint a portrait in this style.

What I did have was a rudimentary understanding of a general effect that I thought would look neat. So I started playing around. I finally got to something that I thought looked neat (see lede image), but I didn't take very good notes while I was playing.

This meant that I had to go back and re-trace my steps and settings a couple of times before I could describe exactly what it was I did.

So after some trial and error, here is what I did to create the effect you see.


Starting with your base image, desaturate using a method you like. I'm going to use an old favorite of mine, Mairi:

The base image, desaturated.

Duplicate this layer, and on the duplicate run the G'MIC filter, “Graphic Novel” by Photocomix.

Filters → G'MIC
Artistic → Graphic Novel

Check the box to "Skip this step" for "Apply Local Normalization", and adjust the "Pencil amplitude" to taste (I ended up at about 66). This gives me this result:

After running G'MIC/Graphic Novel

I then adjusted the opacity to taste on the G'MIC layer, reducing it to about 75%. Then create a new layer from visible (Right-click layer, “New from visible”).

Here is what I have so far:

On this new layer (should be called “Visible” by default), run the GIMP filter:

Filters → Artistic → Engrave

If you don't have the filter, you can find the .scm at the registry here.

The only settings I change are the “Line width”, which I set to about 1/100 of the image height, and the “Line type”, which I make sure is set to “Black on bottom”. Oh, and I set the “Blur radius” to 1.

This leaves me with a top layer looking like this:

After running Engrave

(If you want to see something cool, step back a few feet from your monitor and look at this image - the Engrave plugin is neat).

Now on this layer, I will run the G'MIC deformation filter “random” to give some variety to the lines:

G'MIC → Deformations → Random

I used an amplitude of about 2.35 in my image. We are looking to just add some random waviness to the engrave lines. Adjust to taste.

I ended up with:

Results after applying G'MIC/Random deformation to the engrave layer.

At this point I will apply a layer mask to the layer. I will then copy the starting desaturated layer and paste it into the layer mask.

I added a layer mask to the engraved layer (Right-click the layer, “Add layer mask...” - initialize it to white). I then selected the lowest layer, copied it (Ctrl/Cmd + C), selected the layer mask and pasted (Ctrl/Cmd + V). Once pasted, anchor the selection to apply it to the mask.

This is what it looks like with the layer mask applied:

The engrave layer with the mask applied

At this point I will use a brush and paint over the background with black to mask more of the effect, particularly from the background and edges of her face and hair. Once I'm done, I'm left with this:

After cleaning up the edges of the mask with black

I'll now set the layer blending mode to “Darken Only”, and create a new layer from visible again.

Add a layer mask to the new visible layer (should be the top layer), copy the layer mask from the layer below it (the engrave layer), and paste it into the top layer mask:

Now adjust the levels of the top layer (not the mask!), by selecting it, and opening the levels dialog:

Colors → Levels...

Adjust to taste. In my image I pulled the white point down to about 175.

At this point, my image looks like this:

After adjusting levels to brighten up the face a bit

At this point, create a new layer from visible again.

Now make sure that your background color is white.

On this new layer, I'll run a strange filter that I've never used before:

Filters → Distorts → Erase Every Other Row...

In the dialog, I'll set it to use “Columns”, and “Fill with BG”. Once it's done running, set the layer mode to “Overlay”. This leaves me with this:

After running “Erase Every Other Row...”

At this point, all that's left is to do any touchups you may want to do. I like to paint with white and a low opacity in a similar way to dodging an image. That is, I'll paint white with a soft brush on areas of highlights to accentuate them.

Here is my final result after doing this:

I'd recommend playing with each of the steps to suit your images. On some images, it helps to modify the parameters of the “Graphic Novel” filter to get a good effect. After you've tried it a couple of times through you should get a good feel for how the different steps change the final outcome.

As always, have fun and share your results! :)


There seem to be many steps, but it's not so bad once you've done it. In a nutshell:

  1. Desaturate the image, and create a duplicate of the layer.
  2. Run G'MIC/Graphic Novel filter, skip local normalization. Set layer opacity to about 40-60% (experiment).
  3. Create new layer from visible.
    1. Run Filters → Artistic → Engrave (not Filters → Distorts → Engrave!).
      • Set the “Line width” to ~1/100 of image height, black on bottom
    2. On the same engrave layer, run G'MIC → Deformations → Random
      • Set amplitude to taste
    3. Change layer mode to “Darken only”
    4. Add a layer mask, use the original desaturated layer for the mask
  4. Create new layer from visible
    1. Add a layer mask, using the original desaturated layer for mask again (or the mask from previous layer)
    2. Adjust levels of layer to brighten it up a bit
  5. Create (another) new layer from visible
    1. Set background color to white
    2. Run Filters → Distorts → Erase Every Other Row...
      • Set to columns, and fill with BG color
    3. Set layer blend mode to “Overlay”
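For the curious, the “Darken only” and “Overlay” blend modes used above boil down to simple per-pixel math. Here's a rough Python sketch of the formulas (my own approximation, with channel values normalized to the 0.0–1.0 range; GIMP's legacy Overlay mode differs slightly from the textbook formula shown here):

```python
# Approximate per-pixel math behind two of the blend modes used in
# this tutorial, with channel values normalized to 0.0-1.0.

def darken_only(base, blend):
    """'Darken only': keep whichever of the two pixels is darker."""
    return min(base, blend)

def overlay(base, blend):
    """Textbook overlay: darken the dark areas, brighten the bright
    ones, pivoting around mid-gray."""
    if base < 0.5:
        return 2 * base * blend
    return 1 - 2 * (1 - base) * (1 - blend)

if __name__ == '__main__':
    print(darken_only(0.8, 0.3))   # 0.3
    print(overlay(0.25, 0.5))      # 0.25 (a 0.5 blend leaves base alone)
    print(overlay(0.75, 0.5))      # 0.75
```

This is why the alternating white columns in “Overlay” mode lighten the image without crushing the engrave lines: a mid-gray blend value leaves the base pixel untouched, while white pushes it brighter.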

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, or make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t it’s not going to affect me writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.

September 17, 2014

What’s in a job title?

Over on Google+, Aaron Seigo in his inimitable way launched a discussion about people who call themselves community managers. In his words: “the “community manager” role that is increasingly common in the free software world is a fraud and a farce”. As you would expect when casting aspersions on people whose job is to talk to people in public, the post generated a great, and mostly constructive, discussion in the comments – I encourage you to go over there and read some of the highlights, including comments from Richard Esplin, my colleague Jan Wildeboer, Mark Shuttleworth, Michael Hall, Lenz Grimmer and other community luminaries. Well worth the read.

My humble observation here is that the community manager title is useful, but does not affect the person’s relationships with other community members.

First: what about alternative titles? Community liaison, evangelist, gardener, concierge, “cat herder”, ombudsman, Chief Community Officer, community engagement… all have been used as job titles to describe what is essentially the same role. And while I like the metaphors used for some of the titles like the gardener, I don’t think we do ourselves a service by using them. By using some terrible made-up titles, we deprive ourselves of the opportunity to let people know what we can do.

Job titles serve a number of roles in the industry: communicating your authority on a subject to people who have not worked with you (for example, in a panel or a job interview), and letting people know what you did in your job in short-hand. Now, tell me, does a “community ombudsman” rank higher than a “chief cat-herder”? Should I trust the opinion of a “Chief Community Officer” more than a “community gardener”? I can’t tell.

For better or worse, “Community manager” is widely used, and more or less understood. A community manager is someone who tries to keep existing community members happy and engaged, and grows the community by recruiting new members. The second order consequences of that can be varied: we can make our community happy by having better products, so some community managers focus a lot on technology (roadmaps, bug tracking, QA, documentation). Or you can make them happier by better communicating technology which is there – so other community managers concentrate on communication, blogging, Twitter, putting a public face on the development process. You can grow your community by recruiting new users and developers through promotion and outreach, or through business development.

While the role of a community manager is pretty well understood, it is a broad enough title to cover evangelist, product manager, marketing director, developer, release engineer and more.

Second: The job title will not matter inside your community. People in your company will give respect and authority according to who your boss is, perhaps, but people in the community will very quickly pigeon-hole you – are you doing good work and removing roadblocks, or are you a corporate mouthpiece, there to explain why unpopular decisions over which you had no control are actually good for the community? Sometimes you need to be both, but whatever you are predominantly, your community will see through it and categorize you appropriately.

What matters to me is that I am working with and in a community, working toward a vision I believe in, and enabling that community to be a nice place to work in where great things happen. Once I’m checking all those boxes, I really don’t care what my job title is, and I don’t think fellow community members and colleagues do either. My vision of community managers is that they are people who make the lives of community members (regardless of employers) a little better every day, often in ways that are invisible, and as long as you’re doing that, I don’t care what’s on your business card.


A follow up to yesterday's Videos new for 3.14

The more astute (or Wayland testing) amongst you will recognise mutter running a nested Wayland compositor. Yes, it means that Videos will work natively under Wayland.

Got to love indie films

It's not perfect, as I'm still seeing hangs within the Intel driver for a number of operations, but basic playback works, and the playback is actually within the same window and correctly hidden when in the overview ;)

Making of GNOME 3.14

The release of GNOME 3.14 is slowly approaching, so I stole some time from actual design work and created this little promo to show what goes into a release that probably isn’t immediately obvious (and a large portion of it doesn’t even make it in).

Watch on Youtube

I’d like to thank all the usual suspects that keep the wheels spinning, Matthias, Benjamin and Allan in particular. The crown goes to Lapo Calamandrei though, because the amount of work he’s done on Adwaita this cycle will really benefit us in the next couple of releases. Thanks everyone, 3.14 will be a great release*!

* I keep saying that every release, but you simply feel it when you’re forced to log in to your “old” GNOME session rather than jhbuild.

September 16, 2014

Videos 3.14 features

We've added a few small but nonetheless interesting features to Videos in GNOME 3.14.

Auto-rotation of videos

If you capture videos in portrait orientation on your phone, we are now able to rotate them automatically in the movie player, as well as in the thumbnails.

Better streaming

You can now seek anywhere inside streamed videos, even if we didn't download all the way to that point. That's particularly useful for long videos, or slow servers (or a combination of both).

Thumbnail generation

Finally, videos without thumbnails in your videos directory will have thumbnails automatically generated, without having to browse them in Files. This makes the first experience of videos more pleasing to the eye.

What's next?

We'll work on integrating Victor Toso's work on grilo plugins, to show information about the film or TV series on your computer, such as grouping episodes of a series together, showing genres, covers and synopsis for films.

With a bit of luck, we should also be able to provide you with more video content as well, through partners.

Back from Akademy 2014

So last week-end I came back from Akademy 2014. It was a loooong road, but really worth it of course!
Great to meet so many nice people, old friends and new ones. Lots of interesting discussions.

I won’t tell again everything that happened as it’s been already well covered in the dot and several blog posts on planet.kde, with lots of great photos in this gallery.

On my part, I’m especially happy to have met Jens Reuterberg and other people from the new Visual Design Group. We discussed the tools we have and how we could try to improve/resurrect Karbon and Krita vector tools, and shared ideas about some redesigns, like for the network manager…

Then another important point was the BoF we had with all other french people, about our local communication on the web and about planning for Akademy-Fr that will be co-hosted again with Le Capitole du Libre in Toulouse in November.

Thanks again to everyone who helped organize it, and to KDE e.V. for the travel support that allowed me to be there.

PS: And thanks a lot, Adriaan, for the story, that was very fun. Héhé, sure I’ll think about drawing it when I have time… ;)

September 14, 2014

Global key bindings in Emacs

Global key bindings in emacs. What's hard about that, right? Just something simple like

(global-set-key "\C-m" 'newline-and-indent)
and you're all set.

Well, no. global-set-key gives you a nice key binding that works ... until the next time you load a mode that wants to redefine that key binding out from under you.

For many years I've had a huge collection of mode hooks that run when specific modes load. For instance, python-mode defines \C-c\C-r, my binding that normally runs revert-buffer, to do something called run-python. I never need to run python inside emacs -- I do that in a shell window. But I fairly frequently want to revert a python file back to the last version I saved. So I had a hook that ran whenever python-mode loaded to override that key binding and set it back to what I'd already set it to:

(defun reset-revert-buffer ()
  (define-key python-mode-map "\C-c\C-r" 'revert-buffer) )
(setq python-mode-hook 'reset-revert-buffer)

That worked fine -- but you have to do it for every mode that overrides key bindings and every binding that gets overridden. It's a constant chase, where you keep needing to stop editing whatever you wanted to edit and go add yet another mode-hook to .emacs after chasing down which mode is causing the problem. There must be a better solution.

A web search quickly led me to the StackOverflow discussion Globally override key bindings. I tried the techniques there, but they didn't work.

It took a lot of help from the kind folks on #emacs, but after an hour or so they finally found the key: emulation-mode-map-alists. It's only barely documented -- the key there is "The “active” keymaps in each alist are used before minor-mode-map-alist and minor-mode-overriding-map-alist" -- and there seem to be no examples anywhere on the web for how to use it. It's a list of alists mapping names to keymaps. Oh, that clears it right up! Right?

Okay, here's what it means. First you define a new keymap and add your bindings to it:

(defvar global-keys-minor-mode-map (make-sparse-keymap)
  "global-keys-minor-mode keymap.")

(define-key global-keys-minor-mode-map "\C-c\C-r" 'revert-buffer)
(define-key global-keys-minor-mode-map (kbd "C-;") 'insert-date)

Now define a minor mode that will use that keymap. You'll use that minor mode for basically everything.

(define-minor-mode global-keys-minor-mode
  "A minor mode so that global key settings override annoying major modes."
  t "global-keys" 'global-keys-minor-mode-map)

(global-keys-minor-mode 1)

Now build an alist consisting of a list containing a single dotted pair: the name of the minor mode and the keymap.

;; A keymap that's supposed to be consulted before the first
;; minor-mode-map-alist.
(defconst global-minor-mode-alist (list (cons 'global-keys-minor-mode
                                              global-keys-minor-mode-map)))

Finally, set emulation-mode-map-alists to a list containing only the global-minor-mode-alist.

(setf emulation-mode-map-alists '(global-minor-mode-alist))

There's one final step. Even though you want these bindings to be global and work everywhere, there is one place where you might not want them: the minibuffer. To be honest, I'm not sure if this part is necessary, but it sounds like a good idea so I've kept it.

(defun my-minibuffer-setup-hook ()
  (global-keys-minor-mode 0))
(add-hook 'minibuffer-setup-hook 'my-minibuffer-setup-hook)

Whew! It's a lot of work, but it'll let me clean up my .emacs file and save me from endlessly adding new mode-hooks.

September 13, 2014

About panels and blocks - new elements for FreeCAD

I've more or less recently been working on two new features for the Architecture workbench of FreeCAD: Panels and furniture. None of these is in what we could call a finished state, but I thought it would be interesting to share some of the process here. Panels are a new type of object, that inherits all...

September 12, 2014

Thu 2014/Sep/11

September 11, 2014

Making emailed LinkedIn discussion thread links actually work

I don't use web forums, the kind you have to read online, because they don't scale. If you're only interested in one subject, then they work fine: you can keep a browser tab for your one or two web forums perennially open and hit reload every few hours to see what's new. If you're interested in twelve subjects, each of which has several different web forums devoted to it -- how could you possibly keep up with that? So I don't bother with forums unless they offer an email gateway, so they'll notify me by email when new discussions get started, without my needing to check all those web pages several times per day.

LinkedIn discussions mostly work like a web forum. But for a while, they had a reasonably usable email gateway. You could set a preference to be notified of each new conversation. You still had to click on the web link to read the conversation so far, but if you posted something, you'd get the rest of the discussion emailed to you as each message was posted. Not quite as good as a regular mailing list, but it worked pretty well. I used it for several years to keep up with the very active Toastmasters group discussions.

About a year ago, something broke in their software, and they lost the ability to send email for new conversations. I filed a trouble ticket, and got a note saying they were aware of the problem and working on it. I followed up three months later (by filing another ticket -- there's no way to add to an existing one) and got a response saying be patient, they were still working on it. 11 months later, I'm still being patient, but it's pretty clear they have no intention of ever fixing the problem.

Just recently I fiddled with something in my LinkedIn prefs, and started getting "Popular Discussions" emails every day or so. The featured "popular discussion" is always something stupid that I have no interest in, but it's followed by a section headed "Other Popular Discussions" that at least gives me some idea what's been posted in the last few days. Seemed like it might be worth clicking on the links even though it means I'd always be a few days late responding to any conversations.

Except -- none of the links work. They all go to a generic page with a red header saying "Sorry it seems there was a problem with the link you followed."

I'm reading the plaintext version of the mail they send out. I tried viewing the HTML part of the mail in a browser, and sure enough, those links worked. So I tried comparing the text links with the HTML:

Text version:
HTML version:

Well, that's clear as mud, isn't it?

HTML entity substitution

I pasted both links one on top of each other, to make it easier to compare them one at a time. That made it fairly easy to find the first difference:

Text version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&amp;t=gde&amp;midToken= ...
HTML version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&t=gde&midToken= ...

Time to die laughing: they're doing HTML entity substitution on the plaintext part of their email notifications, changing & to &amp; everywhere in the link.

If you take the link from the text email and replace &amp; with &, the link works, and takes you to the specific discussion.
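If you're scripting the fix, Python 3's standard library can undo the entity substitution for you. A minimal example, using the shortened URL from above:

```python
import html

# Undo the HTML entity substitution applied to the plaintext part
# of the mail: &amp; back to &.
text_link = "http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&amp;t=gde"
print(html.unescape(text_link))
# http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&t=gde
```

html.unescape handles all the standard entities, so it also works if they ever start mangling other characters besides the ampersands.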


Except you can't actually read the discussion. I went to a discussion that had been open for 2 days and had 35 responses, and LinkedIn only showed four of them. I don't even know which four they are -- are they the first four, the last four, or some Facebook-style "four responses we thought you'd like". There's a button to click on to show the most recent entries, but then I only see a few of the most recent responses, still not the whole thread.

Hooray for the web -- of course, plenty of other people have had this problem too, and a little web searching unveiled a solution. Add a pagination token to the end of the URL that tells LinkedIn to show 1000 messages at once.

It won't actually show 1000 (or all) responses -- but if you start at the beginning of the page and scroll down reading responses one by one, it will auto-load new batches. Yes, infinite scrolling pages can be annoying, but at least it's a way to read a LinkedIn conversation in order.

Making it automatic

Okay, now I know how to edit one of their URLs to make it work. Do I want to do that by hand any time I want to view a discussion? Noooo!

Time for a script! Since I'll be selecting the URLs from mutt, they'll be in the X PRIMARY clipboard. And unfortunately, mutt adds newlines so I might as well strip those as well as fixing the LinkedIn problems. (Firefox will strip newlines for me when I paste in a multi-line URL, but why rely on that?)

Here's the important part of the script:

import subprocess, sys
import gtk

primary = gtk.clipboard_get(gtk.gdk.SELECTION_PRIMARY)
if not primary.wait_is_text_available():
    sys.exit(0)
link = primary.wait_for_text()
# Strip mutt's added newlines, undo the entity substitution, and
# (in the full script) append the pagination token to the URL:
link = link.replace("\n", "").replace("&amp;", "&")
subprocess.call(["firefox", "-new-tab", link])

And here's the full script: linkedinify on GitHub. I also added it to pyclip, the script I call from Openbox to open a URL in Firefox when I middle-click on the desktop.

Now I can finally go back to participating in those discussions.

September 08, 2014

Dot Reminders

I read about cool computer tricks all the time. I think "Wow, that would be a real timesaver!" And then a week later, when it actually would save me time, I've long since forgotten all about it.

After yet another session where I wanted to open a frequently opened file in emacs and thought "I think I made a bookmark for that a while back", but then decided it's easier to type the whole long pathname rather than go re-learn how to use emacs bookmarks, I finally decided I needed a reminder system -- something that would poke me and remind me of a few things I want to learn.

I used to keep cheat sheets and quick reference cards on my desk; but that never worked for me. Quick reference cards tend to be 50 things I already know, 40 things I'll never care about and 4 really great things I should try to remember. And eventually they get buried in a pile of other papers on my desk and I never see them again.

My new system is working much better. I created a file in my home directory called .reminders, in which I put a few -- just a few -- things I want to learn and start using regularly. It started out at about 6 lines but now it's grown to 12.

Then I put this in my .zlogin (of course, you can do this for any shell, not just zsh, though the syntax may vary):

if [[ -f ~/.reminders ]]; then
  cat ~/.reminders
fi

Now, in every login shell (which for me is each new terminal window I create on my desktop), I see my reminders. Of course, I don't read them every time; but I look at them often enough that I can't forget the existence of great things like emacs bookmarks, or diff <(cmd1) <(cmd2).

And if I forget the exact keystroke or syntax, I can always cat ~/.reminders to remind myself. And after a few weeks of regular use, I finally have internalized some of these tricks, and can remove them from my .reminders file.

It's not just for tech tips, either; I've used a similar technique for reminding myself of hard-to-remember vocabulary words when I was studying Spanish. It could work for anything you want to teach yourself.

Although the details of my .reminders are specific to Linux/Unix and zsh, of course you could use a similar system on any computer. If you don't open new terminal windows, you can set a reminder to pop up when you first log in, or once a day, or whatever is right for you. The important part is to have a small set of tips that you see regularly.

September 05, 2014

SVG Working Group Meeting Report — London

The SVG Working Group had a four day Face-to-Face meeting just before The Graphical Web conference in Winchester (UK). The meetings were hosted by Mozilla in their London office.

Here are some highlights of the meeting:

Day 1


  • Symbol and marker placement shorthands:

    Map makers use symbols quite extensively. We decided at a previous meeting to add the ‘refX’ and ‘refY’ attributes (from <marker>) to <symbol> so that symbols can be aligned to a particular point on a map without having to do manual position adjustments. We have since been asked to provide ‘shorthand’ values for ‘refX’ and ‘refY’. I proposed adding ‘left’, ‘center’, and ‘right’ to ‘refX’ (defined as 0%, 50%, and 100% of the view box) as well as ‘top’, ‘center’, and ‘bottom’ to ‘refY’. These values follow those used in the ‘transform-origin’ property. We debated the usefulness and decided to postpone the decision until we had feedback from those using SVG for maps (see Day 4).

    For example, to center a symbol at the moment, one has to subtract off half the width and height from the ‘x’ and ‘y’ attributes of the <use> element:

      <symbol id="MySquare" viewBox="0 0 20 20">
        <rect width="100%" height="100%"/>
      </symbol>
      <use xlink:href="#MySquare" x="100" y="100" width="100" height="100"/>
    By using ‘refX’ and ‘refY’ set to ‘center’, one no longer needs to perform the manual calculations:

      <symbol id="MySquare" viewBox="0 0 20 20"
                      refX="center" refY="center">
        <rect width="100%" height="100%"/>
      </symbol>
      <use xlink:href="#MySquare" x="150" y="150" width="100" height="100"/>

    An example of a square <symbol> centered inside an SVG.

  • Marker and symbol overflow:

    One common ‘gotcha’ in using hand-written markers and symbols is that by default anything drawn outside the marker or symbol viewport is hidden. People sometimes naively draw a marker or symbol around the origin. Since this is the upper-left corner of the viewport, only one quarter of the marker or symbol is shown. We decided to change the default to not hide the region outside the viewport; however, if this is shown to break too much existing content, the change might be reverted (it is possible that some markers/symbols have hidden content outside the viewport).

  • Example of markers drawn around the origin point. Left: overflow=‘hidden’ (default); right: overflow=‘visible’. Only one-fourth of each marker on the left path is shown.

  • Variable-stroke width:

    Having the ability to vary stroke width along a path is one of the most requested features for SVG. Inkscape has the Live Path Effect ‘Power Stroke’ extension that does just that. However, getting this into a standard is not a simple process. We must deal with all kinds of special cases. The most difficult part will be to decide how to handle line joins. (See my post from the Tokyo meeting for more details.) As a step towards moving this along, we need to decide how to interpolate between points. One method is to use a centripetal Catmull-Rom function. Johan Engelen quickly added this function as an option to Inkscape’s Power Stroke implementation (which he wrote) for us to test.

Day 2


  • Path animations:

    In the context of discussing the possibility of having a canonical path decomposition into Bezier curves (for speed optimization) we briefly discussed allowing animation between paths with different structures. Currently, SVG path animations require the start and end paths to have the same structure (i.e. same types of path segments).

  • Catmull-Rom path segments:

    We had a lengthy discussion on the merits of Catmull-Rom path segments. The main advantage of Catmull-Rom paths is that the path goes through all the specified points (unlike Bezier path segments, where the path does not go through the handles). There are some disadvantages: adding a new segment changes the shape of the previous segment, the paths tend not to be particularly pretty, and if one is connecting data points, the curves have a tendency to overshoot or undershoot the data. The majority of the working group supports adding these curves, although there is some rather strong dissent. The SVG 2 specification already contains text on Catmull-Rom paths.

    After discussing the merits of Catmull-Rom path segments we turned to some technical discussions: what exact form of Catmull-Rom should we use, how should start and end segments be specified, how should Catmull-Rom segments interact with other segment types, how should paths be closed?

    Here is a demo of Catmull-Rom curves.

Day 3


  • <tref> decision:

    One problem I see with the working group is that it is dominated by browser interests: Opera, Google (both Blink), Mozilla (Gecko), and Adobe (Blink, Webkit, Gecko). (Apple and Microsoft aren’t actively involved with the group, although we did have a Microsoft rep at this meeting.) This sometimes leaves those using SVG for other purposes high and dry. Take the case of <tref>. This element is used in the air-traffic control industry to shadow text so it is visible on the screen over multi-color backgrounds. Admittedly, this is not the best way to do this (the new ‘paint-order’ property is a perfect fit) but the fact is that it is being used, and flight-control software can’t be changed at a moment’s notice. Last year there was a discussion on the SVG email list about deprecating <tref> due to some security issues. From reading the thread, it appeared the conclusion was that <tref> should be kept around, using the same security model as <use>.

    Deprecating <tref> came up again a few weeks ago and it was decided to remove the feature altogether and not just deprecate it (unfortunately I missed the call). The specification was updated quickly and Blink removed the feature immediately (Firefox had never implemented it… probably due to an oversight). It has reached the point of no return. It seems that Blink in particular is eager to remove as much cruft as possible… but one person’s cruft is someone else’s essential tool. (<tref> had other uses too, such as allowing localization of Web pages through a server.)

  • Blending on ‘fill’ and ‘stroke’:

    We have already decided to allow multiple paint servers (color, gradient, pattern, hatch) on fills and strokes. It has been proposed that blending be allowed. This would follow the model of the ‘background-blend-mode’ property. (Blending is already allowed between various elements using the ‘mix-blend-mode’ property, available in Firefox (nightly), Chrome, and the trunk version of Inkscape.)

  • CSS Layout Properties:

    The SVG attributes ‘x’, ‘y’, ‘cx’, ‘cy’, ‘r’, ‘rx’, and ‘ry’ have been promoted to properties (see SVG Layout Properties). This allows them to be set via CSS and animated via CSS animations. There is an experimental implementation in Webkit (nightly).

A test of support of ‘x’, ‘y’, ‘width’, and ‘height’ as properties. If supported, a pink square will be displayed in the center of the image.

Day 4


  • Shared path segments (Superpaths):

    Sharing path segments between paths is quite useful. For example, the boundary between two countries could be given as one sub-path, shared between the paths of the two countries. Not only does this reduce the amount of data needed to describe a map, but it also allows the renderer to optimize the anti-aliasing between the regions. There is an example polyfill available.

    We discussed various syntax issues. One requirement is the ability to specify the direction of the inserted path. We settled on directly referencing the sub-path as d="m 20,20 #subpath …" or d="m 20,20 -#subpath …", the latter for when the subpath should be reversed. We also decided that the subpath should be inserted into the path before any other operation takes place. This nominally excludes having separate properties for each sub-path, but it makes implementation easier.

  • Here, MySubpath is shared between two paths:

      <path id="MySubpath" d="m 150,80 c 20,20 -20,120 0,140"/>
      <path d="m 50,220 c -40,-30 -20,-120 10,-140 30,-20 80,-10
                       90,0 #MySubpath c 0,20 -60,30 -100,0 z"
            style="fill:lightblue" />
      <path d="m 150,80 c 20,-14 30,-20 50,-20 20,0 50,40 50,90
                       0,50 -30,120 -100,70 -#MySubpath z"
            style="fill:pink" />

    This SVG code would render as:


    The two closed paths share a common section.

  • Stroke position:

    An often requested feature is to be able to position a stroke with some percentage inside or outside a path. We were going to punt this to a future edition of SVG but there seems to be quite a demand. The easiest way to implement this is to offset the path and then stroke that (remember, one has to be able to handle dashes, line joins, and end caps). If we can come up with a simple algorithm to offset a stroke we will add this to SVG 2. This is actually a challenging task as an offset of a Bezier curve is not a Bezier… thus some sort of approximation must be used. The Inkscape ‘Path->Linked Offset’ is one example of offsetting. So is the Inkscape Power Stroke Live Path Effect (available in trunk).

  • Symbol and marker placement shorthands, revisited:

    After feedback from mappers, we have decided to include the symbol and marker placement shorthands: ‘left’, ‘center’, ‘right’, ‘top’, and ‘bottom’.

  • Units in path data:

    Currently all path data is in user units (pixels, if untransformed). There is some desire to be able to specify a unit in the path data. Personally, I think this is of little use: physical units (cm, mm, inch, etc.) are meaningless here, as there is no way to set a preferred pixel-to-inch ratio (and never will be). The one unit that could be useful is percent. In any case, we will be investigating this further.

Lots of other technical and administrative topics were discussed: improved DOM, embedding SVG in HTML, specification annotations, testing, etc.

September 04, 2014

Something Wicked This Way Comes...

I've been working on something, and I figured that I should share it with anyone who actually reads the stuff I publish here.

I originally started writing here as a small attempt at bringing tutorials for doing high-quality photography using F/OSS to everyone. So far, it's been amazing and I've really loved meeting and getting to know many like-minded folks.

I'm not leaving. Having just re-read that previous paragraph makes it sound like I am. I'm not.

I am, however, working on something new that I'd like to share with you. I've called it:

I've been writing to hopefully help fill in some gaps on high-quality photographic processes using all of the amazing F/OSS tools that so many great groups have built, and now I think it's time to move that effort into its own home.

F/OSS photography deserves its own site focused on demonstrating just how amazing these projects are and how fantastic the results can be when using them.

I'm hoping pixls.us can be that home. Pixel-editing for all of us!

I've been building the site in my spare time over the past couple of weeks (I'm building it from scratch, so it's going a little slower than just slapping up a WordPress/Blogger/CMS site). I want the new site to focus on the content above all else, and to make it as accessible and attractive as possible for users. I also want to keep the quality of the content as high as possible.

If anyone would like to contribute anything to help out: expertise, artwork, images, tutorials and more, please feel free to contact me and let me know. I'm in the process of porting my old GIMP tutorials over to the new site (and probably updating/re-writing a bunch of it as well), so we can have at least some content to start out with.

If you want to follow along my progress at the moment while I build out the site, I'm blogging about it on the site itself at http://pixls.us/blog. As mentioned in the comments, I actually do have an RSS feed for the blog posts, I just hadn't linked to it yet (working on it quickly). The location (should your feedreader not pick it up automatically now) is: http://pixls.us/blog/feed.xml.

If you happen to subscribe in a feedreader, please let me know if anything looks off or broken so I can fix it! :)

Things are in a constant state of flux at the moment (did I mention that I'm still building out the back end?), so please bear with me. Please don't hesitate for a moment to let me know if something looks strange, or with any suggestions as well!

When it's ready to go, I'm going to ask for everyone's help to get the word out, link to it, talk about it, etc. The sooner I can get it ready to go, the sooner we can help folks find out just how great these projects are and what they can do with them!


September 03, 2014

(Locally) Testing ansible deployments

I’ve always felt my playbooks were undertested. I knew about the possibility of spinning up fresh OpenStack instances with the ansible nova module, but felt that was too complex to be practical. Now I’ve found a quicker way to test your playbooks: using Docker.

In principle, all my test does is:

  1. create a docker container
  2. create a copy of the current ansible playbook in a temporary directory and mount it as a volume
  3. inside the docker container, run the playbook
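
Those three steps can be sketched roughly as follows. This is my own sketch, not the actual code: the image name (fedora:latest), the mount point and the playbook filename (site.yml) are assumptions, and the image needs ansible installed for step 3 to work.

```shell
#!/bin/sh
# Sketch of the three steps above; names and paths are placeholders.
test_playbook() (
    set -e
    # step 2: copy the current playbook into a temporary directory
    tmpdir=$(mktemp -d)
    cp -r "$1" "$tmpdir/playbook"
    # step 1: create a docker container with the copy mounted as a volume
    cid=$(docker run -d -v "$tmpdir/playbook:/playbook" \
        fedora:latest sleep infinity)
    # step 3: inside the docker container, run the playbook locally (no ssh)
    docker exec "$cid" ansible-playbook -c local -i localhost, /playbook/site.yml
    # clean up the container and the staging copy
    docker rm -f "$cid" >/dev/null
    rm -rf "$tmpdir"
)

if command -v docker >/dev/null && [ -d ./playbook ]; then
    test_playbook ./playbook
else
    echo "docker and/or ./playbook not available here; nothing to run"
fi
```

The function body runs in a subshell with set -e, so a failing task aborts that one test run without killing your shell.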

This is obviously not perfect, since:

  • running a playbook locally vs. connecting via ssh can be a different beast to test
  • it can become resource-intensive if you want to test different scenarios represented as separate docker images

There are probably more caveats, but for my small-scale needs it has been a workable solution so far.

Find the code on github if you’d like to have a look. Improvements welcome!


September 02, 2014

Using strace to find configuration file locations

I was using strace to figure out how to set up a program, lftp, and a friend commented that he didn't know how to use it and would like to learn. I don't use strace often, but when I do, it's indispensable -- and it's easy to use. So here's a little tutorial.

My problem, in this case, was that I needed to find out what configuration file I needed to modify in order to set up an alias in lftp. The lftp man page tells you how to define an alias, but doesn't tell you how to save it for future sessions; apparently you have to edit the configuration file yourself.

But where? The man page suggested a couple of possible config file locations -- ~/.lftprc and ~/.config/lftp/rc -- but neither of those existed. I wanted to use the one that already existed. I had already set up bookmarks in lftp and it remembered them, so it must have a config file already, somewhere. I wanted to find that file and use it.

So the question was, what files does lftp read when it starts up? strace lets you snoop on a program and see what it's doing.

strace shows you all system calls being used by a program. What's a system call? Well, it's anything in section 2 of the Unix manual. You can get a complete list by typing: man 2 syscalls (you may have to install developer man pages first -- on Debian that's the manpages-dev package). But the important thing is that most file access calls -- open, read, chmod, rename, unlink (that's how you remove a file), and so on -- are system calls.

You can run a program under strace directly:

$ strace lftp sitename
Interrupt it with Ctrl-C when you've seen what you need to see.

Pruning the output

And of course, you'll see tons of crap you're not interested in, like rt_sigaction(SIGTTOU) and fcntl64(0, F_GETFL). So let's get rid of that first. The easiest way is to use grep. Let's say I want to know every file that lftp opens. I can do it like this:

$ strace lftp sitename |& grep open

I have to use |& instead of just | because strace prints its output on stderr instead of stdout.
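
The |& detail is easy to test in isolation. Here's a tiny self-contained demonstration, where the hypothetical noisy function stands in for strace; the portable 2>&1 | form is what |& expands to:

```shell
# A command that writes only to stderr (a stand-in for strace):
noisy() { echo "to stderr" >&2; }

# Plain pipe: grep's stdin gets nothing, because stderr bypasses the pipe
# (discarded here just to keep the demo's output tidy)
noisy 2>/dev/null | grep -c stderr    # prints "0"

# Send stderr into the pipe first (what |& does): now grep sees the text
noisy 2>&1 | grep -c stderr           # prints "1"
```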

That's pretty useful, but it's still too much. I really don't care to know about strace opening a bazillion files in /usr/share/locale/en_US/LC_MESSAGES, or libraries like /usr/lib/i386-linux-gnu/libp11-kit.so.0.

In this case, I'm looking for config files, so I really only want to know which files it opens in my home directory. Like this:

$ strace lftp sitename |& grep 'open.*/home/akkana'

In other words, show me just the lines that contain the word "open" followed later by the string "/home/akkana".

Digression: grep pipelines

Now, you might think that you could use a simpler pipeline with two greps:

$ strace lftp sitename |& grep open | grep /home/akkana

But that doesn't work -- nothing prints out. Why? Because grep buffers its output when it's writing to a pipe rather than a terminal, so when you pipe grep | grep, the second grep may wait until the first has collected quite a lot of output before it prints anything. (This comes up a lot with tail -f as well.) You can avoid that with

$ strace lftp sitename |& grep --line-buffered open | grep /home/akkana
but that's too much to type, if you ask me.

Back to that strace | grep

Okay, whichever way you grep for open and your home directory, it gives:

open("/home/akkana/.local/share/lftp/bookmarks", O_RDONLY|O_LARGEFILE) = 5
open("/home/akkana/.netrc", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/home/akkana/.local/share/lftp/rl_history", O_RDONLY|O_LARGEFILE) = 5
open("/home/akkana/.inputrc", O_RDONLY|O_LARGEFILE) = 5
Now we're getting somewhere! The file where it's getting its bookmarks is ~/.local/share/lftp/bookmarks -- and I probably can't use that to set my alias.

But wait, why doesn't it show lftp trying to open those other config files?

Using script to save the output

At this point, you might be sick of running those grep pipelines over and over. Most of the time, when I run strace, instead of piping it through grep I run it under script to save the whole output.

script is one of those poorly named, ungoogleable commands, but it's incredibly useful. It runs a subshell and saves everything that appears in that subshell, both what you type and all the output, in a file.

Start script, then run lftp inside it:

$ script /tmp/lftp.strace
Script started on Tue 26 Aug 2014 12:58:30 PM MDT
$ strace lftp sitename

After the flood of output stops, I type Ctrl-D or Ctrl-C to exit lftp, then another Ctrl-D to exit the subshell script is using. Now all the strace output was in /tmp/lftp.strace and I can grep in it, view it in an editor or anything I want.

So, what files is it looking for in my home directory, and why don't they show up as open attempts?

$ grep /home/akkana /tmp/lftp.strace

Ah, there it is! A bunch of lines like this:

access("/home/akkana/.lftprc", R_OK)    = -1 ENOENT (No such file or directory)
stat64("/home/akkana/.lftp", 0xbff821a0) = -1 ENOENT (No such file or directory)
mkdir("/home/akkana/.config", 0755)     = -1 EEXIST (File exists)
mkdir("/home/akkana/.config/lftp", 0755) = -1 EEXIST (File exists)
access("/home/akkana/.config/lftp/rc", R_OK) = 0

So I should have looked for access and stat as well as open. Now I have the list of files it's looking for. And, curiously, it creates ~/.config/lftp if it doesn't exist already, even though it's not going to write anything there.
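
With that lesson learned, one grep pass can catch all the relevant file-access syscalls at once. A sketch -- the sample lines below are lifted from the outputs shown above and stand in for a real strace log:

```shell
# Sample lines standing in for a real /tmp/lftp.strace:
cat > /tmp/lftp.strace <<'EOF'
open("/home/akkana/.inputrc", O_RDONLY|O_LARGEFILE) = 5
access("/home/akkana/.lftprc", R_OK)    = -1 ENOENT (No such file or directory)
stat64("/home/akkana/.lftp", 0xbff821a0) = -1 ENOENT (No such file or directory)
rt_sigaction(SIGTTOU, ...)              = 0
EOF

# One pass that catches open, access, stat and mkdir calls in $HOME:
grep -E 'open|access|stat|mkdir' /tmp/lftp.strace | grep -c /home/akkana
# prints "3": the open, access and stat64 lines all match
```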

So I created ~/.config/lftp/rc and put my alias there. Worked fine. And I was able to edit my bookmark in ~/.local/share/lftp/bookmarks later when I had a need for that. All thanks to strace.

August 29, 2014

Putting PackageKit metadata on the Fedora LiveCD

While working on the preview of GNOME Software for Fedora 20, one problem became very apparent: when you launched the “Software” application for the first time, it went and downloaded metadata and then built the libsolv cache. This could take a few minutes of looking at a spinner, and was a really bad first experience. We tried really hard to mitigate this, in that when we ask PackageKit for data we say we don’t mind the cache being old, but on a LiveCD or on first install there wasn’t any metadata at all.

So, what are we doing for F21? We can’t run packagekitd when constructing the live image, as it’s a D-Bus daemon and would be looking at the system root, not the live-cd root. Enter packagekit-direct. This is an admin-only tool (no man page) installed in /usr/libexec that is designed to be run when you want to use the PackageKit backend without getting D-Bus involved.

For Fedora 21 we’ll be running something like DESTDIR=$INSTALL_ROOT /usr/libexec/packagekit-direct refresh in fedora-live-workstation.ks. This means that when the Live image is booted we’ve got both the distro metadata to use, and the libsolv files already built. Launching gnome-software then takes 440ms until it’s usable.

Ansible Variables all of a Sudden Go Missing?

I’ve written a playbook which deploys a working development environment for some of our internal systems. I’ve tested it with various versions of RHEL. Yet when I ran it against a fresh install of Fedora it failed:

fatal: [] => {'msg': "One or more undefined variables: 'ansible_lsb' is undefined", 'failed': True}

It turns out that ansible gets its facts through different programs on the remote machine. If some of these programs are not available (in this instance it was lsb_release), the corresponding variables are not populated, resulting in this error.

So check if all variables you access are indeed available with:

$ ansible -m setup <yourhost>

August 28, 2014

Debugging a mysterious terminal setting

For the last several months, I repeatedly find myself in a mode where my terminal isn't working quite right. In particular, Ctrl-C doesn't work to interrupt a running program. It's always in a terminal where I've been doing web work. The site I'm working on sadly has only ftp access, so I've been using ncftp to upload files to the site, and git and meld to do local version control on the copy of the site I keep on my local machine. I was pretty sure the problem was coming from either git, meld, or ncftp, but I couldn't reproduce it.

Running reset fixed the problem. But since I didn't know what program was causing the problem, I didn't know when I needed to type reset.

The first step was to find out which of the three programs was at fault. Most of the time when this happened, I wouldn't notice until hours later, the next time I needed to stop a program with Ctrl-C. I speculated that there was probably some way to make zsh run a check after every command ... if I could just figure out what to check.

Terminal modes and stty -a

It seemed like my terminal was getting put into raw mode. In programming lingo, a terminal is in raw mode when characters from it are processed one at a time, and special characters like Ctrl-C, which would normally interrupt whatever program is running, are just passed like any other character.

You can list your terminal modes with stty -a:

$ stty -a
speed 38400 baud; rows 32; columns 80; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = ;
eol2 = ; swtch = ; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R;
werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
-parenb -parodd cs8 -hupcl -cstopb cread -clocal -crtscts
ignbrk -brkint ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl -ixon -ixoff
-iuclc -ixany -imaxbel iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
-isig icanon -iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt
echoctl echoke

But that's a lot of information. Unfortunately there's no single flag for raw mode; it's a collection of a lot of flags. I checked the interrupt character: yep, intr = ^C, just like it should be. So what was the problem?

I saved the output with stty -a >/tmp/stty.bad, then I started up a new xterm and made a copy of what it should look like with stty -a >/tmp/stty.good. Then I looked for differences: meld /tmp/stty.good /tmp/stty.bad. I saw these flags differing in the bad one: ignbrk ignpar -iexten -ixon, while the good one had -ignbrk -ignpar iexten ixon. So I should be able to run:

$ stty -ignbrk -ignpar iexten ixon
and that would fix the problem. But it didn't. Ctrl-C still didn't work.

Setting a trap, with precmd

However, knowing some things that differed did give me something to test for in the shell, so I could test after every command and find out exactly when this happened. In zsh, you do that by defining a precmd function, so here's what I did:

    precmd() {
        stty -a | fgrep -- -ignbrk > /dev/null
        if [ $? -ne 0 ]; then
            echo "STTY SETTINGS HAVE CHANGED \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!"
        fi
    }
Pardon all the exclams. I wanted to make sure I saw the notice when it happened.

And this fairly quickly found the problem: it happened when I suspended ncftp with Ctrl-Z.

stty sane and isig

Okay, now I knew the culprit, and that if I switched to a different ftp client the problem would probably go away. But I still wanted to know why my stty command didn't work, and what the actual terminal difference was.

Somewhere in my web searching I'd stumbled upon some pages suggesting stty sane as an alternative to reset. I tried it, and it worked.

According to man stty, stty sane is equivalent to

$ stty cread -ignbrk brkint -inlcr -igncr icrnl -iutf8 -ixoff -iuclc -ixany  imaxbel opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0 isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke

Eek! But actually that's helpful. All I had to do was get a bad terminal (easy now that I knew ncftp was the culprit), then try:

$ stty cread 
$ stty -ignbrk 
$ stty brkint
... and so on, trying Ctrl-C each time to see if things were back to normal. Or I could speed up the process by grouping them:
$ stty cread -ignbrk brkint
$ stty -inlcr -igncr icrnl -iutf8 -ixoff
... and so forth. Which is what I did. And that quickly narrowed it down to isig. I ran reset, then ncftp again to get the terminal in "bad" mode, and tried:
$ stty isig
and sure enough, that was the difference.
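
That "try half the flags, then half of what's left" procedure is basically a bisection, and can be sketched as a loop. This is my own sketch: is_bad is faked here, standing in for "apply this group with stty and see whether Ctrl-C is still broken", so the example is self-contained.

```shell
# Bisect a list of stty flags to find the one that makes the difference.
count() { printf '%s\n' "$1" | wc -w; }

flags="cread -ignbrk brkint -inlcr -igncr icrnl isig icanon"
culprit="isig"    # pretend this is the flag that differs

is_bad() {        # stand-in for: set this group, then test Ctrl-C
    case " $1 " in *" $culprit "*) return 0 ;; *) return 1 ;; esac
}

candidates=$flags
while [ "$(count "$candidates")" -gt 1 ]; do
    n=$(count "$candidates")
    half=$(( (n + 1) / 2 ))
    # split the remaining candidates into two groups
    first=$(printf '%s\n' "$candidates" | cut -d' ' -f1-"$half")
    rest=$(printf '%s\n' "$candidates" | cut -d' ' -f"$((half + 1))"-)
    # keep whichever group still shows the bad behavior
    if is_bad "$first"; then candidates=$first; else candidates=$rest; fi
done
echo "culprit: $candidates"
```

Each round halves the number of stty invocations you have to try by hand.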

I'm still not sure why meld didn't show me the isig difference. But if nothing else, I learned a bit about debugging stty settings, and about stty sane, which is a much nicer way of resetting the terminal than reset since it doesn't clear the screen.

Siggraph 2014 report

Blender Foundation/Institute was present at this year’s Siggraph again. In beautiful Vancouver we caught up with many old friends and industry relations, and made a lot of new connections.

Birds of a feather

As usual we had a packed room, with 120+ people attending both presentations. The best part, I still find, is the introduction round: hearing where everyone comes from and what they do with Blender is always cool – including surprising visitors from the (film) industry (like Image Engine, BMW, Digital Tutors).

My presentation slides (pdf) can be downloaded here.



Having a small booth on the show always works very well to get meetings organized and reach out to many people you would otherwise never meet. Instead of having to hunt for new contacts, they’re just dropping by :) Here are my notes and impressions from Siggraph in general.

  • This year we had a space next to the Krita Foundation – and of course immediately removed the dividing barrier to make it a 30 foot long presentation! Krita is really impressive these days – they’re definitely leaving GIMP behind in some areas, becoming the artists’ choice, especially for more serious work and productions.
  • We also had support from HP and CGCookie again this year – making it financially possible to do this presentation. Thanks a lot guys!
  • We showed OpenSubdiv – and (yes, really) Blender is the first of the 3d tools offering GPU OpenSubdiv in a release. The Pixar folks were very happy with that, and complimented us with the quality of feedback they had (our developer Sergey Sharybin fixed several issues for them).
  • OpenSubdiv is going to get an important upgrade with 3.0 this year, which will speed up initialization time 5 fold (or better).
  • Cycles – this is still our flagship project, and there’s serious interest from parties to pick this up for their own needs – including evaluating it as an in-house rendering engine for vfx. Can’t say more about this though…
  • Several visitors mentioned their interest – or active support – for the Blender 101 project (a release-compatible Blender version configured for learning purposes). Follow-ups with big companies and educational initiatives are being done now – will post about this when things get more tangible.
  • Path-tracers! Everyone makes ray-tracers these days, and AMD proudly showed their own OpenCL render engine – codenamed “Fire Render” for now. A release date is unknown. According to the AMD contact they would release it under a permissive open source license – maybe even public domain.
  • The render engine was fast and looked good – but they mainly showed the obligatory shiny car on a floor with a skydome. It’s like Cycles in 2011 – they’ve got a way to go before it’s a production render system (if they even intend that).
  • AMD’s OpenCL progress is still fuzzy – but the general message is that we should try to split up the render kernel in smaller parts – graphics cards just don’t like it and we might run into similar issues with CUDA sooner than later anyway.
  • I had a long chat with Khronos’ president on this topic too – according to him we shouldn’t give up so easily, an industry compatible OpenCL compiler shouldn’t have such problems with building Cycles. To be continued…
  • Meanwhile, Intel had their render engine on show too – Embree – with amazingly detailed furry character renders… nearly realtime, on a ‘regular’ 16-core Xeon! The Intel contact was amazed I hadn’t heard of it… but a bit later I learned we already use Embree’s BVH in Cycles.


  • Nvidia is pushing their remote rendering (Grid, “cloud”) offering even further – including offering us access to a 15 x 8 gpu system for testing and rendering of final frames for Gooseberry. We’ll do followups on that.
  • Microsoft Developer support walked by – finally a contact to try to get a couple of free MSVC Pro license keys from! First follow-ups have been done now.
  • Interesting open source compositing project: I had a long demo by the lead developer of Natron – an extremely promising (“Nuke style”) standalone compositor, using the Tuttle and OpenFX plugins. I found OpenFX very disappointing though… no open release should accept this level of crappy overlay watermarks.
  • The guys from Image Engine (Neill Blomkamp’s fx house) told me they work on an open compositor as well – Gaffer. Haven’t had time yet to inspect it.
  • X3D – the Web3D consortium walked by with quite a big delegation. They made a very warm and convincing plea for keeping good support for X3D files (“3d on the web”), and even for looking into further authoring of interactive 3d content that can be published efficiently on the web this way. X3D has the benefit that the display engines (JavaScript WebGL or others) can stay separated from the content… just like .html is compatible across all browsers.
  • Presto! A friendly Pixar employee gave us an extensive demo of their in-house animation tool. Of course it has real-time OpenSubdiv fur (yep, back to work!). But what I found most amazing was the Presto workflow, UI concepts and metaphors… it all feels quite familiar (selecting, responsiveness of the UI in general). It’s a bit like Maya too (but then done well ;) – any Blender animator would be up to speed with Presto in a few hours.

In general – Siggraph was amazing as usual, and Blender is still at the center of attention for many people, with interest from the industry steadily growing. We’re doing really well – and the support we had last year from companies (Epic, Valve, Google, Nvidia, HP, …) only helped to show how we as Blender Foundation and as Blender community can keep growing. In small steps, relaxed, having a lot of fun together, and staying focused on building the best free/open source 3D creation tools for our artists.

Special thanks to everyone who helped out on Siggraph: Sean, Jonathan, Wes, Mike, Francesco, Patrick, David, Joseph and Oscar!

Ton Roosendaal
August 28, 2014

August 27, 2014

5 UX Tips for Developers

I wrote an article walking through 5 UX Tips for Developers over at the Red Hat Developer blog. They are just some general suggestions for improving applications that are geared towards developers without training in UX. If this sounds like something that might be useful to you, feel free to take a look:

5 UX Tips for Developers

Thanks :)

August 26, 2014

GIMP 2.8.14 Released

Yesterday's 2.8.12 release had broken library versioning, so we had to roll out GIMP 2.8.14 today. The only change is the fixed libtool versioning. Please do not distribute any binaries of yesterday's broken 2.8.12 release, and get GIMP 2.8.14 using the torrent: http://download.gimp.org/pub/gimp/v2.8/gimp-2.8.14.tar.bz2.torrent

Open Flight Controllers

In my last multirotor themed entry I gave an insight into the magical world of flying cameras. I also more or less promised to write about the open source flight controllers that are out there. Here’s a few that I had the luck of laying my hands on. We’ll start with some acro FCs, which serve a very different purpose from the proprietary NAZA I started on. These are meant for fast and acrobatic flying, not for flying your expensive camera on a stabilized gimbal. Keep in mind, I’m still fairly inexperienced, so I don’t want to go into specifics and provide my settings just yet.

Blackout: Potsdam from jimmac on Vimeo.


The best thing to be said about the CC3D is that while being aimed at acro pilots, it’s relatively newbie friendly. The software is fairly straightforward. Getting the Qt app built, setting up the radio, tuning motors and tweaking gains is not going to make your eyes roll the way APM’s ground station would (more on that in a future post, maybe). The defaults are reasonable and help you achieve a maiden flight rather than a maiden crash. Updating to the latest firmware over the air is seamless.

A large number of receivers and connection methods are supported. Not only the classic PWM, or the more reasonable “one cable” CPPM method, but even Futaba’s proprietary SBUS can be used with the CC3D. I’ve flown it with the Futaba 8J, 14SG and even the Phantom radio (I actually quite like the compact receiver, and the sticks on the TX feel good; maybe it’s just that it’s what I started on). As you’re going to be flying proximity mostly, range is not an issue, unless you’re dealing with external interference, where a more robust frequency-hopping radio would be safer. Without a GPS “brake” or even a barometer, losing signal for even a second is fatal. It’s extremely nasty to get a perfect 5.8 video of your unresponsive quad plummeting to the ground :)

Overall a great board and software, and with so much competition, the board price has come down considerably recently. You can get non-genuine boards for around EUR 20-25 on eBay. You can learn more about the CC3D on the OpenPilot website.


Sounding very similar to the popular DJI flight controller, this open board is built around a 32-bit STM32 processor. Theoretically it could be used to fly a bit larger kites with features like GPS hold. You’re not limited to the popular quad or hexa setups with it either; you can go really custom by defining your own motor mix. But you’d be stepping into the realm of only a few, and I don’t think I’d trust my camera equipment to a platform that hasn’t been so extensively tested.

Initially I didn’t manage to get the cheap acro variant, ideal for the minis, so I got the ‘bells & whistles’ edition, only missing the GPS module. The mag compass and air pressure barometer are already on the board, even though I found no use for altitude hold (BARO). You’re still going to worry about momentum and wind, so reaching for those goggles mid-flight is still not going to be any less difficult than just having it stabilized.

If you don’t count some YouTube videos, there’s not a lot of handholding for the Naze32. People assume you have prior experience with similar FCs. There are multiple choices of configuration tools, but I went for the most straightforward one — a Google Chrome/Chromium Baseflight app. No compiling necessary. It’s quite bare bones, which I liked a lot. A few reasonably styled, aligned boxes and a CLI are way easier to navigate than the non-searchable table with bubblegum styling that APM provides, for example.

One advanced technique that caught my eye is ESC calibration, as the typical process is super flimsy and tedious. To set the full range of speeds based on your radio, you usually need to make sure to provide power to the RX and set the top and bottom throttle levels on each ESC individually. With this FC, you can actually set the throttle levels from the CLI, calibrating all ESCs at the same time. Very clever and super useful.

Another great feature is that you can have up to three settings profiles, depending on the load, wind conditions and the style you’re going for. Typically when flying proximity, between trees and under park benches, you want very responsive controls at the expense of fluid movement. On the other hand, if you plan on going up and fast and pretending to be a plane (or a bird), you really need that fluid, non-jittery movement. It’s not a setting you change mid-flight, using up a channel, but rather something you choose before arming.

To do it, you hold the throttle down and yaw to the left, and with the elevator/aileron stick you choose the mode. Left is for preset 1, up is for preset 2 and right is for preset 3. Going down with the pitch will recalibrate the IMU. It’s good to solder on a buzzer that will help you find a lost craft when you trigger it with a spare channel (it can beep on low voltage too). The same buzzer will beep for selecting profiles as well.

As for actual flying characteristics, the raw rate mode, which is a little tricky to master (and I still have trouble flying 3rd person with it), is very solid. It feels like a much larger craft, very stable. There’s also quite a nice feature in the form of HORI mode, where you get stabilized flight (the kite levels itself when you don’t provide controls), but no limit on the angle, so you’re still free to do flips. I can’t say I’ve mastered PID tuning to really get the kind of control over the aircraft I would want. Regardless of tweaking the control characteristics, you won’t get a nice fluid video flying HORI or ANGLE mode, as the self-leveling will always do a little jitter to compensate for wind or inaccurate gyro readings, which seems not to be there when flying rate. Stabilizing the footage in post gets rid of it mostly, but not perfectly:

Minihquad in Deutschland

You can get the plain acro version for about EUR 30, which is incredible value for a solid FC like this. I have a lot of practice ahead to truly get to that fluid, fast, plane-like flight that drew me into these miniquads. Check out some of these masters below:

APM and Sparky next time. Or perhaps you’d be more interested in the video link instead first? Let me know in the comments.

Update: Turns out the NAZE32 supports many serial protocols apart from CPPM, such as Futaba SBUS and Graupner SUMD.

August 25, 2014

Mon 2014/Aug/25

  • The Safety and Privacy team

    During GUADEC we had a Birds-of-a-feather session (BoF) for what eventually became the Safety Team. In this post I'll summarize the very raw minutes of the BoF.

    Locks on bridge

    What is safety in the context of GNOME?

    Matthew Garrett's excellent keynote at GUADEC made a point that GNOME should be the desktop that takes care of and respects the user, as opposed to being just a vehicle for selling stuff (apps, subscriptions) to them.

    I'll digress for a bit to give you an example of "taking care and respecting the user" in another context, which will later let me frame this for GNOME.

    Safety in cities

    In urbanism circles, there is a big focus on making streets safe for everyone, safe for all the users of the street. "Safe" here means many things:

    • Reducing the number of fatalities due to traffic accidents.
    • Reducing the number of accidents, even if they are non-fatal, because they waste everyone's time.
    • Making it possible for vulnerable people to use the streets: children, handicapped people, the elderly.
    • Reducing muggings and crime on the streets.
    • Reducing the bad health effects of a car-centric culture, where people can't walk to places they want to be.

    It turns out that focusing on safety automatically gives you many desirable properties in cities — better urbanism, not just a dry measure of "streets with few accidents".

    There is a big correlation between the speed of vehicles and the proportion of fatal accidents. Cities that reduce maximum speeds in heavily-congested areas will get fewer fatal accidents, and fewer accidents in general — the term that urbanists like to use is "traffic calming". In Strasbourg you may have noticed the signs that mark the central island as a "Zone 30", where 30 Km/h is the maximum speed for all vehicles. This lets motor vehicles, bicycles, and pedestrians share the same space safely.

    Zone 30 End of Zone 30

    Along with traffic calming, you can help vulnerable people in other ways. You can put ramps on curbs where you cross the street; this helps people on wheelchairs, people carrying children on strollers, people dragging suitcases with wheels, skaters, cyclists. On sidewalks you can put tactile paving — tiles with special reliefs so blind pedestrians can feel where the "walking path" is, or where the sidewalk is about to end, or where there is a street crossing. You can make traffic lights for pedestrians emit a special sound when it is people's turn to cross the street — this helps the blind as well as those who are paying attention to their cell phone instead of the traffic signals. You can make mass transit accessible to wheelchairs.

    Once you have slow traffic, accessible mass transit, and comfortable/usable sidewalks, you get more pedestrians. This leads to more people going into shops. This improves the local economy, and reduces the amount of money and time that people are forced to waste in cars.

    Once you have people in shops, restaurants, or cafes at most times of the day, you get fewer muggings — what Jane Jacobs would call "eyes on the street".

    Once people can walk and bike safely to places they actually want to go (the supermarket, the bakery, a cafe or a restaurant, a bank), they automatically get a little exercise, which improves their health, as opposed to sitting in a car for a large part of the day.

    Etcetera. Safety is a systemic thing; it is not something you get by doing one single thing. Not only do you get safer streets; you also get cities that are more livable and human-scaled, rather than machine-scaled for motor vehicles.

    And this brings us to GNOME.

    Safety in GNOME

    Strasbourg     Cathedral, below the rose

    "Computer security" is not very popular among non-technical users, and for good reasons. People have friction with sysadmins or constrained systems that don't let them install programs without going through bureaucratic little processes. People get asked for passwords for silly reasons, like plugging a printer to their home computer. People get asked questions like "Do you want to let $program do $thing?" all the time.

    A lot of "computer security" is done from the viewpoint of the developers and the administrators. Let's keep the users from screwing up our precious system. Let's disallow people from doing things by default. Let's keep control for ourselves.

    Of course, there is also a lot of "computer security" that is desirable. Let's put a firewall so that vandals can't pwn your machine, and so that criminals don't turn your computer into a botnet's slave. Let's keep rogue applications (or rogue users) from screwing up the core of the system. Let's authenticate users so a criminal can't access your bank account.

    Security is putting an armed guard at the entrance of a bank; safety is having enough people in the streets at all times of the day so you don't need the police most of the time.

    Security is putting iron bars in low-storey windows so robbers can't get in easily; safety is putting iron railings in high-storey balconies so you don't fall over.

    Security is disallowing end-user programs from reading /etc/shadow so they can't crack your login passwords; safety is not letting a keylogger run while the system is asking you for your password. Okay, it's security as well, but you get the idea.

    Safety is doing things that prevent harm to users.

    Strasbourg     Cathedral, door detail

    Encrypt all the things

    A good chunk of the discussion during the meeting at GUADEC was about existing things that make our users unsafe, or that inadvertently reveal users' information. For example, we have some things that don't use SSL/TLS by default. Gnome-weather fetches the weather information over unencrypted HTTP. This lets snoopers figure out your current location, or your planned future locations, or the locations where people related to you might live. (And in more general terms, the weather forecasts you check are nobody's business but yours.)

    Gnome-music similarly fetches music metadata over an unencrypted channel. In the best case it lets a snooper know your taste in music; in the worst case it lets someone correlate your music downloads with your music purchases — the difference is a liability to you.

    Gnome-maps fetches map tile data over an unencrypted connection. This identifies places you may intend to travel; it may also reveal your location.

    Strasbourg     Cathedral, stained glass window

    But I don't have a nation-state adversary

    While the examples above may seem far-fetched, they go back to one of the biggest problems with the Internet: unencrypted content is being used against people. You may not have someone to hide from, but you wouldn't want to be put in an uncomfortable situation just from using your software.

    You may not be a reckless driver, but you still put on seatbelts (and you would probably not buy a car without seatbelts).

    We are not trying to re-create Tails, the distro that tries to maintain your anonymity online, but we certainly don't want to make things easy for the bad guys.

    During the meeting we agreed to reach out to the Tails / Tor people so that they can tell us where people's identifying information may leak inadvertently; if we can fix these things without a specialized version of the software, everyone will be safer by default.

    Sandbox all the things

    While auditing code, or changing code to use encrypted connections, can be ongoing "everyday" work, there's a more interesting part to all of this. We are moving to sandboxed applications, where running programs cannot affect each other, or where an installed program doesn't affect the installed dependencies for other programs, or where programs don't have access to all your data by default. See Allan Day's posts on sandboxed apps for a much more detailed explanation of how this will work (parts one and two).

    We have to start defining the service APIs that will let us keep applications isolated from the user's personal data, that is, to avoid letting programs read all of your home directory by default.

    Some services will also need to do scrubbing of sensitive data. For example, if you want to upload photos somewhere public, you may want the software to strip away the geolocation information, the face-recognition data, and the EXIF data that reveals what kind of expensive camera you have. Regular users are generally not aware that this information exists; we can keep them safer by asking for their consent before publishing that information.
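
    The scrubbing step described above can be sketched in code. This is purely an illustration, not GNOME's actual service API: strip_exif is a hypothetical helper that walks a well-formed JPEG's marker segments and drops the APP1 segment, which is where EXIF metadata (including GPS coordinates) lives. A real scrubber would also handle XMP, IPTC, and thumbnail data with a proper metadata library.

    ```python
    # Illustrative sketch only, not GNOME's actual service API:
    # walk a JPEG's marker segments and copy everything except APP1,
    # the segment that carries EXIF metadata (including GPS data).
    # Assumes a well-formed JPEG up to the Start Of Scan marker.
    def strip_exif(jpeg_bytes):
        assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (no SOI marker)"
        out = bytearray(b"\xff\xd8")
        i = 2
        while i < len(jpeg_bytes):
            marker = jpeg_bytes[i + 1]
            if marker == 0xDA:  # Start Of Scan: image data follows
                out += jpeg_bytes[i:]  # copy the rest verbatim and stop
                break
            # Segment length includes its own two bytes, not the marker.
            length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
            if marker != 0xE1:  # keep every segment except APP1 (EXIF)
                out += jpeg_bytes[i:i + 2 + length]
            i += 2 + length
        return bytes(out)
    ```

    The point of putting this behind a service API is that applications never see the unscrubbed bytes at all; consent is asked once, in the sharing dialog, and the scrubbing happens on the user's behalf.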

    Strasbourg     Cathedral, floor grate

    Consent, agency, respect

    A lot of uncomfortable, inconvenient, or unsafe software is like that because it doesn't respect you.

    Siloed software that doesn't let you export your data? It denies you your agency to move your data to other software.

    Software that fingerprints you and sends your information to a vendor? It doesn't give you informed consent. Or as part of coercion culture, it sneakily buries that consent in something like, "by using this software, you agree to the Terms of Service" (terms which no one ever bothers to read, because frankly they are illegible).

    Software that sends your contact list to the vendor so it can spam them? This is plain lack of respect, lack of consent, and more coercion, as those people don't want to be spammed in the first place (and you don't want to be the indirect cause).

    Allan's second post has a key insight:

    [...] the primary purpose of posing a security question is to ascertain that a piece of software is doing what the user wants it to do, and often, you can verify this without the user even realising that they are being asked a question for security purposes.

    We can take this principle even further. The moment when you ask a security question can be an opportunity to present useful information or controls – these moments can become a valuable, useful, and even enjoyable part of the experience.

    In a way, enforcing the service APIs upon applications is a way of ensuring that they ask for your consent to do things, and that they respect your agency in doing things which naive security-minded software may disallow "for security reasons".

    Here is an example:

    Agency: "I want to upload a photo"
    Safety: "I don't want my privacy violated"
    Consent: "Would you like to share geographical information, camera information, tags?"


    Pattern Language

    We can get very interesting things if we distill these ideas into GNOME's Pattern Language.

    Assume we had patterns for Respect the user's agency, for Obtain the user's consent, for Maintain the user's safety, and for Respect the user's privacy. These are not written yet, but they will be, shortly.

    We already have prototypal patterns called Support the free ecosystem and User data manifesto.

    Pattern languages start being really useful when you have a rich set of connections between the patterns. In the example above about sharing a photo, we employ the consent, privacy, and agency patterns. What if we add Support the free ecosystem to the mix? Then the user interface to "paste a photo into your instant-messaging client" may look like this:

    Mockup of 'Insert photos' dialog

    Note the defaults:

    • Off for sharing metadata which you may not want to reveal by default: geographical information, face recognition info, camera information, tags. This is the Respect the user's privacy pattern in action.

    • On for sharing the license information, and to let you pick a license right there. This is the Support the free ecosystem pattern.

    If you dismiss the dialog box with "Insert photos", then GNOME would do two things: 1) scrub the JPEG files so they don't contain metadata which you didn't choose to share; 2) note in the JPEG metadata which license you chose.

    In this case, Empathy would not communicate with Shotwell directly — applications are isolated. Instead, Empathy would make use of the "get photos" service API, which would bring up that dialog, and which would automatically run the metadata scrubber.


August 24, 2014

One of them Los Alamos liberals

[Adopt-a-Highway: One of them Los Alamos liberals] I love this Adopt-a-Highway sign on Highway 4 on the way back down from the Jemez.

I have no idea who it is (I hope to find out, some day), but it gives me a laugh every time I see it.

August 23, 2014

Going to Akademy 2014

Akademy 2014

In two weeks I’m going to Akademy 2014, the annual summit of the KDE community.
This year it will take place in Brno, at the University of Technology, from September 6th to 12th. As usual, the main talks will be during the first two days (6th-7th), then we will have a week filled with Birds-of-a-feather sessions, workshops and meetings. I’ll present a workshop about “Creating interface mockups and textures with KDE software” on Monday the 8th.

Akademy is a really important event for the KDE community and for Free Software more generally. It can happen thanks to the support of some generous sponsors. You can become one here, or make a donation to support KDE throughout the year here.

See you @ Akademy 2014!

Ideal kitchen?

I'm always quite fascinated by medieval-like kitchens; I'd like to have one for myself one day... But until the moment one is rich enough to buy an old medieval farm-castle in France, one must stick with dreaming. I also happened to need a new background for the Inventário de receitas. So (after I saw this amazing...

August 22, 2014

Rocker Concept

Anastasia Majzhegisheva brings a quick concept of the Rocker character. Rocker is the husband of Mechanic-Sister, and… well, you can see he is a tough guy.

Artwork by Anastasia Majzhegisheva

P.S. BTW, Anastasia has a Google+ page; you can find a lot of her artwork there!

August 21, 2014

Call for Content: Blenderart Magazine issue #46

We are ready to start gathering up tutorials, making of articles and images for Issue # 46 of Blenderart Magazine.

The theme for this issue is FANtastic FANart

This is going to be a seriously fun issue for everyone to take part in. We are going to honor and pay homage to our favorite artists by creating an issue full of Fanart.

At some point we all give in to the overwhelming urge to re-create our favorite characters, logos etc. We have no intention of claiming the idea as our own, we are simply practicing our craft, improving our skills and showing our love to those artists whose work inspires our own.

So in this issue we are looking for tutorials or “making of” articles on:

  • Personal Fanart projects that you have done to practice your skills and/or for fun
  • A nice short summary of why this project inspired you and what you learned

*warning: lack of submissions could result in an entire issue of strange sculpting experiments, half-completed models and a gallery filled with random bad sketches by yours truly…. :P …… goes off to start filling sketchbook with hundreds of stick figures, just in case. :P


Send in your articles to sandra
Subject: “Article submission Issue # 46 [your article name]”

Gallery Images

As usual you can also submit your best renders based on the theme of the issue. The theme of this issue is “FANtastic FANart”. Please note if the entry does not match with the theme it will not be published.

Send in your entries for gallery to gaurav
Subject: “Gallery submission Issue # 46″

Note: Image width should be 1024px at max.

Last date for submissions: October 5, 2014.

Good luck!
Blenderart Team

Papagayo packages for Windows

For a long time we have received many requests to provide Papagayo packages for Windows. I am aware of the Papagayo 2.0 bump from the original developers, and I am considering porting the changes from our custom version of Papagayo into their version as soon as possible. Unfortunately, right now I’m busy with other priorities, so it’s very hard to tell when this will actually be done.

Papagayo screenshot (Windows)

But there is good news! As a result of our recent collaboration with a small animation studio in Novosibirsk, we have come up with a build of Papagayo which works on Windows. The build is pretty crappy, but it is working.

Here’s the download link: papagayo-1.2-5-win.zip

Note: To start the application, unpack the archive and run the “papagayo.bat” file.

Papagayo update

I am happy to announce that we have made a small update to our Papagayo packages, which fixes an issue with the FPS settings value. In previous versions the FPS value was internally messed up when you loaded a new audio file, which led to incorrect synchronization results. This update has no other changes and is recommended for all users.

Download updated packages

August 20, 2014

Mouse Release Movie

[Mouse peeking out of the trap] We caught another mouse! I shot a movie of its release.

Like the previous mouse we'd caught, it was nervous about coming out of the trap: it poked its nose out, but didn't want to come the rest of the way.

[Mouse about to fall out of the trap] Dave finally got impatient, picked up the trap and turned it opening down, so the mouse would slide out.

It turned out to be the world's scruffiest mouse, which immediately darted toward me. I had to step back and stand up to follow it on camera. (Yes, I know my camera technique needs work. Sorry.)

[scruffy mouse, just released from trap] [Mouse bounding away] Then it headed up the hill a ways before finally lapsing into the high-bounding behavior we've seen from other mice and rats we've released. I know it's hard to tell in the last picture -- the photo is so small -- but look at the distance between the mouse and its shadow on the ground.

Very entertaining! I don't understand why anyone uses killing traps -- even if you aren't bothered by killing things unnecessarily, the entertainment we get from watching the releases is worth any slight extra hassle of using the live traps.

Here's the movie: Mouse released from trap. [Mouse released from trap]

August 17, 2014

Krita booth at Siggraph 2014

This year, for the first time, we had a Krita booth at Siggraph. If you don’t know about it, Siggraph is the biggest yearly Animation Festival, which happened this year in Vancouver.
We were four people holding the booth:

  • Boudewijn Rempt (the maintainer of the Krita project)
  • Vera Lukman (the original author of our popup-palette)
  • Oscar Baechler (a cool Krita and Blender user)
  • and me ;) (spreading the word about Krita training; more about this in a next post…)

Krita team at Siggraph

Together with Oscar and Vera, we’ve been doing live demos of Krita’s coolest and most impressive features.

Krita booth

We were right next to the Blender booth, which made for a nice free-and-open-source corner. It was a good occasion for me to meet more people from the Blender team.

Krita and Blender booth

People were all really impressed, from those who discovered Krita for the first time to those who already knew about it or even already used it.
As we have already started working hard on integrating Krita into VFX workflows, with support for high-bit-depth painting on OpenEXR files, OpenColorIO color management, and even animation support, it was a good occasion to showcase these features and get appropriate feedback.
Many studios expressed their interest in integrating Krita into their production pipelines, replacing the less ideal solutions they are currently using…
And of course we met lots of digital painters, such as illustrators, concept artists, storyboarders and texture artists, who want to use Krita now.
Reaching these kinds of users was really our goal, and I think it was a success.

There was also a Birds of a Feather event with all the open-source projects related to VFX that were present there, which was full of great encounters.
I even got to meet the guy who is looking into fixing the OCIO bug I reported a few days before; that was awesome!

OpenSource Bird Of Feather

So hopefully we’ll see some great users coming to Krita in the next weeks/months. As usual, stay tuned ;)

*Almost all photos here by Oscar Baechler; many more photos here or here.

August 15, 2014

DXF export of FreeCAD Drawing pages

I just upgraded the code that exports Drawing pages in FreeCAD, and it now works much better, and much more the way you would expect: mount your page fully in FreeCAD, then export it to DXF or DWG with the press of a button. Before, doing this would export the SVG code from the Drawing page,...

Time-lapse photography: stitching movies together on Linux

[Time-lapse clouds movie on youtube] A few weeks ago I wrote about building a simple Arduino-driven camera intervalometer to take repeat photos with my DSLR. I'd been entertained by watching the clouds build and gather and dissipate again while I stepped through all the false positives in my crittercam, and I wanted to try capturing them intentionally so I could make cloud movies.

Of course, you don't have to build an Arduino device. A search for timer remote control or intervalometer will find lots of good options around $20-30. I bought one so I'll have a nice LCD interface rather than having to program an Arduino every time I want to make movies.

Setting the image size

Okay, so you've set up your camera on a tripod with the intervalometer hooked to it. (Depending on how long your movie is, you may also want an external power supply for your camera.)

Now think about what size images you want. If you're targeting YouTube, you probably want to use one of YouTube's preferred settings, bitrates and resolutions, perhaps 1280x720 or 1920x1080. But you may have some other reason to shoot at higher resolution: perhaps you want to use some of the still images as well as making video.

For my first test, I shot at the full resolution of the camera. So I had a directory full of big ten-megapixel photos with filenames ranging from img_6624.jpg to img_6715.jpg. I copied these into a new directory, so I didn't overwrite the originals. You can use ImageMagick's mogrify to scale them all:

mogrify -scale 1280x720 *.jpg

I had an additional issue, though: rain was threatening and I didn't want to leave my camera at risk of getting wet while I went dinner shopping, so I moved the camera back under the patio roof. But with my fisheye lens, that meant I had a lot of extra house showing and I wanted to crop that off. I used GIMP on one image to determine the x, y, width and height for the crop rectangle I wanted. You can even crop to a different aspect ratio from your target, and then fill the extra space with black:

mogrify -crop 2720x1450+135+315 -scale 1280 -gravity center -background black -extent 1280x720 *.jpg

If you decide to rescale your images to an unusual size, make sure both dimensions are even, otherwise avconv will complain that they're not divisible by two.
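
If you want to check that programmatically, rounding each dimension down by its remainder modulo two guarantees an even size. A tiny sketch (make_even is just an illustrative helper, not part of ImageMagick or avconv):

```python
# Round target dimensions down to the nearest even number,
# since libx264 requires width and height divisible by two.
def make_even(width, height):
    return width - width % 2, height - height % 2

print(make_even(1281, 719))  # (1280, 718)
```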

Finally: Making your movie

I found lots of pages explaining how to stitch together time-lapse movies using mencoder, and a few using ffmpeg. Unfortunately, in Debian, both are deprecated. Mplayer has been removed entirely. The ffmpeg-vs-avconv issue is apparently a big political war, and I have no position on the matter, except that Debian has come down strongly on the side of avconv and I get tired of being nagged every time I run a program. So I needed to figure out how to use avconv.

I found some pages on avconv, but most of them didn't actually work. Here's what worked for me:

avconv -f image2 -r 15 -start_number 6624 -i 'img_%04d.jpg' -vcodec libx264 time-lapse.mp4

Adjust the start_number and filename appropriately for the files you have.
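
Another option is to copy the frames into a fresh zero-based, gap-free sequence first, so -start_number can simply be 0 no matter how the camera numbered them. A sketch (the frame_%04d.jpg naming and the renumber_frames helper are just examples, nothing avconv requires):

```python
import glob
import shutil

# Copy img_*.jpg frames into a gap-free frame_0000.jpg,
# frame_0001.jpg, ... sequence, leaving the originals untouched.
def renumber_frames(pattern="img_*.jpg", out_fmt="frame_%04d.jpg"):
    for n, src in enumerate(sorted(glob.glob(pattern))):
        shutil.copy(src, out_fmt % n)
```

After that, the avconv command above becomes -start_number 0 -i 'frame_%04d.jpg'.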

Avconv produces an mp4 file suitable for uploading to youtube. So here is my little test movie: Time Lapse Clouds.

August 13, 2014


I should really write more about all the little open-source tools we use everyday here in our architecture studio. There are your usual CAD / BIM / 3D applications, of course, that you know a bit of if you follow this blog, but one of the tools that really helps us a lot in our...

August 12, 2014

Native OSX packages available for testing

We have made new packages of Synfig that run natively on OSX and don't require X11 to be installed. Help us test them!...

August 10, 2014

Synfig website goes international

We are happy to announce that our main website is going to provide its content translated into several languages....

Sphinx Moths

[White-lined sphinx moth on pale trumpets] We're having a huge bloom of a lovely flower called pale trumpets (Ipomopsis longiflora), and it turns out that sphinx moths just love them.

The white-lined sphinx moth (Hyles lineata) is a moth the size of a hummingbird, and it behaves like a hummingbird, too. It flies during the day, hovering from flower to flower to suck nectar, being far too heavy to land on flowers like butterflies do.

[Sphinx moth eye] I've seen them before, on hikes, but only gotten blurry shots with my pocket camera. But with the pale trumpets blooming, the sphinx moths come right at sunset and feed until near dark. That gives a good excuse to play with the DSLR, telephoto lens and flash ... and I still haven't gotten a really sharp photo, but I'm making progress.

Check out that huge eye! I guess you need good vision in order to make your living poking a long wiggly proboscis into long skinny flowers while laboriously hovering in midair.

Photos here: White-lined sphinx moths on pale trumpets.

August 09, 2014

A bit of FreeCAD BIM work

This afternoon I did some BIM work in FreeCAD for a house project I'm doing with Ryan. We're using this as a test platform for IFC roundtripping between Revit and FreeCAD. So far the results are mixed; lots of information gets lost on the way, obviously, but on the other hand I'm secretly pretty happy...

August 08, 2014

Siggraph 2014

Meet us at the SIGGRAPH 2014 conference in Vancouver!

Sunday 10 August, Birds of a feather, Convention Center East, room 3

  • 3 PM: Blender Foundation and community meeting
    Ton Roosendaal talks about last year’s results and plans for next year.
    Feedback welcome!
  • 4.30 PM: Blender Artist Showcase and demos
    Everyone’s welcome to show 5-10 minutes of work you did with Blender.
    Well-known artists have already been invited, like Jonathan Williamson (BlenderCookie), Sean Kennedy (formerly of R&H), Mike Pan (author of the BGE book), etc.

Tuesday 12 – Thursday 14 August: Tradeshow exhibit

  • Exhibit hall, booth #545
  • FREE TICKETS! Go to this URL and use promotion code BL122947
  • And meet with our neighbors: Krita Foundation.
  • Tuesday 9.30 AM – 6 PM, Wednesday 9.30 AM – 6 PM, Thursday 9.30 AM – 3.30 PM
  • Exhibit has been kindly sponsored by HP and BlenderCookie.

Daily meeting point after show hours to get together informally for a drink or food:

  •  Rogue Kitchen & Wetbar in Gastown. 601 W. Cordova Street
    (Walk out of the convention center to the east, to the train station, 10 minutes.)


  • Available in three loud colors – the crew outfit for this year. We’ll sell them for CAD 20 at the BOF and booth.


August 07, 2014


  • If you have an orientation sensor in your laptop that works under Windows 8, this tool might be of interest to you.
  • Mattias will use that code as a base to add Compass support to Geoclue (you're on the hook!)
  • I've made a hack to load games metadata using Grilo and Lua plugins (everything looks like a nail when you have a hammer ;)
  • I've replaced a Linux phone full of binary blobs by another Linux phone full of binary blobs
  • I believe David Herrmann missed out on asking for a VT, and getting something nice in return.
  • Cosimo will be writing some more animations for me! (and possibly for himself)
  • I now know more about core dumps and stack traces than I would want to, but far less than I probably will in the future.
  • Get Andrea to approve Timm Bädert's git account so he can move Corebird to GNOME. Don't forget to try out Charles, Timm!
  • My team won FreeFA, and it's not even why I'm smiling ;)
  • The cathedral has two towers!
Unfortunately for GUADEC guests, Bretzel Airlines opened its new (and first) shop on Friday, the last day of the BoFs.

(Lovely city, great job from Alexandre, Nathalie, Marc and all the volunteers, I'm sure I'll find excuses to come back :)

Check out Flock Day 2’s Virtual Attendance Guide on Fedora Magazine


I’ve posted today’s (Thursday’s) guide to Flock talks over on Fedora Magazine:

Guide to Attending Flock Virtually: Day 2

The guide to days 3 and 4 will follow, of course. Enjoy!

August 06, 2014

Guide to Attending Flock Virtually: Day 1


Flock, the Fedora Contributor Conference, starts tomorrow morning in Prague, the Czech Republic, and you can attend – no matter where in the world you are. (Although admittedly, depending on where you are, you may need to give up on some sleep if you intend to attend live ;-) )

Here’s a quick schedule of tomorrow’s talks for remote attendees:

Wednesday, 6 August 2014

6:45 AM UTC / 8:45 AM Prague / 2:45 AM Boston

Opening: Fedora Project Leader (Matthew Miller)

7:00 AM UTC / 9:00 AM Prague / 3:00 AM Boston

Keynote: Free And Open Source Software In Europe: Policies And Implementations (Gijs Hillenius)

8:00 AM UTC / 10:00 AM Prague / 4:00 AM Boston

Better Presentation of Fonts in Fedora (Pravin Satpute)

Contributing to Fedora SELinux Policy (Michael Scherer)

FedoraQA: You are important (Amita Sharma)

9:00 AM UTC / 11:00 AM Prague / 5:00 AM Boston

Fedora Magazine (Chris Anthony Roberts)

State of Copr Build Service (Miroslav Suchý)

Taskotron and Me (Tim Flink)

Where’s Wayland (Matthias Clasen)

12:00 PM UTC / 2:00 PM Prague / 8:00 AM Boston

Fedora Workstation – Goals, Philosophy, and Future (Christian F.K. Schaller)

Procrastination makes you better: Life of a remotee (Flavio Percoco)

Python 3 as Default (Bohuslav Kabrda)

Wayland Input Status (Hans de Goede)

1:00 PM UTC / 3:00 PM Prague / 9:00 AM Boston

Evolving the Fedora Updates Process (Luke Macken)

Fedora Future Devices (Wolnei Tomazelli Junior)

Outreach Program for Women: Lessons in Collaboration
(Marina Zhurakhinskaya)

Predictive Input Methods (Anish Patel)

2:00 PM UTC / 4:00 PM Prague / 10:00 AM Boston

Open Communication and Collaboration Tools for Humans (Sayan Chowdhury, Ratnadeep Debnath)

State of the Fedora Kernel (Josh Boyer)

The Curious Case of Fedora Freshmen (aka Issue #101) (Sarup Banskota)

UX 101: Practical Usability Methods Anyone Can Use (Karen Tang)

3:00 PM UTC / 5:00 PM Prague / 11:00 AM Boston

Fedora Ambassadors: State of the Union (Jiří Eischmann)

Hyperkitty: Past, Present, and Future (Aurélien Bompard)

Kernel Tuning (John H Dulaney)

Release Engineering and You (Dennis Gilmore)

4:00 PM UTC / 6:00 PM Prague / 12:00 PM Boston

Advocating Fedora.next (Christoph Wickert)

Documenting Software with Mallard (Jaromir Hradilek, Petr Kovar)

Fedora Badges and Badge Design (Marie Catherine Nordin, Chris Anthony Roberts)

How is the Fedora kernel different? (Levente Kurusa)

Help us cover these talks!


We’re trying to get as full coverage as possible of these talks on Fedora Magazine. You can help us out, even if you are a remote attendee. If any of the talks above are at a reasonable time in your timezone and you’d be willing to take notes and draft a blog post for Fedora Magazine, please sign up on our wiki page for assignments! You can also contact Ryan Lerch or Chris Roberts for more information about contributing.

August 05, 2014

(lxml) XPath matching against nodes with unprintable characters

Sometimes you want to clean up HTML by removing tags with unprintable characters in them (whitespace, non-breaking space, etc.). Sometimes encoding this back and forth results in weird characters when the HTML is rendered. Anyway, here is a snippet you might find useful:

def clean_empty_tags(node):
    """Finds all tags with a whitespace in them. They come out broken and
    we won't need them anyway."""
    for empty in node.xpath("//p[.='\xa0']"):
        empty.getparent().remove(empty)
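The same idea can be sketched with only the standard library, if you want to try it without lxml installed. Note this is my stdlib approximation, not the original snippet: ElementTree has no getparent(), so we walk potential parents explicitly, and its text-equality predicate needs Python 3.7+.

```python
import xml.etree.ElementTree as ET

def clean_empty_tags(root):
    # ElementTree elements don't know their parent, so iterate over
    # every potential parent and drop matching <p> children.
    for parent in list(root.iter()):
        for empty in parent.findall("p[.='\xa0']"):
            parent.remove(empty)

doc = ET.fromstring("<div><p>\xa0</p><p>keep me</p></div>")
clean_empty_tags(doc)
print(ET.tostring(doc, encoding="unicode"))  # the \xa0 paragraph is gone
```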

FreeCAD Spaces

I just finished to give a bit of polish to the Arch Space tool of FreeCAD. Until now it was a barely geometric entity, that represents a closed space. You can define it buy building it from an existing solid shape, or from selected boundaries (walls, floors, whatever). Now I added a bit of visual goodness....

Privacy Policy

I got an envelope from my bank in the mail. The envelope was open and looked like the flap had never been sealed.

Inside was a copy of their privacy policy. Nothing else.

The policy didn't say whether their privacy policy included sealing the envelope when they send me things.

Clarity in GIMP (Local Contrast + Mid Tones)

I was thinking about other ways I fiddle with Luminosity Masks recently, and I thought it might be fun to talk about some other ways to use them when looking at your images.

My previous ramblings about Luminosity Masks:
The rest of my GIMP tutorials can be found here:

If you remember from my previous look at Luminosity Masks, the idea is to create masks that correspond to different luminous levels in your image (roughly the lightness of tones). Once you have these masks, you can make adjustments to your image and isolate their effect to particular tonal regions easily.
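As a rough illustration of that construction, here is a numpy sketch of the common self-multiplication scheme (popularized by Tony Kuyper; the GIMP script used later may differ in detail), treating the image as luminance values in [0, 1]:

```python
import numpy as np

def luminosity_masks(lum):
    """lum: float array of luminance in [0, 1]; returns a subset of the
    masks, keyed like the script's channels (for illustration only)."""
    masks = {"L": lum, "D": 1.0 - lum}
    masks["LL"] = masks["L"] ** 2    # intersecting L with itself narrows it
    masks["LLL"] = masks["L"] ** 3
    masks["DD"] = masks["D"] ** 2
    masks["DDD"] = masks["D"] ** 3
    # mid-tones are whatever is neither light nor dark
    masks["M"] = (1.0 - masks["L"]) * (1.0 - masks["D"])
    return masks
```

A pixel at 50% gray gets the maximum mid-tone weight (0.25 here), while pure black or white gets zero, which is exactly the behavior you want for confining an effect to the mid-tones.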

In my previous examples, I used them to apply different color toning to different tonal regions of the image, like this example masked to the DarkDark tones (yes, DarkDark):

Mouseover to change Hue to: 0 - 90 - 180 - 270

What’s neat about that application is when you combine it with some Film Emulation presets. I’ll leave that as an exercise for you to play with.

In this particular post I want to do something different.
I want to make some eyes bleed.

“My eyes! The goggles do nothing!” Radioactive Man (Rainier Wolfcastle)

In the same realm of bad tone-mapping for HDR images (see the first two images here) there are those who sharpen to ridiculous proportions as well as abuse local contrast enhancement with Unsharp Mask.

It was this last one that I was fiddling with recently that got me thinking.

Local Contrast Enhancement with Unsharp Mask

If you haven’t heard of this before, let me explain briefly. There is a sharpening method you can use in GIMP (and other software) that utilizes a slightly blurred version of your image to enhance edge contrasts. This leads to a visual perception of increased sharpness or contrast on those edges.

It’s easy to do this manually to see sort of how it works:
  1. Open an image.
  2. Duplicate the base layer.
  3. Blur the top layer a bit (Gaussian blur).
  4. Set the top layer blend mode to “Grain Extract”.
  5. Create a New Layer from visible.
  6. Set the new layer blend mode to “Overlay”, and hide the blurred layer.
Of course, it’s quite a bit easier to just use Unsharp Mask directly (but now you know how to create high-pass layers of your image - we’re learning things already!).
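The grain-extract/overlay dance above is morally equivalent to the classic unsharp mask formula: add back some fraction of the difference between the image and a blurred copy of itself. A small numpy sketch of that formula (a toy separable Gaussian blur stands in for GIMP's; values assumed in [0, 1]):

```python
import numpy as np

def _gaussian_kernel(sigma):
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def _blur(img, sigma):
    # separable blur: convolve rows, then columns
    k = _gaussian_kernel(sigma)
    out = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, out, k, mode="same")

def unsharp_mask(img, radius=5.0, amount=0.5):
    """Sharpen by amplifying the difference from a blurred copy."""
    return np.clip(img + amount * (img - _blur(img, radius)), 0.0, 1.0)
```

Radius and Amount map directly onto the sliders discussed below: the blur width and the fraction of the difference added back.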

So let’s have a look at an image from a nice Fall day at a farm:

I can apply Unsharp Mask through the menu:

Filters → Enhance → Unsharp Mask...

Below the preview window there are three sliders to adjust the effect: Radius, Amount, and Threshold.

Radius changes how big a radius to use when blurring the image to create the mask.
Amount changes how strong the effect is.
Threshold is a setting for the minimum pixel value difference to define an edge. You can ignore it for now.

If we apply the filter with its default values (Radius: 5.0, Amount: 0.50), we get a nice little sharpening effect on the result:

Unsharp Mask with default values
(mouseover to compare original)

It gives a nice little “pop” to the image (a bit much for my taste). It also mostly avoids sharpening noise, which is nice as well.

So far this is fairly simple stuff, nothing dramatic. The problem is, once many people learn about this they tend to go a bit overboard with it. For instance, let’s crank up the Amount to 3.0:

Don’t do this. Just don’t.

Yikes. But don’t worry. It’s going to get worse.

High Radius, Low Amount

So I’m finally getting to my point. There is a neat method of increasing local contrast in an image by pushing the Unsharp Mask values more than you might normally. If you use a high radius and the default amount, you get:

Unsharp Mask, Radius: 80 Amount: 0.5
(mouseover to compare original)

It still looks like clown vomit. But we can still gain the nice local contrast enhancement and mitigate the offensiveness by turning the Amount down even further. Here it is with the Radius still at 80, but the Amount turned down to 0.10:

Unsharp Mask, Radius: 80 Amount: 0.10
(mouseover to compare original)

Even with the Amount at 0.10 it might be a tad much for my taste. The point is that you can gain a nice little boost to local contrast with this method.

Neat but hardly earth-shattering. This has been covered countless times in various places already (and if this is the first time you’re hearing about it, then we’re learning two new things today!).

We can see that we now have a neat method for bumping up the local contrast of an image slightly to give it a little extra visual pop. What we can think about now is, how can I apply that to my images in other interesting ways?

Perhaps we could find some way to apply these effects to particular areas of an image? Say, based on something like luminosity?

Clarity in Lightroom

From what I can tell (and find online), it appears that this is basically what the “Clarity” adjustment in Adobe Lightroom does. It’s a Local Contrast Enhancement masked in some way to middle tones in the image.

Let’s have a quick look and see if that theory holds any weight. Here is the image above, brought into Lightroom and with the “Clarity” pushed to 100:

From Lightroom 4, Clarity: 100

This seems visually similar to the path we started on already, but let’s see if we can get something better with what we know so far.

Clarity in GIMP

What I want to do is to increase the local contrast of my image, and confine those adjustments to the mid-tone areas of the image. We have seen a method for increasing local contrast with Unsharp Mask, and I had previously written about creating Luminosity Masks. Let’s smash them together and see what we get!

If you haven’t already, go get the Script-Fu to automate the creation of these masks (I tend to use Saul’s version as it’s faster than mine) from the GIMP Registry.

Open an image to get started (I’ll be using the same image from above).

Create Your Luminosity Masks

You’ll need to generate a set of luminosity masks using your base image as a reference. With your image open, you can find Saul’s Luminosity Mask script here:

Filters → Generic → Luminosity Masks (saulgoode)

It should only take a moment to run, and you shouldn’t notice anything different when it’s finished. If you do check your Channels dialog, you should see all nine of the masks there (L, LL, LLL, M, MM, MMM, D, DD, DDD).

Luminosity Masks, by row: Darks, Mids, Lights

Enhance the Local Contrast

Now it’s time to leave subtlety behind us. We are going to be masking these results anyway, so we can get a little crazy with the application in this step. You can use the steps I mentioned above with Unsharp Mask to increase the local contrast, or you can use G'MIC to do it instead.

The reason that you may want to use G'MIC instead is that to increase the local contrast without causing a bit of a color shift would require that you apply the Unsharp Mask on a particular channel after decomposition. G'MIC can automatically apply the contrast enhancement only on the luminance in one step.

Let’s try it with the regular Unsharp Mask in GIMP. I’m going to use similar settings to what we used above, but we’ll turn the amount up even more.

With your image open in GIMP, duplicate the base layer. We’ll be applying the effect and mask on this duplicate over your base.

Now we can enhance the local contrast using Unsharp Mask:
Filters → Enhance → Unsharp Mask...

This time around, we’ll try using Radius: 80 and Amount: 1.5.

Unsharp Mask, Radius: 80, Amount: 1.5. My eyes!

Yes, it’s horrid, but we’re going to be masking it to the mid-range tones remember. Now I can apply a layer mask to this layer by Right-clicking on the layer, and selecting “Add Layer Mask...”.
Right-click → Add Layer Mask...

In the “Add a Mask to the Layer” dialog that pops up, I’ll choose to initialize the layer to a Channel, and choose the “M” mid-tone mask:

Once the ridiculous tones are confined to the mid-tones, things look much better:

Unsharp Mask, Radius: 80, Amount: 1.5. Masked to mid-tones.
(mouseover to compare original)

You can see that there is now a nice boost to the local contrast that is confined to the mid-tones in the image. This is still a bit much for me personally, but I’m purposefully over-doing it in an attempt to illustrate the process. Really you’d want to either tone-down the amount on the USM (UnSharp Mask), or adjust the opacity of this layer to taste now.

So the general formula we are seeing is to make an adjustment (local contrast enhance in this case), and to use the luminosity masks to give us control over where the effect is applied.
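That general formula is just an ordinary masked blend. Continuing the numpy-as-pseudocode theme (my own sketch of the layer-mask-plus-opacity combination, with array values assumed in [0, 1]):

```python
import numpy as np

def masked_apply(base, effect, mask, opacity=1.0):
    """Blend an adjusted layer over the base, confined by a mask,
    then scaled back by the layer opacity."""
    m = np.asarray(mask) * opacity
    return np.asarray(base) * (1.0 - m) + np.asarray(effect) * m
```

Where the mask is 0 the base pixel survives untouched; where it's 1 you get the full (possibly hideous) adjusted pixel; the opacity scales the whole thing back to taste.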

For instance, we can try using other types of contrast/detail enhancement in place of the USM step.

I had previously written about detail enhancement through “Freaky Details”. This is what we get when replacing the USM local contrast enhancement with it. Using G'MIC, I can find “Freaky Details” at:
Filters → G'MIC
Details → Freaky details

I used an Amplitude of 4, Scale 22, and Iterations 1. I applied this to the Luminance Channels:

Freaky Details, Amplitude 4, Scale 22, Iterations 1, mid-tone mask
(mouseover to compare original)

Trying other G'MIC detail enhancements such as “Local Normalization” can yield slightly different results:

G'MIC Local Normalization at default values.
(mouseover to compare original)

Yes, there’s some halo-ing, but remember that I’m purposefully allowing these results to get ugly to highlight what they’re doing.

G'MIC Local Variance Normalization is a neat result with fine details as well:

G'MIC Local Variance Normalization (default settings)
(mouseover to compare original)

In Conclusion

This approach works because our eyes will be more sensitive to slight contrast changes as they occur in the mid-tones of an image as opposed to the upper and lower tones. More importantly, it’s a nice introduction to viewing your images as more than a single layer.

Understanding these concepts and viewing your images as the sum of multiple parts allows you much greater flexibility in how you approach your retouching.

I fully encourage you to give it a shot and see what other strange combinations you might be able to discover! For instance, try using the Film Emulation presets in combination with different luminosity masks to find new and interesting combinations of color grading! Try setting the masked layers to different blending modes! You may surprise yourself with what you find.

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, or make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t it’s not going to affect me writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.


This blog post is mostly about showing some photos I took, but I may as well give a brief summary from my point of view.

Had a good time in Strasbourg this week. Hacked a bit on Adwaita with Lapo, who has fearlessly been sanding the rough parts after the major refactoring. Jim Hall uncovered the details of his recent usability testing of GNOME, so while we video chatted before, it was nice to meet him in person. Watched Christian uncover his bold plans to focus on Builder full time which is both awesome and sad. Watched Jasper come out with the truth about his love for Windows and Federico’s secret to getting around fast. Uncovered how Benjamin is not getting more aerodynamic (ie fat) like me. Enjoyed a lot of great food (surprisingly had crêpes only once).

In a classic move I ran out of time in my lightning talk on multirotors, so I’ll have to cover the topic of free software flight controllers in a future blog post. I managed to miss a good number of talks I intended to see, which is quite a feat, considering the average price of beer in the old town. Had a good time hanging out with folks, which is so rare for me.

During the BOFs on Wednesday I sat down with the Boxes folks, discussing some new designs. Sadly, I only managed to talk to Bastian about our Blender workflows for a few brief moments. Unfortunately the Brno folks from whom I stole a spot in the car had to get back on Thursday, so I missed the Thursday and Friday BOFs as well.

Despite the weather I enjoyed the second last GUADEC. Thanks for making it awesome again. See you in the next last one in Gothenburg.

August 04, 2014

ReduceContour tool test

Hi all

Recently I've been doing some ground work, polishing the FillHoles tool and other internal tools. Nothing fun to show, but it definitely improves robustness, and in between I added small useful new features here and there, like the ones I've been showing in this quick and dirty video series.

Like the proportional inflate, I've recently added a threshold to the Separate disconnected functionality to delete smaller parts. Very common when we import noisy meshes with lots of floating parts.
One of the most important features is the possibility to bridge and connect separated meshes manually, like I show here: http://farsthary.wordpress.com/2014/07/15/fillholes-revamp-test-1/

So stay tuned, because the real fun may start soon for me :P

Notes on Fedora on an Android device

A bit more than a year ago, I ordered a Geeksphone Peak, one of the first widely available Firefox OS phones to explore this new OS.

Those notes are probably not very useful on their own, but they might give a few hints to stuck Android developers.

The hardware

The device has a Qualcomm Snapdragon S4 MSM8225Q SoC, which uses the Adreno 203 GPU, and a 540x960 Protocol A (4 touchpoints) touchscreen.

The Adreno 203 (Note: might have been 205) is not supported by Freedreno, and is unlikely to be. It's already a couple of generations behind the latest models, and getting a display working on this device would also require (re-)writing a working panel driver.

At least the CPU is an ARMv7 with a hardware floating-point (unlike the incompatible ARMv6 used by the Raspberry Pi), which means that much more software is available for it.

Getting a shell

Start by installing the android-tools package, and copy the udev rules file to the correct location (it's mentioned with the rules file itself).

Then, on the phone, turn on the developer mode. Plug it in, and run "adb devices", you should see something like:

$ adb devices
List of devices attached
22ae7088f488 device

Now run "adb shell" and have a browse around. You'll realise that the kernel, drivers, init system, baseband stack, and much more are plain Android. That's a good thing, as I could then order Embedded Android, and dive in further.

If you're feeling a bit restricted by the few command-line applications available, download an all-in-one precompiled busybox, and push it to the device with "adb push".

You can also use aafm, a simple GUI file manager, to browse around.

Getting a Fedora chroot

After formatting a MicroSD card in ext4 and unpacking a Fedora system image in it, I popped it inside the phone. You won't be able to use this very fragile script to launch your chroot just yet though, as we lack a number of kernel features that are required to run Fedora. You'll also note that this is an old version of Fedora. There are probably newer versions available around, but I couldn't pinpoint them while writing this article.

Running Fedora, even in a chroot, on such a system will allow us to compile natively (I wouldn't try to build WebKit on it though) and run against a glibc setup rather than Android's bionic libc.

Let's recompile the kernel to be able to use our new chroot.

Avoiding the brick

Before recompiling the kernel and bricking our device, we'll probably want to make sure that we have the ability to restore the original software. Nothing worse than a bricked device, right?

First, we'll unlock the bootloader, so we can modify the kernel, and eventually the bootloader. I took the instructions from this page, but ignored the bits about flashing the device, as we'll be doing that a different way.

You can grab the restore image from my Fedora people page; as seems to be the norm, Android(-ish) device makers deny any involvement in devices that are more than a couple of months old. No restore software, no product page.

The recovery should be as easy as

$ adb reboot-bootloader
$ fastboot flash boot boot.img
$ fastboot flash system system.img
$ fastboot flash userdata userdata.img
$ fastboot reboot

This technique on the Geeksphone forum might also still work.

Recompiling the kernel

The kernel shipped on this device is a modified Ice-Cream Sandwich "Strawberry" version, as spotted using the GPU driver code.

We grabbed the source code from Geeksphone's github tree, installed the ARM cross-compiler (in the "gcc-arm-linux-gnu" package on Fedora) and got compiling:

$ export ARCH=arm
$ export CROSS_COMPILE=/usr/bin/arm-linux-gnu-
$ make C8680_defconfig
# Make sure that CONFIG_DEVTMPFS and CONFIG_EXT4_FS_SECURITY get enabled in the .config
$ make

We now have a zImage of the kernel. Launching "fastboot boot zimage /path/to/bzImage" didn't seem to work (it would have used the kernel only for the next boot anyway), so we'll need to replace the kernel on the device.

It's a bit painful to have to do this, but we have the original boot image to restore in case our version doesn't work. The boot partition is on partition 8 of the MMC device. You'll need to install my package of the "android-BootTools" utilities to manipulate the boot image.

$ adb shell 'cat /dev/block/mmcblk0p8 > /mnt/sdcard/disk.img'
$ adb pull /mnt/sdcard/disk.img
$ bootunpack boot.img
$ mkbootimg --kernel /path/to/kernel-source/out/arch/arm/boot/zImage --ramdisk p8.img-ramdisk.cpio.gz --base 0x200000 --cmdline 'androidboot.hardware=qcom loglevel=1' --pagesize 4096 -o boot.img
$ adb reboot-bootloader
$ fastboot flash boot boot.img

If you don't want the graphical interface to run, you can modify the Android init to avoid that.

Getting a Fedora chroot, part 2

Run the script. It works. Hopefully.

If you manage to get this far, you'll have a running Android kernel and user-space, and will be able to use the Fedora chroot to compile software natively and poke at the hardware.

I would expect that, given a kernel source tree made available by the vendor, you could follow those instructions to transform your old Android phone into an ARM test "machine".

Going further, native Fedora boot

Not for the faint of heart!

The process is similar, but we'll need to replace the initrd in the boot image as well. In your chroot, install Rob Clark's hacked-up adb daemon with glibc support (packaged here) so that adb commands keep on working once we natively boot Fedora.

Modify the /etc/fstab so that the root partition is the SD card:

/dev/mmcblk1 /                       ext4    defaults        1 1

We'll need to create an initrd that's small enough to fit on the boot partition though:

$ dracut -o "dm dmraid dmsquash-live lvm mdraid multipath crypt mdraid dasd zfcp i18n" initramfs.img

Then run "mkbootimg" as above, but with the new ramdisk instead of the one unpacked from the original boot image.

Flash, and reboot.


In the future, one would hope that packages such as adbd and the android-BootTools could get into Fedora, but I'm not too hopeful as Fedora, as a project, seems uninterested in running on top of Android hardware.


Why am I posting this now? Firstly, because it allows me to organise the notes I took nearly a year ago. Secondly, I don't have access to the hardware anymore, as it found a new home with Aleksander Morgado at GUADEC.

Aleksander hopes to use this device (Qualcomm-based, remember?) to add native telephony support to the QMI stack. This would in turn get us a ModemManager Telephony API, and the possibility of adding support for more hardware, such as through RIL and libhybris (similar to the oFono RIL plugin used in the Jolla phone).

Common docker pitfalls

I’ve run into a few problems with Docker that I’d like to document for myself, along with how to solve them.

Overwriting an entrypoint

If you’ve configured a script as an entrypoint which fails, you can run the docker image with a shell in order to fiddle with the script (instead of continuously rebuilding the image):

#--entrypoint (provides a new entry point which is the nominated shell)
docker run -i --entrypoint='/bin/bash'  -t f5d4a4d6a8eb

Possible errors you face otherwise are these:

/bin/bash: /bin/bash: cannot execute binary file

Weird errors when building the image

I’ve run into this a few times, with errors like:

Error in PREIN scriptlet in rpm package libvirt-daemon-
useradd: failure while writing changes to /etc/passwd

If you’ve set SELinux to enforcing, you may want to temporarily disable SELinux for just building the image. Don’t disable SELinux permanently.

Old (base) image

Check if your base image has changed (e.g. docker images) and pull it again (docker pull <image>)


August 03, 2014

RawSpeed moves to github

As you may have noticed, there hasn’t been much activity lately, since everyone on the Rawstudio team has had various things getting in the way of doing more work on Rawstudio.

I have however now and again found time to work on RawSpeed, and will from now on host all changes on github. Github makes a lot of work much easier, and allows direct pull requests to be made.

RawSpeed Version 2; New Cameras & Features

The following new features have been included and can be tested on the development branch. Note that this is RawSpeed and not Rawstudio.

  • Support for Sigma foveon cameras.
  • Support for Fuji cameras.
  • Support for old Minolta, Panasonic, Sony and other cameras (contributed by Pedro Côrte-Real)
  • Arbitrary CFA definition sizes.
  • Use pugixml for xml parsing to avoid depending on libxml.

When “version 2” is stabilized a bit, a formal release will be made, after which the API will be locked.


August 02, 2014

Krita: illustrated beginners guide in Russian

Some time ago our user Tyson Tan (creator of Krita's mascot Kiki) published his beginners guide for Krita. Now this tutorial is also available in Russian!

If you happen to know Russian, please follow the link :)


Fanart by Anastasia Majzhegisheva – 13

Morevna Universe.
Watercolor and ink artwork by Anastasia Majzhegisheva.


August 01, 2014

Predicting planetary visibility with PyEphem

Part II: Predicting Conjunctions

After I'd written a basic script to calculate when planets will be visible, the next step was predicting conjunctions, times when two or more planets are close together in the sky.

Finding separation between two objects is easy in PyEphem: it's just one line once you've set up your objects, observer and date.

p1 = ephem.Mars()
p2 = ephem.Jupiter()
observer = ephem.Observer()  # and then set it to your city, etc.
observer.date = ephem.date('2014/8/1')

ephem.separation(p1, p2)
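As an aside, the separation here is just the great-circle distance between two sky positions. For the curious, a stdlib-only sketch of the haversine form of that formula (PyEphem's exact implementation may differ):

```python
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation in radians between two (RA, Dec) points,
    using the haversine formula for stability at small angles."""
    h = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return 2 * math.asin(math.sqrt(h))
```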

So all I have to do is loop over all the visible planets and see when the separation is less than some set minimum, like 4 degrees, right?

Well, not really. That tells me if there's a conjunction between a particular pair of planets, like Mars and Jupiter. But the really interesting events are when you have three or more objects close together in the sky. And events like that often span several days. If there's a conjunction of Mars, Venus, and the moon, I don't want to print something awful like

  Conjunction between Mars and Venus, separation 2.7 degrees.
  Conjunction between the moon and Mars, separation 3.8 degrees.
  Conjunction between Mars and Venus, separation 2.2 degrees.
  Conjunction between Venus and the moon, separation 3.9 degrees.
  Conjunction between the moon and Mars, separation 3.2 degrees.
  Conjunction between Venus and the moon, separation 4.0 degrees.
  Conjunction between the moon and Mars, separation 2.5 degrees.

... and so on, for each day. I'd prefer something like:

Conjunction between Mars, Venus and the moon lasts from Friday through Sunday.
  Mars and Venus are closest on Saturday (2.2 degrees).
  The moon and Mars are closest on Sunday (2.5 degrees).

At first I tried just keeping a list of planets involved in the conjunction. So if I see Mars and Jupiter close together, I'd make a list [mars, jupiter], and then if I see Venus and Mars on the same date, I search through all the current conjunction lists and see if either Venus or Mars is already in a list, and if so, add the other one. But that got out of hand quickly. What if my conjunction list looks like [ [mars, venus], [jupiter, saturn] ] and then I see there's also a conjunction between Mars and Jupiter? Oops -- how do you merge those two lists together?

The solution to taking all these pairs and turning them into a list of groups that are all connected actually lies in graph theory: each conjunction pair, like [mars, venus], is an edge, and the trick is to find all the connected edges. But turning my list of conjunction pairs into a graph so I could use a pre-made graph theory algorithm looked like it was going to be more code -- and a lot harder to read and less maintainable -- than making a bunch of custom Python classes.

I eventually ended up with three classes: ConjunctionPair, for a single conjunction observed between two bodies on a single date; Conjunction, a collection of ConjunctionPairs covering as many bodies and dates as needed; and ConjunctionList, the list of all Conjunctions currently active. That let me write methods to handle merging multiple conjunction events together if they turned out to be connected, as well as a method to summarize the event in a nice, readable way.
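For the curious, the merging problem itself — turning a stream of pairs into fully connected groups — can also be sketched with a tiny union-find, which is the textbook graph-theory answer alluded to above (names here are illustrative, not the actual classes in conjunctions.py):

```python
def group_bodies(pairs):
    """Merge overlapping conjunction pairs into connected groups."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)  # union the two groups

    groups = {}
    for body in list(parent):
        groups.setdefault(find(body), set()).add(body)
    return list(groups.values())
```

With this, [mars, venus], [jupiter, saturn] and a later [mars, jupiter] collapse into a single four-body group without any list-merging gymnastics — though as the post notes, the bookkeeping of dates and closest approaches still needs its own classes.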

So predicting conjunctions ended up being a lot more code than I expected -- but only because of the problem of presenting it neatly to the user. As always, user interface represents the hardest part of coding.

The working script is on github at conjunctions.py.