March 04, 2015

Getting Around in GIMP - Luminosity Masks Revisited


Brorfelde landscape by Stig Nygaard (cb)
After adding an aggressive curve along with a mid-tone luminosity mask.

I had previously written about adapting Tony Kuyper’s Luminosity Masks for GIMP. I won’t re-hash all of the details and theory here (just head back over to that post and brush up on them there), but rather I’d like to re-visit them using channels - specifically, to have another look at using the mid-tones mask to give a little pop to images.

The rest of my GIMP tutorials can be found here:
Getting Around in GIMP
Original tutorial on Luminosity Masks:
Getting Around in GIMP - Luminosity Masks
Luminosity Masking in darktable:
PIXLS.US - Luminosity Masking in darktable





Let’s Build Some Luminosity Masks!

The way I had approached building the luminosity masks previously was to create them using layer blending modes. In this re-visit, I’d like to build them from selection sets in the Channels tab of GIMP.

For the Impatient:
I’ve also written a Script-Fu that automates the creation of these channels mimicking the steps below.

Download from: Google Drive

Download from: GIMP Registry (registry.gimp.org)

Once installed, you’ll find it under:
Filters → Generic → Luminosity Masks (patdavid)
[Update]
Yet another reason to love open-source - Saul Goode over at this post on GimpChat updated my script to run faster and cleaner.
You can get a copy of his version at the same Registry link above.
(Saul’s a bit of a Script-Fu guru, so it’s always worth seeing what he’s up to!)


We’ll start off in a similar way as we did previously.

Duplicate your base image

Either through the menus, or by Right-Clicking on the layer in the Layer Dialog:
Layer → Duplicate Layer
Pat David GIMP Luminosity Mask Tutorial Duplicate Layer

Desaturate the Duplicated Layer

Now desaturate the duplicated layer. I use Luminosity to desaturate:
Colors → Desaturate…

Pat David GIMP Luminosity Mask Tutorial Desaturate Layer

This desaturated copy of your color image represents the “Lights” channel. What we want to do is to create a new channel based on this layer.

Create a New Channel “Lights”

The easiest way to do this is to go to your Channels Dialog.

If you don’t see it, you can open it by going to:
Windows → Dockable Dialogs → Channels

Pat David GIMP Luminosity Mask Tutorial Channels Dialog
The Channels dialog

On the top half of this window you’ll see an entry for each channel in your image (Red, Green, Blue, and Alpha). On the bottom will be a list of any channels you have previously defined.

To create a new channel that will become your “Lights” channel, drag any one of the RGB channels down to the lower window (it doesn’t matter which - they all have the same data due to the desaturation operation).

Now rename this channel to something meaningful (like “L” for instance!), by double-clicking on its name (in my case it’s called “Blue Channel Copy”) and entering a new one.

This now gives us our “Lights” channel, L :

Pat David GIMP Luminosity Mask Tutorial L Channel

Now that we have the “Lights” channel created, we can use it to create its inverse, the “Darks” channel...

Create a New Channel “Darks”

To create the “Darks” channel, it helps to realize that it should be the inverse of the “Lights” channel. We can get this selection through a few simple operations.

We are going to basically select the entire image, then subtract the “Lights” channel from it. What is left should be our new “Darks” channel.

Select the Entire Image

First, have the entire image selected:
Select → All

Remember, you should be seeing the “marching ants” around your selection - in this case the entire image.

Subtract the “Lights” Channel

With the entire image selected, now we just have to subtract the “Lights” channel. In the Channels dialog, just Right-Click on the “Lights” channel, and choose “Subtract from Selection”:

Pat David GIMP Luminosity Mask Tutorial L Channel Subtract

You’ll now see a new selection on your image. This selection represents the inverse of the “Lights” channel...

Create a New “Darks” Channel from the Selection

Now we just need to save the current selection to a new channel (which we’ll call... Darks!). To save the current selection to a channel, we can just use:
Select → Save to Channel

This will create a new channel in the Channel dialog (probably named “Selection Mask copy”). To give it a better name, just Double-Click on the name to rename it. Let’s choose something exciting, like “D”!

More Darker!

At this point, you’ll have a “Lights” and a “Darks” channel. If you wanted to create some channels that target darker and darker regions of the image, you can subtract the “Lights” channel again (this time from the current selection, “Darks”, as opposed to the entire image).

Once you’ve subtracted the “Lights” channel again, don’t forget to save the selection to a new channel (and name it appropriately - I like to name subsequent masks things like, “DD”, in this case - if I subtracted again, I’d call the next one “DDD” and so on…).

I’ll usually make 3 levels of “Darks” channels, D, DD, and DDD:

Pat David GIMP Luminosity Mask Tutorial Darks Channels
Three levels of Dark masks created.

Here’s what the final three channels of darks look like:

Pat David GIMP Luminosity Mask Tutorial All Darks Channels
The D, DD, and DDD channels

Lighter Lights

At this point we have one “Lights” channel, and three “Darks” channels. Now we can go ahead and create two more “Lights” channels, to target lighter and lighter tones.

The process is identical to creating the darker channels, just in reverse.

Lights Channel to Selection

To get started, activate the “Lights” channel as a selection:

Pat David GIMP Luminosity Mask Tutorial L Channel Activate

With the “Lights” channel as a selection, now all we have to do is Subtract the “Darks” channel from it. Then save that selection as a new channel (which will become our “LL” channel), and so on…

Pat David GIMP Luminosity Mask Tutorial Subtract D Channel
Subtracting the D channel from the L selection

To get an even lighter channel, you can subtract D from the resulting selection one more time.

Here are what the three channels look like, starting with L up to LLL:

Pat David GIMP Luminosity Mask Tutorial All Lights Channels
The L, LL, and LLL channels

Mid Tones Channels

By this point, we’ve got 6 new channels now, three each for light and dark tones:

Pat David GIMP Luminosity Mask Tutorial L+D Channels

Now we can generate our mid-tone channels from these.

The concept of generating the mid-tones is relatively simple - we’re just going to intersect dark and light channels to produce what’s left - the mid-tones.

Intersecting Channels for Midtones

To get started, first select the “L” channel, and set it to the current selection (just like above). Right-Click → Channel to Selection.

Then, Right-Click on the “D” channel, and choose “Intersect with Selection”.

You likely won’t see any selection active on your image, but it’s there, I promise. Now as before, just save the selection to a channel:
Select → Save to Channel

Give it a neat name. Sayyy, “M”? :)

You can repeat for each of the other levels, creating an MM and MMM if you’d like.

Now remember, the mid-tones channels are intended to isolate mid values as a mask, so they can look a little strange at first glance. Here’s what the basic mid-tones mask looks like:

Pat David GIMP Luminosity Mask Tutorial Mid Channel
Basic Mid-tones channel

Remember, black tones in this mask represent full transparency to the layer below, while white tones represent full opacity from the associated layer.
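If you’d rather script these steps than click through them, here is a minimal Python-Fu sketch of the channel-building walkthrough above, written against the GIMP 2.8 API (the function name is mine, and it only builds L, D, and M - the same pattern extends to LL, DD, and friends):

from gimpfu import *

def build_lum_channels(image):
    # Duplicate the base layer and desaturate it (Colors -> Desaturate, Luminosity)
    base = image.active_layer
    lights = pdb.gimp_layer_copy(base, False)
    pdb.gimp_image_insert_layer(image, lights, None, 0)
    pdb.gimp_desaturate_full(lights, DESATURATE_LUMINOSITY)

    # "Lights": any RGB component will do - they are identical after desaturation
    L = pdb.gimp_channel_new_from_component(image, RED_CHANNEL, "L")
    pdb.gimp_image_insert_channel(image, L, None, 0)

    # "Darks": Select All, subtract L, then Select -> Save to Channel
    pdb.gimp_selection_all(image)
    pdb.gimp_image_select_item(image, CHANNEL_OP_SUBTRACT, L)
    D = pdb.gimp_selection_save(image)
    D.name = "D"

    # "Mid-tones": L as the selection, intersected with D, saved as a channel
    pdb.gimp_image_select_item(image, CHANNEL_OP_REPLACE, L)
    pdb.gimp_image_select_item(image, CHANNEL_OP_INTERSECT, D)
    M = pdb.gimp_selection_save(image)
    M.name = "M"

    # Tidy up: drop the marching ants and the helper layer
    pdb.gimp_selection_none(image)
    pdb.gimp_image_remove_layer(image, lights)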


Using the Masks

The basic idea behind creating these channels is that you can now mask particular tonal ranges in your images, and the mask will be self-feathering (due to how we created them). So we can now isolate specific tones in the image for manipulation.

Previously, I had shown how this could be used to do some simple split-toning of an image. In that case I worked on a B&W image, and tinted it. Here I’ll do the same with our image we’ve been working on so far...

Split Toning

Using the image I’ve been working through so far, we have the base layer to start with:

Pat David GIMP Luminosity Mask Tutorial Split Tone Base

Create Duplicates

We are going to want two duplicates of this base layer. One to tone the lighter values, and another to tone the darker ones. We’ll start by considering the dark tones first. Duplicate the base layer:
Layer → Duplicate Layer

Then rename the copy something descriptive. In my example, I’ll call this layer “Dark” (original, I know):

Pat David GIMP Luminosity Mask Tutorial Split Tone Darks

Add a Mask

Now we can add a layer mask to this layer. You can either Right-Click the layer, and choose “Add Layer Mask”, or you can go through the menus:
Layer → Mask → Add Layer Mask

You’ll then be presented with options about how to initialize the mask. You’ll want to Initialize Layer Mask to: “Channel”, then choose one of your luminosity masks from the drop-down. In my case, I’ll use the DD mask we previously made:

Pat David GIMP Luminosity Mask Tutorial Add Layer Mask Split Tone

Adjust the Layer

Pat David GIMP Luminosity Mask Tutorial Split Tone Activate DD Mask
Now you’ll have a Dark layer with a DD mask that will restrict any modification you do to this layer to only apply to the darker tones.

Make sure you select the layer, and not its mask, by clicking on it (you’ll see a white outline around the active layer). Otherwise any operations you do may accidentally get applied to the mask, and not the layer.


At this point, we now want to modify the colors of this layer in some way. There are literally endless ways to approach this, bounded only by your creativity and imagination. For this example, we are going to tone the image with a cool teal/blue color (just like before), which combined with the DD layer mask, will restrict it to modifying only the darker tones.

So I’ll use the Colorize option to tone the entire layer a new color:
Colors → Colorize

To get a Teal-ish color, I’ll pull the Hue slider over to about 200:

Pat David GIMP Luminosity Mask Tutorial Split Tone Colorize

Now, pay attention to what’s happening on your image canvas at this point. Drag the Hue slider around and see how it changes the colors in your image. Especially note that the color shifts will be restricted to the darker tones thanks to the DD mask being used!

To illustrate, mouseover the different hue values in the caption of the image below to change the Hue, and see how it affects the image with the DD mask active:


Mouseover to change Hue to: 0 - 90 - 180 - 270

So after I choose a new Hue of 200 for my layer, I should be seeing this:

Pat David GIMP Luminosity Mask Tutorial Split Tone Dark Tinted
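Those same dark-tone steps condense into a few lines of Python-Fu. A sketch, assuming the channels built earlier are still present in the image (GIMP 2.8 API; the function name and defaults are mine):

from gimpfu import *

def tone_darks(image, channel, hue=200):
    # Layer -> Duplicate Layer, renamed "Dark"
    dark = pdb.gimp_layer_copy(image.active_layer, False)
    dark.name = "Dark"
    pdb.gimp_image_insert_layer(image, dark, None, 0)

    # Layer -> Mask -> Add Layer Mask, initialized from the luminosity channel
    pdb.gimp_image_set_active_channel(image, channel)
    mask = pdb.gimp_layer_create_mask(dark, ADD_CHANNEL_MASK)
    pdb.gimp_layer_add_mask(dark, mask)
    pdb.gimp_image_unset_active_channel(image)

    # Colors -> Colorize; hue ~200 gives the teal tone used above
    pdb.gimp_layer_set_edit_mask(dark, False)  # edit the layer, not its mask
    pdb.gimp_colorize(dark, hue, 50, 0)

Run it once with the DD channel, then again with the LL channel and a hue around 25 to get the warm highlights described below.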

Repeat for Light Tones

Now just repeat the above steps, but this time for the light tones. So duplicate the base layer again, and add a layer mask, but this time try using the LL channel as a mask.

For the lighter tones, I chose a Hue of around 25 instead (more orange-ish than blue):

Pat David GIMP Luminosity Mask Tutorial Split Tone Light Tinted

In the end, here are the results that I achieved:

Pat David GIMP Luminosity Mask Tutorial Split Tone Result
After a quick split-tone (mouseover to compare to original)

The real power here comes from experimentation. I encourage you to try using a different mask to restrict the changes to different areas (try the LLL for instance). You can also adjust the opacity of the layers now to modify how strongly the color tones will affect those areas as well. Play!

Mid-Tones Masks

The mid-tone masks were very interesting to me. In Tony’s original article, he mentioned how much he loved using them to provide a nice boost to contrast and saturation in the image. Well, he’s right. It certainly does do that! (He also feels that it’s similar to shooting the image on Velvia).

Pat David GIMP Luminosity Mask Tutorial Mid Tones Mask
Let’s have a look.

I’ve deleted the layers from my split-toning exercise above, and am back to just the base image layer again.

To try out the mid-tones mask, we only need to duplicate the base layer, and apply a layer mask to it.

This time I’ll choose the basic mid-tones mask M.


What’s interesting about using this mask is that you can apply pretty aggressive curve modifications to the layer and still keep the image from falling apart, because we are only targeting the mid-tones.

To illustrate, I’m going to apply a fairly aggressive compression to the curves by using Adjust Color Curves:
Colors → Curves

When I say aggressive, here is what I’m referring to:

Pat David GIMP Luminosity Mask Tutorial Aggresive Curve Mid Tone Mask

Here is the effect it has on the image when using the M mid-tones mask:


Aggressive curve with Mid-Tone layer mask
(mouseover to compare to original)

As you can see, there is an increase in contrast across the image, as well as a nice little boost to saturation. You don’t need to worry about blowing out highlights or losing shadow detail, because the mask will not allow you to modify those values.
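The curve itself can be scripted too. A sketch of an aggressive S-curve on the value channel (GIMP 2.8 API; these control points are illustrative, not the exact ones from my screenshot), assuming the active layer is the duplicate carrying the M mask:

from gimpfu import *

image = gimp.image_list()[0]
layer = image.active_layer

# Colors -> Curves: endpoints pinned, shadows pulled down, highlights pushed up
pdb.gimp_curves_spline(layer, HISTOGRAM_VALUE, 8,
                       [0, 0, 64, 30, 192, 225, 255, 255])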

More Samples of the Mid-Tone Mask in Use

Pat David GIMP Luminosity Mask Tutorial
Pat David GIMP Luminosity Mask Tutorial
The lede image again, with another aggressive curve applied to a mid-tone masked layer
(mouseover to compare to original)


Pat David GIMP Luminosity Mask Tutorial
Red Tailed Black Cockatoo at f/4 by Debi Dalio on Flickr (used with permission)
(mouseover to compare to original)


Pat David GIMP Luminosity Mask Tutorial
Landscape Ballon by Lennart Tange on Flickr (cb)
(mouseover to compare to original)


Pat David GIMP Luminosity Mask Tutorial
Landscapes by Tom Hannigan on Flickr (cb)
(mouseover to compare to original)



Mixing Films

This is something that I’ve found myself doing quite often. It’s a very powerful method for combining color toning that you may like from different film emulations. Consider what we just walked through.

These masks allow you to target modifications of layers to specific tones of an image. So if you like the saturation of, say, Fuji Velvia in the shadows, but like the upper tones to look similar to Polaroid Polachrome, then these luminosity masks are just what you’re looking for!

Just a little food for thought and experimentation... :)

Stay tuned later in the week where I’ll investigate this idea in a little more depth.

In Conclusion

This is just another tool in our mental toolbox of image manipulation, but it’s a very powerful tool indeed. When considering your images, you can now look at them as a function of luminosity - with a neat and powerful way to isolate and target specific tones for modification.

As always, I encourage you to experiment and play. I’m willing to bet this method finds its way into at least a few people’s workflows in some fashion.

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, or make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t, it’s not going to affect my writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.


March 03, 2015

Updating Firmware on Linux

A few weeks ago Christian asked me to help with the firmware update task that a couple of people at Red Hat have been working on for the last few months. Peter has got fwupdate to the point where we can “upload” sample .cap files onto the flash chips, but this isn’t particularly safe, or easy to do. What we want for Fedora and RHEL is to be able to either install a .rpm file for a BIOS update (if the firmware is re-distributable), or to get notified about it in GNOME Software where it can be downloaded from the upstream vendor. If we’re showing it in a UI, we also want some well written update descriptions, telling the user about what’s fixed in the firmware update and why they should update. Above all else, we want to be able to update firmware safely offline without causing any damage to the system.

So, let’s back up a bit. What do we actually need? A binary firmware blob isn’t so useful, and so Microsoft have decided we should all package it up in a .cab file (a bit like a .zip file) along with a .inf file that describes the update in more detail. Parsing .inf files isn’t so hard on Linux, as we can fix them up to be valid and open them as a standard key file. The .inf file gives us the hardware ID the firmware applies to, as well as a vendor and a short (!) update description. So far the update descriptions have been less than awesome (“update firmware”), so we also need some way of fixing up the update descriptions to be suitable to show the user.
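As a rough illustration of the “standard key file” point, here is a hedged Python sketch; the file name is made up, and real .inf files may need more fixing up than the BOM and line-ending clean-up shown here:

import configparser

# .inf files are close cousins of INI key files; after stripping the BOM and
# normalizing line endings, a stock parser can usually read them.
parser = configparser.ConfigParser(strict=False, allow_no_value=True)
with open("firmware.inf", encoding="utf-8-sig") as f:
    parser.read_string(f.read().replace("\r\n", "\n"))

# [Version] / DriverVer is standard .inf structure; most other keys vary by vendor.
print(parser.get("Version", "DriverVer", fallback="unknown"))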

AppStream, again, to the rescue. I’m going to ask nice upstreams like Intel and the weird guy who does ColorHug to start shipping a MetaInfo file alongside the .inf file in the firmware .cab file. This means we can have fully localized update descriptions, along with all the usual things you’d expect from an update, e.g. the upstream vendor, the licensing information, etc. Of course, a lot of vendors are not going to care about good descriptions, and won’t be interested in shipping another 16k file in the update just for Linux users. For that, we can actually “inject” a replacement MetaInfo file when we curate the AppStream metadata. This allows us to download all the .cab files we care about, but are not allowed to redistribute, run the appstream-builder on them, then package up just the XML metadata which can be consumed by pretty much any distribution. Ideally vendors would do this long term, but you need git master versions of basically everything to generate the file, so it’s somewhat of a big ask at the moment.

So, we’ve now got a big blob of metadata we can read in GNOME Software, and show to Fedora users. We can show it in the updates panel, just like a normal update, we just can’t do anything with it. We also don’t know if the firmware update we know about is valid for the hardware we’re running on. These are both solved by the new fwupd project that I’ve been hacking on for a few days. This exit-on-idle daemon allows normal users to apply firmware to devices (with appropriate PolicyKit checks, typically the root password) in a safe way. We check the .cab file is valid, is for the right hardware, and then apply the update to be flashed on next reboot.

A lot of people don’t have UEFI hardware that’s capable of using capsule firmware updates, so I’ve also added a ColorHug provider, which predictably also lets you update the firmware on your ColorHug device. It’s a lot lower risk testing all this super-new code with a £20 EEPROM device than your nice shiny expensive prototype hardware from Intel.

At the moment there’s not a lot to test; we still need to connect up the low level fwupdate code with the fwupd provider, but that will be a lot easier when we get all the prerequisites into Fedora. What’s left to do now is to write a plugin for GNOME Software so it can communicate with fwupd, and to write the required hooks so we can get the firmware upgrade status as a notification for boot+2. I’m also happy to accept patches for other hardware that supports updates, although the internal API isn’t 100% stable yet. This is probably quite interesting for phones and tablets, so I’d be really happy if this gets used in other non-Fedora, or non-desktop use cases.

Comments welcome. No screenshots yet, but coming soon.

Young Morevna artwork #2

Today Anastasia Majzhegisheva brought one more artwork […]

Tue 2015/Mar/03

  • An inlaid GNOME logo, part 3

    This part in Spanish (Esta parte en español)

    (Parts 1, 2)

    The next step is to make a little rice glue for the template. Thoroughly overcook a little rice, with too much water (I think I used something like 1:8 rice:water), and put it in the blender until it is a soft, even goop.

    Rice glue in the blender

    Spread the glue on the wood surfaces. I used a spatula; one can also use a brush.

    Spreading the glue

    I glued the shield onto the dark wood, and the GNOME foot onto the light wood. I put the toes closer to the sole of the foot so that all the pieces would fit. When they are cut, I'll spread the toes again.

    Shield, glued
    Foot, glued

March 02, 2015

Luminosity Masking in darktable (Ian Hex)

Photographer Ian Hex was kind enough to be a guest writer over on PIXLS.US with a fantastic tutorial on creating and using Luminosity Masks in the raw processing software darktable.


You can find the new tutorial over on PIXLS.US:



I had previously looked at a couple of amazing shots from Ian over on the PIXLS.US blog, when I introduced him as a guest writer. I thought it might be nice to re-post some of his work here...


The Reverence of St. Peter by Ian Hex (cc-by-sa-nc)


Fire of Whitby Abbey by Ian Hex (cc-by-sa-nc)


Wonder of Variety by Ian Hex (cc-by-sa-nc)

Ian has many more amazing images from Britain of breathtaking beauty over on his site, Lightsweep. Be sure to check them out!

PIXLS.US Update

I have also written an update on the status of the site over on the PIXLS.US blog. TL;DR: It's still coming along! :)

February 27, 2015

Replacing paths in Blender’s Sequence Editor

Sometimes, when it comes to editing video in Blender, y […]

Reddit IAmA today, 2-4pm EST

Proof? Here’s your proof.

Head on over here between 2 and 4pm EST today, Friday February 27.

UPDATE: Reddit is not allowing me to post. On my own IAMA. Granted, this IAMA was set up by someone else, who said he had duly submitted my handle (Nina_Paley, created a week ago) to the mods. But it didn’t work. I was on the calendar, but I can’t respond to questions. I am not happy about this but mods aren’t responding, so I give up. You can AMA on Twitter instead.

UPDATE 2: after half an hour the problem was corrected, and I went back and answered questions.


February 26, 2015

Another fake flash story

I recently purchased a 64GB mini SD card to slot in to my laptop and/or tablet, keeping media separate from my home directory, which is pretty full of kernel sources.

This Samsung card looked fast enough and, at 25€ including shipping, seemed good enough value.


Hmm, no mention of the SD card size?

The packaging looked rather bare, and with no mention of the card's size. I opened up the packaging, and looked over the card.

Made in Taiwan?

What made it weirder is that it says "made in Taiwan", rather than "Made in Korea" or "Made in China/PRC". Samsung apparently makes some cards in Taiwan, I've learnt, but I didn't know that before getting suspicious.

After modifying gnome-multiwriter's fake flash checker, I tested the card, and sure enough, it's an 8GB card, with its firmware modified to show up as 67GB (67GB!). The device (identified through the serial number) is apparently well-known in swindler realms.

Buyer beware, do not buy from "carte sd" on Amazon.fr, and always check for fake flash memory using F3 or h2testw, until udisks gets support for this.
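The principle behind F3 and h2testw is simple enough to sketch: write more unique data than the card could really hold, then read it back and compare. A crude and slow Python illustration of the idea - the mount point and sizes are made up, a real test also has to defeat the page cache (e.g. by remounting), and you should use the real tools for anything serious:

import os, hashlib

MOUNT = "/media/sdcard"        # hypothetical mount point of the suspect card
CHUNK = 64 * 1024 * 1024       # fill the card with 64 MB files

def fill_and_verify(n_chunks):
    for i in range(n_chunks):  # fill: each file's content depends on its index
        data = hashlib.sha256(str(i).encode()).digest() * (CHUNK // 32)
        with open(os.path.join(MOUNT, "chk-%04d.bin" % i), "wb") as f:
            f.write(data)
    os.sync()                  # push everything out to the device
    for i in range(n_chunks):  # verify: fake flash wraps and corrupts early files
        expected = hashlib.sha256(str(i).encode()).digest() * (CHUNK // 32)
        with open(os.path.join(MOUNT, "chk-%04d.bin" % i), "rb") as f:
            if f.read() != expected:
                print("chunk %d corrupt - capacity is probably fake" % i)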

Amazon were prompt in reimbursing me, but the Comité national anti-contrefaçon and Samsung were completely uninterested in pursuing this further.

In short:

  • Test the storage hardware you receive
  • Don't buy hardware from Damien Racaud from Chaumont, the person behind the "carte sd" seller account

The Second Plague (Frogs) – rough

Music is from “Frogs” by DJ Zeph featuring Azeem, from the album “Sunset Scavenger.” It’s from 2004, making it the most contemporary song in the film. I almost used Taylor Swift’s 2014 “Bad Blood” for Blood, but I ended up deciding Josh White’s 1933 “Blood Red River Blues” was simply a better song. It wasn’t due to fear of lawsuits; I decided long ago not to allow copyright to determine my artistic choices. If you don’t know my stance on Intellectual Disobedience, you can learn about it here:
youtube.com/watch?v=dfGWQnj6RNA
and here:
blog.ninapaley.com/2013/12/07/make-art-not-law-2/

I’m curious what frogs DJ Zeph and Azeem were originally referring to. Here, of course, the frogs are these:

“3 And the river shall bring forth frogs abundantly, which shall go up and come into thine house, and into thy bedchamber, and upon thy bed, and into the house of thy servants, and upon thy people, and into thine ovens, and into thy kneadingtroughs:

“4 And the frogs shall come up both on thee, and upon thy people, and upon all thy servants.” -Exodus 8, King James Version


February 25, 2015

Krita 2.9

Congratulations to Krita on releasing version 2.9 and a very positive write-up for Krita by Bruce Byfield writing for Linux Pro Magazine.

I'm amused by his comment comparing Krita to "the cockpit of a fighter jet" and although there are some things I'd like to see done differently* I think Krita is remarkably clear for a program as complex as it is and does a good job of balancing depth and breadth. (* As just one example: I'm never going to use "File, Mail..." so it's just there waiting for me to hit it accidentally, but as far as I know I cannot disable or hide it.)

Unfortunately Byfield writes about Krita "versus" other software. I do not accept that premise. Different software does different things, users can mix and match (and if they can't that is a different and bigger problem). Krita is another weapon in the arsenal. Enjoy Krita 2.9.

Mairi Trois


Readers who've been here for a little while might recognize my friend Mairi, who has modeled for me before. This time I had a brief opportunity for her to sit for me again for a few shots before she jet-setted her way over to Italy for a while.

I was specifically looking to produce the lede image you see above, Mairi Troisième. In particular, I was chasing some chiaroscuro portrait lighting that I had in mind for a while and I was quite happy with the final result!

Of course, I also had a large new light modifier, so bigger shots were fun to play with as well:


Mairi Color (in Black)
ƒ/6.3 1/200s ISO200


Mairi B&W
ƒ/8.0 1/200s ISO200

Those two shots were done using a big Photek Softlighter II [amazon] that I treated myself to late last year. (I believe the speedlight was firing @3/4 power for these shots).

It wasn't all serious, there were some funny moments as well...


My Eyes Are Up Here
ƒ/7.1 1/200s ISO200

Of course, I like to work up close to a subject personally. I think it gives a nicer sense of intimacy to an image.


More Mairi Experiments
ƒ/11.0 1/200s ISO200


Mairi Trois
ƒ/8.0 1/200s ISO200

Culminating at one of my favorites from the shoot, this nice chiaroscuro image up close:


Mairi (Closer)
ƒ/10.0 1/200s ISO200

It's always a pleasure to get a chance to shoot with Mairi. She's a natural in front of the camera, and has these huge expressive eyes that are always a draw.

Later this week, an update on PIXLS.US!

February 24, 2015

Morevna Project got a Patreon page

I am happy to announce that Morevna Project have a Patr […]

Announcing issue 2.3 of Libre Graphics magazine


We’re very pleased to announce the long-awaited release of Libre Graphics magazine issue 2.3. This issue is guest-edited by Manuel Schmalsteig and addresses a theme we’ve been wanting to tackle for some time: type design. From specimen design to international fonts, constraint-based type to foundry building, this issue shows off the many faces of libre type design.

With the usual cast of columnists, stunning showcases and intriguing features, issue 2.3, The Type Issue, gives an entrée into what’s now and next in F/LOSS fonts.

The Type Issue is the third issue in volume two of Libre Graphics magazine. Libre Graphics magazine is a print publication devoted to showcasing and promoting work created with Free/Libre Open Source Software. We accept work about or including artistic practices which integrate Free, Libre and Open software, standards, culture, methods and licenses.

The theory of everything

The life of Stephen Hawking, based on his ex-wife’s biography. The movie is attractive and romantic, yet not exaggerated or overdramatized. Instead of focusing on Hawking’s life tragedy or listing his contributions to physics, the movie takes a personal angle. The amazing cinematography, clean script and brilliant performances make the movie justified and impressive.

Tips for developing on a web host that offers only FTP

Generally, when I work on a website, I maintain a local copy of all the files. Ideally, I use version control (git, svn or whatever), but failing that, I use rsync over ssh to keep my files in sync with the web server's files.

But I'm helping with a local nonprofit's website, and the cheap web hosting plan they chose doesn't offer ssh, just ftp.

While I have to question the wisdom of an ISP that insists that its customers use insecure ftp rather than a secure encrypted protocol, that's their problem. My problem is how to keep my files in sync with theirs. And the other folks working on the website aren't developers and are very resistant to the idea of using any version control system, so I have to be careful to check for changed files before modifying anything.

In web searches, I haven't found much written about reasonable workflows on an ftp-only web host. I struggled a lot with scripts calling ncftp or lftp. But then I discovered curlftpfs, which makes things much easier.

I put a line in /etc/fstab like this:

curlftpfs#user:password@example.com/ /servername fuse rw,allow_other,noauto,user 0 0

Then all I have to do is type mount /servername and the ftp connection is made automagically. From then on, I can treat it like a (very slow and somewhat limited) filesystem.

For instance, if I want to rsync, I can

rsync -avn --size-only /servername/subdir/ ~/servername/subdir/
for any particular subdirectory I want to check. A few things to know about this:
  1. I have to use --size-only because timestamps aren't reliable. I'm not sure whether this is a problem with the ftp protocol, or whether this particular ISP's server has problems with its dates. I suspect it's a problem inherent in ftp, because if I ls -l, I see things like this:
    -rw-rw---- 1 root root 7651 Feb 23  2015 guide-geo.php
    -rw-rw---- 1 root root 1801 Feb 14 17:16 guide-header.php
    -rw-rw---- 1 root root 8738 Feb 23  2015 guide-table.php
    
    Note that a file modified a week ago shows a modification time, but files modified today show only a day and year, not a time. I'm not sure what to make of this. (A scripted size check is sketched just after this list.)
  2. Note the -n flag. I don't automatically rsync from the server to my local directory, because if I have any local changes newer than what's on the server they'd be overwritten. So I check the diffs by hand with tkdiff or meld before copying.
  3. It's important to rsync only the specific directories you're working on. You really don't want to see how long it takes to get the full file tree of a web server recursively over ftp.
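Since the timestamps can't be trusted, the size comparison from point 1 can also be scripted directly over FTP, without going through curlftpfs. A sketch with Python's ftplib - the hostname, credentials and paths are placeholders, and the server has to support the SIZE command:

import os
from ftplib import FTP

ftp = FTP("example.com")
ftp.login("user", "password")
ftp.voidcmd("TYPE I")              # SIZE is only reliable in binary mode

local_root = os.path.expanduser("~/servername/subdir")
for name in ftp.nlst("subdir"):
    try:
        remote_size = ftp.size(name)
    except Exception:
        continue                   # directories, or servers that refuse SIZE
    local = os.path.join(local_root, os.path.basename(name))
    if not os.path.exists(local) or os.path.getsize(local) != remote_size:
        print("differs:", name)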

How do you change and update files? It is possible to edit the files on the curlftpfs filesystem directly. But at least with emacs, it's incredibly slow: emacs likes to check file modification dates whenever you change anything, and that requires an ftp round-trip so it could be ten or twenty seconds before anything you type actually makes it into the file, with even longer delays any time you save.

So instead, I edit my local copy, and when I'm ready to push to the server, I cp filename /servername/path/to/filename.

Of course, I have aliases and shell functions to make all of this easier to type, especially the long pathnames: I can't rely on autocompletion like I usually would, because autocompleting a file or directory name on /servername requires an ftp round-trip to ls the remote directory.

Oh, and version control? I use a local git repository. Just because the other people working on the website don't want version control is no reason I can't have a record of my own changes.

None of this is as satisfactory as a nice git or svn repository and a good ssh connection. But it's a lot better than struggling with ftp clients every time you need to test a file.

February 23, 2015

Boyhood

Boyhood is stunning. Just like Linklater’s Trilogy, the movie deals with the most sophisticated human emotions through a simple, micro storyline. This time Linklater’s narration follows a boy and his life for 12 years (and it took 12 years in the making). The brilliant making, interesting way of storytelling, and indirect representation of time make the movie awesome. […]

February 22, 2015

Production Report #1

It’s already more than a month passed by since we […]

Ways to improve download page flow

App stores on every platform are getting more popular, and take care of downloads in a consistent and predictable way. Sometimes stores aren’t an option or you prefer not to use them, especially if you’re a Free and Open Source project and/or Linux distribution.

Here are some tips to improve your project’s download page flow. They’re based on confusing things I frequently run into when trying to download a FOSS project, and that I think can be done a lot better.

This is in no way an exhaustive list, but is meant to help as a quick checklist to make sure people can try out your software without being confused or annoyed by the process. I hope it will be helpful.

Project name and purpose

The first thing people will (or should) see. Take advantage of this fact and pick a descriptive name. Avoid technical terms, jargon, and implementation details in the name. Common examples are “-gui”, “-qt”, “gtk-”, and “py-”; they just clutter up names with details that don’t matter.

Describe what your software does, what problem it solves, and why you should care. This sounds like stating the obvious, but this information is often buried in other, less important information, like which programming language and/or free software license is used. Make this section prominent on the website, and keep the buzzwords down.

The fact that the project is Free and Open Source, whilst important, is secondary. Oh, and recursive acronyms are not funny.

Platforms

Try to autodetect as much as possible. Is the visitor running Linux, Windows, or Mac? Which architecture? Make suggestions more prominent, but keep other options open in case someone wants to download a version for a platform other than the one they’re currently using.

Architecture names can be confusing as well: “amd64” and “x86” are labels often used to distinguish between 32-bit and 64-bit systems, however they do a bad job at this. AMD is not the only company making 64-bit processors anymore, and “x86” doesn’t even mention “32-bit”.

Timestamps

Timestamps are a good way to find out if a project is actively maintained, you can’t (usually) tell from a version number when the software was released. Use human friendly date formatting that is unambiguous. For example, use “February 1, 2003” as opposed to “01-02-03”. If you keep a list of older versions, sort by time and clearly mark which is the latest version.

File sizes

Again, keep it human readable. I’ve seen instances where file sizes are reported in bytes (e.g. 209715200 bytes, instead of 200 MB). Sometimes you need to round numbers or use thousands separators when numbers are large to improve readability.

File sizes are mostly there to make rough guesses, and depending on context you don’t need to list them at all. Don’t spend too much time debating whether you should be using MB or MiB.

Integrity verification

Download pages are often littered with checksums and GPG signatures. Not everybody is going to be familiar with these concepts. I do think checking (source) integrity is important, but also think source and file integrity verification should be automated by the browser. There’s no reason for it to be done manually, but there doesn’t seem to be a common way to do this yet.

If you do offer ways to check file and source integrity, add explanations or links to documentation on how to perform these checks. Don’t just dump strange random character strings on pages. Educate, or get out of the way.

Keep in mind search engines may link to the insecure version of your page. Not serving pages over HTTPS at all makes providing signatures checks rather pointless, and could even give a false sense of security.

Compression formats

Again something that should be handled by the browser. Compressing downloads can save a lot of time and bandwidth. Often though, especially on Linux, we’re presented with a choice of compression formats that hardly matter in size (.tar.gz, .tar.bz2, .7z, .xz, .zip).

I’d say pick one. Every operating system supports the .zip format nowadays. The most important lesson here, though, is not to burden people with irrelevant choices and clutter the page.

Mirrors

Detect the closest mirror if possible, instead of letting people pick from a long list. Don’t bother for small downloads, as the time required to pick one is probably going to outweigh the benefit of the increased download speed.

Starting the download

Finally, don’t hide the link in paragraphs of text. Make it a big and obvious button.

February 20, 2015

SVG Working Group Meeting Report — Sydney

The SVG Working Group had a four day face-to-face meeting in Sydney this month. The first day was a joint meeting with the CSS Working Group.

I would like to thank the Inkscape board for funding my travel. This was an expensive trip as I was traveling from Paris and Sydney is an expensive city… but I think it was well worth it as the SVG WG (and CSS WG, where appropriate) approved all of my proposals and worked through all of the issues I raised. Unfortunately, due to the high cost of this trip, I have exhausted the budgeted funding from Inkscape for SVG WG travel this year and will probably miss the two other planned meetings, one in Sweden in June and one in Japan in October. We target the Sweden meeting for moving the SVG 2 specification from Working Draft to Candidate Recommendation so it would be especially good to be there. If anyone has ideas for alternative funding, please let me know.

Highlights:

A summary of selected topics, grouped by day, follows:

Joint CSS and SVG Meeting

Minutes

  • SVG sizing in HTML.

    We spent some time discussing how SVG should be sized in HTML. For corner cases, the browsers disagree on how large an SVG should be displayed. There is going to be a lot of work required to get this nailed down.

  • CSS Filter Effects:

    We spent a lot of time going through and resolving the remaining issues in the CSS Filter Effects specification. (This is basically SVG 1.1 filters repackaged for use by HTML with some extra syntax sugar coating.) We then agreed to publish the specification as a Candidate Recommendation.

  • CSS Blending:

    We discussed publishing the CSS Blending specification as a Recommendation, the final step in creating a specification. I raised a point that most of the tests assumed HTML content. It was requested that more SVG-specific tests be created. (Part of the requirement for Recommendation status is that there be a test suite and that two independently developed renderers pass each test in the suite.)

  • SVG in OpenType, Color Palettes:

    The new OpenType specification allows for multi-colored SVG glyphs. It would be nice to set those colors through CSS. We discussed several methods for doing so and decided on one method. It will be added to the CSS Fonts Level 4 specification.

  • Text Rendering:

    The ‘text-rendering’ property gives renderers a hint on what speed/precision trade-offs should be made. It was pointed out that the layout of text flowed into a box will change as one zooms in and out on a page in Firefox due to font-hinting, font-size rounding, etc. The Google docs people would like to prevent this. It was decided that the ‘geometricPrecision’ value should require that font-metrics and text-measurement be independent of device resolution and zoom level. (Note: this property is defined in SVG but both Firefox and Chrome support it on HTML content.)

  • Text Properties:

    Text in SVG 2 relies heavily on CSS specifications that are in various states of readiness. I asked the CSS/SVG groups what is the policy for referencing these specs. In particular, SVG 2 needs to reference the CSS Shapes Level 2 specification in order to implement text wrapping inside of SVG shapes. The CSS WG agreed to publish CSS Shapes Level 2 as a Working Draft so we can reference it. We also discussed various technical issues in defining how text wraps around excluded areas and in flowing text into more than one shape.

SVG Day 1

Minutes

  • CamelCase Names

    The SVG WG decided some time ago to avoid new CamelCase names like ‘LinearGradient’ which cause problems with integration in HTML (HTML is case insensitive and CamelCase SVG names must be added by hand to HTML parsers). We went through the list of new CamelCase names in SVG 2 and decided which ones could be changed, weighing arguments for consistency against the desire to not introduce new CamelCase names. It was decided that <meshGradient> should be changed to <mesh>. This was mostly motivated by the ability to use a mesh as a standalone entity (and not only as a paint server). Other changes include: <hatchPath> to <hatchpath>, <solidColor> to <solidcolor>, …

  • Requiring <foreignObject> HTML to be rendered.

    There was a proposal to require any HTML content in a <foreignObject> element to be rendered. I pointed out that not all SVG renderers are HTML renderers (Inkscape as an example). It was decided to have separate conformance classes, one requiring HTML content to be rendered and one not.

  • Requiring Style Sheets Support:

    It was decided to require style sheet support. We discussed what kind of style sheets to require. We decided to require basic style sheet support at the CSS 1 or CSS 2.1 level (that part of the discussion was not minuted).

  • Open Issues:

    We spent considerable time going through the specification chapter by chapter looking at open issues that would block publishing the specification as a Candidate Recommendation. This was a long multi-day process.

SVG Day 2

Minutes

Note: Day 2 and Day 3 minutes are merged.

  • Superpaths:

    Superpaths is the name for the ability to reuse path segment data. This is useful, for example, to define the boundary between two shapes just once, reusing the path segment for both shapes. SVG renderers might be able to exploit this information to provide better anti-aliasing between two shapes knowing they share a common border. The SVG WG endorses this proposal but it probably won’t be ready in time for SVG 2. Instead, it will be developed in a separate Path enhancement module.

  • Line-Join: Miter Clipped:

    It was proposed on the SVG mailing list that there be a new behavior for the miter ‘line-join’ value in regards to the ‘miter-limit’ property. At the moment, if a miter produces a join that extends farther than the ‘miter-limit’ value then the miter type is changed to bevel. This causes abrupt jumps when the angle between the joined lines changes such that the miter length crosses over the ‘miter-limit’ value (see demo). A better solution is to clip the line join at the ‘miter-limit’. This is done by some rendering libraries including the one used on Windows. We decided to create a new value for ‘line-join’ with this behavior.
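    For reference, the relation that drives those jumps (this is how SVG 1.1 defines the miter ratio):

    \[ \frac{\text{miterLength}}{\text{stroke-width}} = \frac{1}{\sin(\theta/2)} \]

    As the angle θ between the joined segments shrinks, the miter length grows without bound, so it must eventually cross any fixed ‘miter-limit’ - hence the abrupt switch to bevel described above.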

  • Auto-Path Closing:

    The ‘z’ path command closes paths by drawing a line segment to the first point in the path. This is fine if the path is made up of straight lines but becomes problematic if the path is made up of curves. For example, it can cause rendering problems for markers as there will be an extra line segment between the start and end of the path. If the last point is exactly on top of the first point, one can remove this closing line segment but this isn’t always possible, especially if one is using the relative path commands with rounding errors. A more detailed discussion can be found here. We decided to allow a ‘z’ command to fill in missing point data using the first point in the path. For example in: d=”m 100,125 c 0,-75 100,-75 100,0 c 0,75 -100,75 z” the missing point of the second Bezier curve is filled in by the first point in the path.

  • Text on a Shape:

    An Inkscape developer has been working on putting text on a shape by converting shapes to paths while storing the original shape in the <defs> section. It would be much easier if SVG just allowed text on a shape. I proposed that we include this in SVG 2. This is actually quite easy to specify as we have already defined how shapes are converted to paths (needed by markers on shapes and putting dash patterns on shapes). A couple minor points needed to be decided: Do we allow negative path offsets? (Yes) How do we decide which side of a path the text should be put? (A new attribute) The SVG WG approved adding text on a shape to SVG 2.

  • Marker knockouts, mid-markers, etc:

    A number of new marker features still need some work. To facilitate finishing SVG 2 we decided to move them to a separate specification. There is some hesitation to do so as there is fear that once removed from the main SVG specification they will be forgotten about. This will be a trial of how well separating parts of SVG 2 into separates specifications works. The marker knockout feature, very useful for arrowheads is one feature moved into the new specification. On day 3 we approved publishing the new Markers Level 1 specification as a First Public Working Draft.

  • Text properties:

    With our new reliance on CSS for text layout, just what CSS properties should SVG 2 support? We don’t want to necessarily list them all in the SVG 2 specification as the list could change as CSS adds new properties. We decided that we should support all paragraph level properties (‘text-indent’, ‘text-justification’, etc.). We’ll ask the CSS working group to create a definition for CSS paragraph properties that we can then reference.

  • Text ‘dx’, ‘dy’, and ‘rotate’ attributes:

    SVG 1.1 has the ‘dx’, ‘dy’, and ‘rotate’ attributes that allow individual glyphs to be shifted and rotated. While not difficult to support on auto-wrapped text (they would be applied after CSS text layout), we decided that they weren’t really needed. They can still be used on SVG 1.1 style text (which is still part of SVG 2).

SVG Day 3

Minutes

Note: Day 3 minutes are at end of Day 2 minutes.

  • Stroking Enhancements:

    As part of trying to push SVG 2 quickly, we decided to move some of the stroking enhancements that still need work into a separate specification. This includes better dashing algorithms (such as controlling dash position at intersections) and variable width strokes. We agreed to the publication of SVG Strokes as a First Public Working Draft.

  • Smoothing in Mesh Gradients:

    Coons-Patch mesh gradients have one problem: the color profile at the boundary between patches is not always smooth. This leads to visible artifacts which are enhanced by Mach Banding. I’ve discussed this in more detail here. I proposed to the SVG WG that we include the option of auto-smoothing meshes using monotonic-bicubic interpolation. (There is an experimental implementation in Inkscape trunk which I demonstrated to the group.) The SVG WG accepted my proposal.

  • Motion Path:

    SVG has the ability to animate a graphical object along a path. This ability is desired for HTML. The SVG and CSS working groups have produced a new specification, Motion Path Module Level 1, for this purpose. We agreed to publish the specification as a First Public Working Draft.

February 19, 2015

Finding core dump files

Someone on the SVLUG list posted about a shell script he'd written to find core dumps.

It sounded like a simple task -- just locate core | grep -w core, right? I mean, any sensible packager avoids naming files or directories "core" for just that reason, don't they?

But not so: turns out in the modern world, insane numbers of software projects include directories called "core", including projects that are developed primarily on Linux so you'd think they would avoid it ... even the kernel. On my system, locate core | grep -w core | wc -l returned 13641 filenames.

Okay, so clearly that isn't working. I had to agree with the SVLUG poster that using "file" to find out which files were actual core dumps is now the only reliable way to do it. The output looks like this:

$ file core
core: ELF 32-bit LSB core file Intel 80386, version 1 (SYSV), too many program headers (375)

The poster was using a shell script, but I was fairly sure it could be done in a single shell pipeline. Let's see: you need to run locate to find any files with "core" in the name.

Then you pipe it through grep to make sure the filename is actually core: since locate gives you a full pathname, like /lib/modules/3.14-2-686-pae/kernel/drivers/edac/edac_core.ko or /lib/modules/3.14-2-686-pae/kernel/drivers/memstick/core, you want lines where only the final component is core -- so core has a slash before it and an end-of-line (in grep that's denoted by a dollar sign, $) after it. So grep '/core$' should do it.

Then take the output of that locate | grep and run file on it, and pipe the output of that file command through grep to find the lines that include the phrase 'core file'.

That gives you lines like

/home/akkana/geology/NorCal/pinnaclesGIS/core: ELF 32-bit LSB core file Intel 80386, version 1 (SYSV), too many program headers (523)

But those lines are long and all you really need are the filenames; so pass it through sed to get rid of anything to the right of "core" followed by a colon.

Here's the final command:

file `locate core | grep '/core$'` | grep 'core file' | sed 's/core:.*/core/'

On my system that gave me 11 files, and they were all really core dumps. I deleted them all.
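If the backtick quoting makes you nervous, or the file list grows past the shell's argument limit, the same hunt fits in a few lines of Python that shell out to the same tools:

import subprocess

paths = subprocess.check_output(["locate", "core"]).decode().splitlines()

for path in (p for p in paths if p.endswith("/core")):
    # Ask file(1) whether this is a real core dump, just like the pipeline above
    if b"core file" in subprocess.check_output(["file", path]):
        print(path)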

Guy-In-Black conceptwork continued

Anastasia Majzhegisheva follows the Nikolai’s wor […]

February 18, 2015

OpenRaster Python Plugin


Early in 2014, version 0.0.2 of the OpenRaster specification added a requirement that each file should include a full size pre-rendered image (mergedimage.png) so that other programs could more easily view OpenRaster files. [Developers: if your program can open a zip file and show a PNG you could add support for viewing OpenRaster files.*]
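To make that bracketed aside concrete, here is roughly all a viewer needs - a sketch, with a hypothetical file name:

import zipfile

# An OpenRaster (.ora) file is a zip archive; since spec 0.0.2 it must carry
# a full-size, pre-rendered mergedimage.png alongside the layer data.
with zipfile.ZipFile("painting.ora") as ora:
    data = ora.read("mergedimage.png")

with open("preview.png", "wb") as out:
    out.write(data)  # hand this PNG to any existing image-loading code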

The GNU Image Manipulation Program includes a python plugin for OpenRaster support, but it did not yet include mergedimage.png, so I made the changes myself. You do not need to wait for the next release, or for your distribution to eventually package that release; you can benefit from this change immediately. If you are using the GNU Image Manipulation Program version 2.6 you will need to make sure you have support for python plugins included in your version (if you are using Windows you won’t), and if you are using version 2.8 it should already be included.

It was only a small change, but working with Python and not having to wait for code to compile makes it so much easier.

* Although it would probably be best if viewer support was added at the toolkit level, so that many applications could benefit.
[Edit: Updated link]

Wed 2015/Feb/18

  • Integer overflow in librsvg

    Another bug that showed up through fuzz-testing in librsvg was due to an overflow during integer multiplication.

    SVG supports using a convolution matrix for its pixel-based filters. Within the feConvolveMatrix element, one can use the order attribute to specify the size of the convolution matrix. This is usually a small value, like 3 or 5. But what did fuzz-testing generate?

    <feConvolveMatrix order="65536">

    That would be an evil, slow convolution matrix in itself, but in librsvg it caused trouble not because of its size, but because C sucks.

    The code had something like this:

    struct _RsvgFilterPrimitiveConvolveMatrix {
        ...
        double *KernelMatrix;
        ...
        gint orderx, ordery;
        ...
    };
    	      

    The values for the convolution matrix are stored in KernelMatrix, which is just a flattened rectangular array of orderx × ordery elements.

    The code tries to be careful in ensuring that the array with the convolution matrix is of the correct size. In the code below, filter->orderx and filter->ordery have both been set to the dimensions of the array, in this case, both 65536:

    guint listlen = 0;
    
    ...
    
    if ((value = rsvg_property_bag_lookup (atts, "kernelMatrix")))
        filter->KernelMatrix = rsvg_css_parse_number_list (value, &listlen);
    
    ...
    
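    /* With order="65536", filter->orderx * filter->ordery is evaluated in
       32-bit int arithmetic and wraps around to 0, so the comparison below
       checks listlen against the wrong value and never sanitizes the bogus
       dimensions. */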
    if ((gint) listlen != filter->orderx * filter->ordery)
        filter->orderx = filter->ordery = 0;
    	    

    Here, the code first parses the kernelMatrix number list and stores its length in listlen. Later, it compares listlen to orderx * ordery to see if the KernelMatrix array has the correct length. Both filter->orderx and ordery are of type int. Later, the code iterates through the values in filter->KernelMatrix when doing the convolution, and doesn't touch anything if orderx or ordery are zero. Effectively, when those values are zero it means that the array is not to be touched at all — maybe because the SVG is invalid, as in this case.

    But in the bug, the orderx and ordery are not being sanitized to be zero; they remain at 65536, and the KernelMatrix gets accessed incorrectly as a result. Let's see what happens when you multiply 65536 by itself with ints.

    (gdb) p (int) 65536 * (int) 65536
    $1 = 0
    	    

    Well, of course — the result doesn't fit in 32-bit ints. Let's use 64-bit ints instead:

    (gdb) p (long long) 65536 * 65536
    $2 = 4294967296
    	    

    Which is what one expects.

    What is happening with C? We'll go back to the faulty code and get a disassembly (I recompiled this without optimizations so the code is easy):

    $ objdump --disassemble --source .libs/librsvg_2_la-rsvg-filter.o
    ...
        if ((gint) listlen != filter->orderx * filter->ordery)
        4018:       8b 45 cc                mov    -0x34(%rbp),%eax    
        401b:       89 c2                   mov    %eax,%edx           %edx = listlen
        401d:       48 8b 45 d8             mov    -0x28(%rbp),%rax
        4021:       8b 88 a8 00 00 00       mov    0xa8(%rax),%ecx     %ecx = filter->orderx
        4027:       48 8b 45 d8             mov    -0x28(%rbp),%rax
        402b:       8b 80 ac 00 00 00       mov    0xac(%rax),%eax     %eax = filter->ordery
        4031:       0f af c1                imul   %ecx,%eax
        4034:       39 c2                   cmp    %eax,%edx
        4036:       74 22                   je     405a <rsvg_filter_primitive_convolve_matrix_set_atts+0x4c6>
            filter->orderx = filter->ordery = 0;
        4038:       48 8b 45 d8             mov    -0x28(%rbp),%rax
        403c:       c7 80 ac 00 00 00 00    movl   $0x0,0xac(%rax)
        4043:       00 00 00 
        4046:       48 8b 45 d8             mov    -0x28(%rbp),%rax
        404a:       8b 90 ac 00 00 00       mov    0xac(%rax),%edx
        4050:       48 8b 45 d8             mov    -0x28(%rbp),%rax
        4054:       89 90 a8 00 00 00       mov    %edx,0xa8(%rax)
    	    

    The highlighted lines do the multiplication of filter->orderx * filter->ordery and the comparison against listlen. The imul operation overflows and gives us 0 as a result, which is of course wrong.

    Let's look at the overflow in slow motion. We'll set a breakpoint in the offending line, disassemble, and look at each instruction.

    Breakpoint 3, rsvg_filter_primitive_convolve_matrix_set_atts (self=0x69dc50, ctx=0x7b80d0, atts=0x83f980) at rsvg-filter.c:1276
    1276        if ((gint) listlen != filter->orderx * filter->ordery)
    (gdb) set disassemble-next-line 1
    (gdb) stepi
    
    ...
    
    (gdb) stepi
    0x00007ffff7baf055      1276        if ((gint) listlen != filter->orderx * filter->ordery)
       0x00007ffff7baf03c <rsvg_filter_primitive_convolve_matrix_set_atts+1156>:    8b 45 cc        mov    -0x34(%rbp),%eax
       0x00007ffff7baf03f <rsvg_filter_primitive_convolve_matrix_set_atts+1159>:    89 c2   mov    %eax,%edx
       0x00007ffff7baf041 <rsvg_filter_primitive_convolve_matrix_set_atts+1161>:    48 8b 45 d8     mov    -0x28(%rbp),%rax
       0x00007ffff7baf045 <rsvg_filter_primitive_convolve_matrix_set_atts+1165>:    8b 88 a8 00 00 00       mov    0xa8(%rax),%ecx
       0x00007ffff7baf04b <rsvg_filter_primitive_convolve_matrix_set_atts+1171>:    48 8b 45 d8     mov    -0x28(%rbp),%rax
       0x00007ffff7baf04f <rsvg_filter_primitive_convolve_matrix_set_atts+1175>:    8b 80 ac 00 00 00       mov    0xac(%rax),%eax
    => 0x00007ffff7baf055 <rsvg_filter_primitive_convolve_matrix_set_atts+1181>:    0f af c1        imul   %ecx,%eax
       0x00007ffff7baf058 <rsvg_filter_primitive_convolve_matrix_set_atts+1184>:    39 c2   cmp    %eax,%edx
       0x00007ffff7baf05a <rsvg_filter_primitive_convolve_matrix_set_atts+1186>:    74 22   je     0x7ffff7baf07e <rsvg_filter_primitive_convolve_matrix_set_atts+1222>
    (gdb) info registers
    rax            0x10000  65536
    rbx            0x69dc50 6937680
    rcx            0x10000  65536
    rdx            0x0      0
    ...
    eflags         0x206    [ PF IF ]
    	    

    Okay! So, right there, the code is about to do the multiplication. Both eax and ecx, which are 32-bit registers, have 65536 in them — you can see the 64-bit "big" registers that contain them in rax and rcx.

    Type "stepi" and the multiplication gets executed:

    (gdb) stepi
    0x00007ffff7baf058      1276        if ((gint) listlen != filter->orderx * filter->ordery)
       0x00007ffff7baf03c <rsvg_filter_primitive_convolve_matrix_set_atts+1156>:    8b 45 cc        mov    -0x34(%rbp),%eax
       0x00007ffff7baf03f <rsvg_filter_primitive_convolve_matrix_set_atts+1159>:    89 c2   mov    %eax,%edx
       0x00007ffff7baf041 <rsvg_filter_primitive_convolve_matrix_set_atts+1161>:    48 8b 45 d8     mov    -0x28(%rbp),%rax
       0x00007ffff7baf045 <rsvg_filter_primitive_convolve_matrix_set_atts+1165>:    8b 88 a8 00 00 00       mov    0xa8(%rax),%ecx
       0x00007ffff7baf04b <rsvg_filter_primitive_convolve_matrix_set_atts+1171>:    48 8b 45 d8     mov    -0x28(%rbp),%rax
       0x00007ffff7baf04f <rsvg_filter_primitive_convolve_matrix_set_atts+1175>:    8b 80 ac 00 00 00       mov    0xac(%rax),%eax
       0x00007ffff7baf055 <rsvg_filter_primitive_convolve_matrix_set_atts+1181>:    0f af c1        imul   %ecx,%eax
    => 0x00007ffff7baf058 <rsvg_filter_primitive_convolve_matrix_set_atts+1184>:    39 c2   cmp    %eax,%edx
       0x00007ffff7baf05a <rsvg_filter_primitive_convolve_matrix_set_atts+1186>:    74 22   je     0x7ffff7baf07e <rsvg_filter_primitive_convolve_matrix_set_atts+1222>
    (gdb) info registers
    rax            0x0      0
    rbx            0x69dc50 6937680
    rcx            0x10000  65536
    rdx            0x0      0
    eflags         0xa07    [ CF PF IF OF ]
    	    

    Kaboom. The register eax (inside rax) now is 0, which is the (wrong) result of the multiplication. But look at the flags! There is a big fat OF flag, the overflow flag! The processor knows! And it tries to tell us... with a single bit... that the C language doesn't bother to check!

    (The solution in the code, at least for now, is simple enough — use gint64 for the actual operations so the values fit. It should probably set a reasonable limit for the size of convolution matrices, too.)
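
    As a minimal sketch of that kind of fix (illustrative only; the names and shape are mine, not the actual librsvg patch), the idea is to do the arithmetic in 64 bits:

    #include <glib.h>

    /* Widen to 64 bits before multiplying, so 65536 * 65536 cannot wrap
     * around in a 32-bit int.  GCC and Clang also provide
     * __builtin_mul_overflow() when an explicit check is preferred. */
    static gboolean
    matrix_size_matches (guint listlen, gint orderx, gint ordery)
    {
        if (orderx <= 0 || ordery <= 0)
            return FALSE;

        return (gint64) listlen == (gint64) orderx * (gint64) ordery;
    }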

    So, could anything do better?

    Scheme uses exact arithmetic if possible, so (* MAXLONG MAXLONG) doesn't overflow, but gives you a bignum without you doing anything special. Subsequent code may go into the slow case for bignums when it happens to use that value, but at least you won't get garbage.

    I think Python does the same, at least for integer values (Scheme goes further and uses exact arithmetic for all rational numbers, not just integers).

    C# lets you use checked operations, which will throw an exception if something overflows. This is not the default — the default is "everything gets clipped to the operand size", like in C. I'm not sure if this is a mistake or not. The rest of the language has very nice safety properties, and it lets you "go fast" if you know what you are doing. Operations that overflow by default, with opt-in safety, seem contrary to this philosophy. On the other hand, the language will protect you if you try to do something stupid like accessing an array element with a negative index (... that you got from an overflowed operation), so maybe it's not that bad in the end.

February 17, 2015

Reanimation of MacBook Air

For some months our MacBook Air was broken. Finally a good time to replace it, I thought. On the other hand, the old notebook was quite useful even 6 years after purchase. Coding on the road, web surfing, SVG/PDF presentations and so on worked fine on the Core2Duo device from 2008. The first symptoms of breakage were video errors on a DVI-connected WUXGA/HDTV+ sized display. The error looked like unstable frequency handling, with the upper scan lines being visually OK and the lower ones wobbling to the right. A black desktop background with a small window was sometimes a workaround. This notebook type uses an Nvidia 9400M on the logic board. Another, non-portable computer of mine which uses Nvidia 9300 Go on-board graphics runs without such issues, so I saw no reason to worry about the type of graphics chip. Later on, the notebook stopped completely, even without an attached external display. It gave the well-known one beep every 5 seconds during startup. On MacBook Pros and Airs this symptom usually means broken RAM.

The RAM is soldered directly onto the logic board, and replacing it at Apple appeared prohibitively expensive. As I began to look around to sell the broken hardware to hobbyists, I found an article talking about these early MacBook Airs. This specific one is a 2.1 rev A 2.13 GHz. The article mentioned that early devices suffered from lead-free solder, which is somewhat worse in regards to ductility than conventional solder. The result was that many of these devices suffered electrical disconnections in their circuitry over the course of warming and cooling and the related thermal expansion and contraction. The device showed the one-beep symptom on startup without booting. An Apple engineer was unofficially cited as suggesting that putting the logic board in an oven at around 100° Celsius for a few minutes could be enough to solve the issue. That sounded worth a try to me. As I love to open up devices to look inside and eventually repair them, taking my time to dismount the logic board instead of bringing it to a repair service was fine with me. But be warned: doing so can be difficult for beginners. I placed the board on some wool in the oven at 120° and after 10 minutes, plus some more for reassembly, the laptop started to work again. I am not sure if the soldering issue is really solved now or if the symptoms will come back. I guess that some memory chips on the board were reset and stopped reporting that the RAM is broken. So my device works again and will keep us happy for a while – I hope.

February 16, 2015

Old projects, new images

We usually make 3D images of old projects for some of our clients, to give their websites a bit of a refresh, and we don't do it for ourselves? No sir, no more! Here is a bit of a revamp on two oldies but goodies of our projects, Casa GL and the PACE ONG. ...

KMZ Zorki 4 (Soviet Rangefinder)

The Leica rangefinder

Rangefinder-type cameras predate modern single lens reflex cameras. People still use them; it's just a different way of shooting. Since they're no longer a mainstream type of camera, most manufacturers stopped making them a long time ago. Except Leica: Leica still makes digital and film rangefinders and, as you might guess, they come at significant cost. Even old Leica film rangefinders easily cost upwards of € 1000. While Leica certainly wasn't the only brand to manufacture rangefinders throughout photographic history, it was (and still is) certainly the most iconic rangefinder brand.

The Zorki rangefinder

Now, the Soviets essentially tried to copy Leica's cameras; the result, the Zorki series of cameras, was produced at KMZ. Many different versions exist, and with nearly 2 million cameras produced across more than 15 years, the Zorki-4 was without a doubt its most popular incarnation. Many consider the Zorki-4 to be the one where the Soviets got it (mostly) right.

That said, the Zorki-4 vaguely looks like a Leica M with its single coupled viewfinder/rangefinder window. In most other ways it's more like a pre-M Leica, with its 39mm LTM lens screw mount. Earlier Zorki-4s have a body finished with vulcanite, which is tough as nails but, if damaged, very difficult to fix or replace. Later Zorki-4s have a body finished with relatively cheap leatherette, which is much more easily damaged and commonly starting to peel off, but should be relatively easy to make better than new. Most Zorkis come with either a Jupiter-8 50mm f/2.0 lens (a Zeiss Sonnar inspired design) or an Industar-50 50mm f/3.5 (a Zeiss Tessar inspired design). I'd highly recommend getting a Zorki-4 with a Jupiter-8 if you can find one.

Buying a Zorki rangefinder with a Jupiter lens

If you're looking to buy a Zorki there are a few things to be aware of. Zorkis were produced during the fifties, sixties and seventies in Soviet Russia, often favoring quantity over quality, presumably to be able to meet quotas. The same is likely true for most Soviet optics as well. So they are both old and may not have met the highest quality standards to begin with. When buying a Zorki you need to keep in mind it might need repairs and a CLA (clean, lube, adjust). My particular Zorki had a dim viewfinder because of dirt both inside and out, the shutter speed dial was completely stuck at 1/60th of a second, and the film takeup spool was missing. I sent my Zorki-4 and Jupiter-8 to Oleg Khalyavin for repairs, shutter curtain replacement and a CLA. Oleg was also able to provide me with a replacement film takeup spool or two. All in all, having work done on your Zorki will easily set you back about € 100 including significant shipping expenses. Keep this in mind before buying. And even if you get your Zorki in a usable state, you'll probably have to have it serviced at some point. You may very well want to consider having it serviced sooner rather than later, allowing yourself the benefit of enjoying a newly serviced camera.

Complementary accessories

Zorkis usually come without a lens hood, and the Jupiter-8's glass elements are said to be only single coated, so a lens hood isn't exactly a luxury. A suitable aftermarket lens hood isn't hard to find though.

While my Zorki did come with its original clumsy (and in my case stinky) leather carrying case, it doesn't come with a regular camera strap. Matin's Deneb-12LN leather strap can be an affordable but stylish companion to the Zorki. The strap is relatively short, but it's long enough to wear around your neck or arm. It's also fairly stiff when brand new, but it will loosen up after a few days of use.

To some it might seem as if the Zorki has a hot shoe, but it doesn't. It's actually a cold shoe, merely intended as an accessory mount, and since it's all metal, a flash connected via PC sync is likely to be permanently shorted. To mount a regular hot shoe flash you will need a hot shoe adapter, both for isolation and for PC sync connectivity.

Choosing a film stock

So now you have a nice Zorki-4, waiting for film to be loaded into it. As of this writing (2015) there is a smörgåsbord of film available. I like shooting black & white, and I often shoot Ilford XP2 Super 400. Ilford's XP2 is the only B&W film left that's meant to be processed along with color print film in regular C41 chemicals (so it can be processed by a one-hour-photo service, if you're lucky enough to still have one of those around). Like most color print film, XP2 has a big exposure latitude, remaining usable between ISO 50 — 800, which isn't a luxury since the Zorki-4 is not equipped with a built-in light meter. While Ilford recommends shooting it at ISO 400, I'd suggest shooting it as if it were ISO 200 film, giving you two stops of both underexposure and overexposure leeway.

Duckies

I haven’t shot any real color print film yet in the Zorki, but Kodak New Portra 400 quickly comes to mind. An inexpensive alternative could possibly be Fuji Superia X-TRA 400, which can be found very cheaply as most store-brand 400 speed color print film.

Shooting with a Zorki rangefinder

Once you have a Zorki, there are still some caveats to be aware of… Most importantly, don't change shutter speeds while the shutter isn't cocked (cocking the shutter is done by advancing the film); not heeding this warning may damage the camera's internal mechanisms. Other notable issues of lesser importance are minding the viewfinder's parallax error (particularly when shooting at short distances) and making sure you load the film straight; I've managed to load film at a slight angle a couple of times already.

As I've mentioned, the Zorki-4 does not come with a built-in light meter, which means the camera won't be helping you get the exposure right: you are on your own. You could use a pricy dedicated light meter (or a less pricy smartphone app, which may or may not work well on your particular phone), either of which is fairly cumbersome. Considering XP2's wide exposure latitude, an educated guesswork approach becomes feasible. There's a rule of thumb called Sunny 16 for making educated guesstimates of exposure in outdoor environments. Sunny 16 states that if you set your shutter speed to the closest reciprocal of your film speed, bright sunny daylight requires an aperture of f/16 to get a decent exposure. Other weather conditions require opening up the aperture according to this table:


Sunny               f/16
Slightly Overcast   f/11
Overcast            f/8
Heavy Overcast      f/5.6
Open Shade          f/4

If you have doubts when classifying shooting conditions, you may want to err on the side of overexposure, as color print film tends to prefer overexposure over underexposure. If you're shooting slide film you should probably avoid using Sunny 16 altogether, as slide film can be very unforgiving if improperly exposed. Additionally, you can manually read a film canister's DX CAS code to see what a film's minimum exposure tolerance is.

Quick example: when shooting XP2 on an overcast day, assuming an alternate base ISO of 200 (as suggested earlier), the shutter speed should be set at 1/250th of a second and the aperture at f/8, giving a fairly large depth of field. If we want to reduce the depth of field we can trade +2 stops of aperture for -2 stops of shutter speed, ending up shooting at 1/1000th of a second at f/4.
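
To make the stop-trading arithmetic concrete, here is a small illustrative C sketch (my own toy example, not from any photographic tool; the condition table and the ISO 200 assumption are the ones above):

#include <stdio.h>

/* Sunny 16 starting apertures, matching the table above. */
static const double apertures[] = { 16.0, 11.0, 8.0, 5.6, 4.0 };
static const char *conditions[] = {
    "Sunny", "Slightly Overcast", "Overcast", "Heavy Overcast", "Open Shade"
};

int main(void)
{
    int iso = 200;     /* shooting XP2 as if it were an ISO 200 film */
    int weather = 2;   /* index into the table above: Overcast */

    /* Shutter speed starts at the reciprocal of the film speed;
     * in practice you round to the nearest real speed (1/200 -> 1/250). */
    printf("%s: f/%.1f at roughly 1/%d s\n",
           conditions[weather], apertures[weather], iso);

    /* Opening up two stops (f/8 -> f/4) needs a shutter speed four
     * times as fast to keep the same exposure: 1/250 -> 1/1000. */
    printf("Two stops wider: f/%.1f at four times the shutter speed\n",
           apertures[weather + 2]);
    return 0;
}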

Having film processed

After shooting a roll of XP2 (or any roll of color print film) you need to take it to a local photo shop, chemist or supermarket to have it processed, scanned and printed. Usually you'll be able to have your film processed in C41 chemicals, scanned to CD and get a set of small prints for about € 15 or so. Keep in mind that most shops, if left to their own devices, will cut your film roll into strips of 4, 5 or 6 negatives, depending on the type of protective sleeves they use. Some shops might not offer scanning services without ordering prints, since scanning may be considered a byproduct of the printmaking process. The resulting JPEG scans are usually about 2 megapixels (1800×1200), or sometimes slightly less (1536×1024). A particular note when using XP2: since it's processed as if it were color print film, it's usually also scanned as if it were color print film, so the resulting should-be-monochrome scans (and prints, for that matter) can often have a slight color cast. This color cast varies; my particular local lab usually does a fairly decent job, where the scans have a subtle color cast which isn't too unpleasant, but I've heard about nastier, heavier color casts as well. Regardless, keep in mind that you might need to convert the scans to proper monochrome manually, which can be done easily with any random photo editing software in a heartbeat. The same goes for rotating the images: aside from the usual 90 degree turns, occasionally I get my images scanned upside down, where they need either 180 degree or 270 degree turns, and you'll likely need to do that yourself as well.

Post-processing the scans

Generally speaking, I personally like preprocessing my scanned images using some scripted command line tools before importing them into an image management program such as Shotwell.

First I remove all useless data from the source JPEG and, particularly for black and white film like XP2, remove the JPEG's chroma channels, to losslessly remove any color cast (avoiding generational loss):

$ jpegtran -copy none -grayscale -optimize -perfect ORIGINAL.JPG > OUTPUT.JPG

Using the clean image we previously created as a base, we can then add basic EXIF metadata:

$ exiv2 \
   -M"set Exif.Image.Artist John Doe" \
   -M"set Exif.Image.Make KMZ" \
   -M"set Exif.Image.Model Zorki-4" \
   -M"set Exif.Image.ImageNumber \
      $(echo ORIGINAL.JPG | tr -cd '0-9' | sed 's#^0*##g')" \
   -M"set Exif.Image.Orientation 0" \
   -M"set Exif.Image.XResolution 300/1" \
   -M"set Exif.Image.YResolution 300/1" \
   -M"set Exif.Image.ResolutionUnit 2" \
   -M"set Exif.Photo.DateTimeDigitized \
      $(stat --format="%y" ORIGINAL.JPG | awk -F '.' '{print $1}' | tr '-' ':')" \
   -M"set Exif.Photo.UserComment Ilford XP2 Super" \
   -M"set Exif.Photo.ExposureProgram 1" \
   -M"set Exif.Photo.ISOSpeedRatings 400" \
   -M"set Exif.Photo.FocalLength 50/1" \
   -M"set Exif.Image.MaxApertureValue 20/10" \
   -M"set Exif.Photo.LensMake KMZ" \
   -M"set Exif.Photo.LensModel Jupiter-8" \
   -M"set Exif.Photo.FileSource 1" \
   -M"set Exif.Photo.ColorSpace 1" \
   OUTPUT.JPG

As I previously mentioned, I tend to get my scans back upside down, which is why I usually set the Orientation tag to 3 (180 degree turn). Other useful values are 1 (normal, do nothing), 6 (rotate 90 degrees clockwise) and 8 (rotate 270 degrees clockwise).

Keeping track

When you’re going to shoot a lot of film it can become a bit of a challenge keeping track of the various rolls of film you may have at an arbitrary point in your workflow. FilmTrackr has you covered.

Manual

You can find a scanned manual for the Zorki-4 rangefinder camera on Mike Butkus’ website.

Moar

If you want to read more about film photography you may want to consider adding Film Is Not Dead and Hot Shots to your bookshelf.

February 14, 2015

The Sangre de Cristos wish you a Happy Valentine's Day

[Snow hearts on the Sangre de Cristo mountains]

The snow is melting fast in the lovely sunny weather we've been having; but there's still enough snow on the Sangre de Cristos to see the dual snow hearts on the slopes of Thompson Peak above Santa Fe, wishing everyone for miles around a happy Valentine's Day.

Dave and I are celebrating for a different reason: yesterday was our 1-year anniversary of moving to New Mexico. No regrets yet! Even after a tough dirty work session clearing dead sage from the yard.

So Happy Valentine's Day, everyone! Even if you don't put much stock in commercial Hallmark holidays. As I heard someone say yesterday, "Valentine's day is coming up, and you know what that means. That's right: absolutely nothing!"

But never mind what you may think about the holiday -- you just go ahead and have a happy day anyway, y'hear? Look at whatever pretty scenery you have near you; and be sure to enjoy some good chocolate.



Dear lazyweb,

I am now using a very silent MacBook Air (with external monitor, keyboard and trackpad) as my desktop, and connect to remote boxes for (CPU-intensive) software builds. (One of those "remote" boxes is actually a relatively low-noise (i.e. big fan) Dell workstation under my desk.)

Will the noise become unbearable if I get a 27in iMac (SSD-only) and fully load its CPU a significant part of the day right in front of my eyes and ears?

February 13, 2015

16F1454 RA4 input only

To save someone else a wasted evening: RA4 on the Microchip PIC 16F1454 is an input-only pin, not I/O as the datasheet suggests. In other news, I've prototyped the ColorHug ALS on a breadboard (which, it turns out, was a good idea!) and the PCB is now even smaller. 12x19mm is about as small as I can go…

OpenRaster and OpenDocument: Metadata

OpenRaster is a file format for the exchange of layered images, and is loosely based on the OpenDocument standard. I previously wrote about how a little extra XML can make a file that is both OpenRaster and OpenDocument compatible. The OpenRaster specification is small and relatively simple, but it does not do everything, so what happens if a developer wants to do something not covered by the standard? What if you want to include metadata?

How about doing it the same way as OpenDocument? It does not have to be complicated. OpenDocument already cleverly reused the existing Dublin Core (dc) standard for metadata, and includes a file called meta.xml in the zip container. That's a good idea worth borrowing; a simplified example file follows:

Sample OpenDocument Metadata[Pastebin]

(if you can't see the XML here directly, see the link to Pastebin instead.)

I extended the OpenRaster code in Pinta to support metadata in this way. This is the easy part; it gets more complicated if you want to do more than import and export within the same program. As before, the resulting file can be renamed from .ora to .odg and opened using OpenOffice*, allowing you to view the image and the metadata too. The code, in Pinta's OraFormat.cs, is freely available on GitHub under the same license (MIT X11) as Pinta. The relevant sections are "ReadMeta" and "GetMeta". A Properties dialog and other code were also added, and I've edited a screenshot of Pinta to show both the menu and the dialog at the same time:

[* OpenOffice 3 is quite generous, and opens the file without complaint. LibreOffice 4 is far less forgiving and gives an error unless I specifically choose "ODF Drawing (.odg)" as the file type in the Open dialog]

February 11, 2015

Goodbye Remake, hello RenderChan!

Some of our readers might remember that during the produc […]

Tue 2015/Feb/10

  • I'm taking over the maintainership of librsvg.

    I've been fixing a few crashers, and the code is interesting, so I'll blog a bit about the bugs. It's rather peculiar how people's mindset has changed from the time when "feeding an invalid file leads to a crash" was just considered garbage-in, garbage-out — to the present time, when a crasher on invalid data is "OMG a government agency surely is going to write malicious vector images to pwn you every way it can".

    Atte Kettunen of the Oulu University Secure Programming Group has been doing fuzz-testing on librsvg, and this is producing very interesting results. Check out their fuzz-testing tools! My next blog posts will be about the bugs in librsvg and why C is a shitty language for userland code.

  • librsvg bug #703102 - out of bounds memory access

    In librsvg bug 703102 we get an SVG that starts with

        <svg version="1.1" baseProfile="basic" id="svg-root"
             width="100%" height="100%" viewBox="0 170141183460469231731687303715884105727 480 360"
    

    The bounding box is obviously invalid, and the code crashed in this function:

    static void
    rsvg_alpha_blt (cairo_surface_t *src,
                    gint srcx,
                    gint srcy,
                    gint srcwidth,
                    gint srcheight,
                    cairo_surface_t *dst,
                    gint dstx,
                    gint dsty)
    

    This is a fairly typical function for "take a rectangle from this cairo_surface_t and composite it over this other cairo_surface_t".

    The function used to start with some code to clip the coordinates to the actual surfaces... but it was broken. Eventually the loops that iterate through the pixels in the destination region would go past the bounds of the allocated buffers.

    I replaced the broken clipping code with something similar to our venerable gdk_rectangle_intersect(), and the bug went away.
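
    For reference, a rectangle intersection in the spirit of gdk_rectangle_intersect() looks roughly like this (an illustrative sketch, not the exact code that went into librsvg):

    #include <glib.h>

    typedef struct { int x, y, width, height; } Rect;

    /* Returns TRUE if a and b overlap, storing the common area in dest.
     * Clipping both the source and destination rectangles to this
     * intersection keeps the pixel loops inside the allocated buffers. */
    static gboolean
    rect_intersect (const Rect *a, const Rect *b, Rect *dest)
    {
        int x1 = MAX (a->x, b->x);
        int y1 = MAX (a->y, b->y);
        int x2 = MIN (a->x + a->width,  b->x + b->width);
        int y2 = MIN (a->y + a->height, b->y + b->height);

        if (x2 <= x1 || y2 <= y1)
            return FALSE;

        dest->x = x1;
        dest->y = y1;
        dest->width  = x2 - x1;
        dest->height = y2 - y1;
        return TRUE;
    }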

    C sucks

    The ubiquitous pattern to access rectangular image buffers is, "give me a pointer to the start of the pixels", then "give me the rowstride, i.e. the length of each line in bytes".

    The code has to be careful to not go past the bounds of buffers. Things get complicated when you have two images with different dimensions, or different rowstrides — lots of variables to keep track of.
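
    Sketched out, the pattern looks like this (an illustrative fragment, assuming a cairo ARGB32 surface with 4 bytes per pixel):

    #include <cairo.h>

    unsigned char *pixels = cairo_image_surface_get_data (surface);
    int stride = cairo_image_surface_get_stride (surface);
    int width  = cairo_image_surface_get_width (surface);
    int height = cairo_image_surface_get_height (surface);

    for (int y = 0; y < height; y++) {
        unsigned char *row = pixels + y * stride;
        for (int x = 0; x < width; x++) {
            unsigned char *pixel = row + x * 4;  /* 4 bytes per ARGB32 pixel */
            pixel[3] = 255;                      /* e.g. force alpha opaque */
        }
    }

    Nothing in the language stops x or y from walking right off the end of the buffer; the bounds live entirely in the programmer's head.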

    A civilized language would let you access the byte arrays for the pixel data, but it would not let you access past their bounds. It would halt the program if you do buffer[-5] or buffer[BIGNUM].

    C doesn't give a fuck. C gives you a buffer overrun:

    Buffer overrun at Montparnasse

February 10, 2015

FreeCAD, Architecture and future

It has been quite some time since I last wrote here about FreeCAD and the development of the Architecture module. This doesn't mean it has stopped, but rather that I have temporarily been busy with another project, the Path module; plus there has been my FOSDEM talk, and finally we're on the verge of releasing version 0.15...

Making flashblock work again; and why HTML5 video doesn't work in Firefox

Back in December, I wrote about Problems with Firefox 35's new deprecation of flash, and a partial solution for Debian. That worked to install a newer version of the flash plug-in on my Debian Linux machine; but it didn't fix the problem that the flashblock program no longer works properly on Firefox 35, so that clicking on the flashblock button does nothing at all.

A friend suggested that I try Firefox's built-in flash blocking. Go to Tools->Add-ons and click on Plug-ins if that isn't the default tab. Under Shockwave flash, choose Ask to Activate.

Unfortunately, the result of that is a link to click, which pops up a dialog that requires clicking a button to dismiss it -- a pointless and annoying extra step. And there's no way to enable flash for just the current page; once you've enabled it for a domain (like youtube), any flash from that domain will auto-play for the remainder of the Firefox session. Not what I wanted.

So I looked into whether there was a way to re-enable flashblock. It turns out I'm not the only one to have noticed the problem with it: the FlashBlock reviews page is full of recent entries from people saying it no longer works. Alas, flashblock seems to be orphaned; there's no comment about any of this on the main flashblock page, and the links on that page for discussions or bug reports go to a nonexistent mailing list.

But fortunately there's a comment partway down the reviews page from user "c627627" giving a fix.

Edit your chrome/userContent.css in your Firefox profile. If you're not sure where your profile lives, Mozilla has a poorly written page on it here, Profiles - Where Firefox stores your bookmarks, passwords and other user data, or do a systemwide search for "prefs.js" or "search.json" or "cookies.sqlite" and it will probably lead you to your profile.

Inside yourprofile/chrome/userContent.css (create it if it doesn't already exist), add these lines:

@namespace url(http://www.w3.org/1999/xhtml);
@-moz-document domain("youtube.com"){
#theater-background { display:none !important;}}

Now restart Firefox, and flashblock should work again, at least on YouTube. Hurray!

Wait, flash? What about HTML5 on YouTube?

Yes, I read that too. All the tech press sites were reporting week before last that YouTube was now streaming HTML5 by default.

Alas, not with Firefox. It works with most other browsers, but Firefox's HTML5 video support is too broken. And I guess it's a measure of Firefox's increasing irrelevance that almost none of the reportage two weeks ago even bothered to try it on Firefox before reporting that it worked everywhere.

It turns out that using HTML5 video on YouTube depends on something called Media Source Extensions (MSE). You can check your MSE support by going to YouTube's HTML5 info page. In Firefox 35, it's off by default.

You can enable MSE in Firefox by flipping the media.mediasource preference, but that's not enough; YouTube also wants "MSE & H.264". Apparently, if you care enough, you can set a new preference to enable MSE & H.264 support on YouTube even though it's not supported by Firefox and is considered too buggy to enable.

If you search the web, you'll find lots of people talking about how HTML5 with MSE was enabled by default for Firefox 32 on YouTube. But here we are at Firefox 35 and it requires jumping through hoops. What gives?

Well, it looks like they enabled it briefly, discovered it was too buggy and turned it back off again. I found bug 1129039: Disable MSE for Firefox 36, which seems an odd title considering that it's off in Firefox 35, but there you go.

Here is the dependency tree for the MSE tracking bug, 778617. Its dependency graph is even scarier. After taking a look at that, I switched my media.mediasource preference back off again. With a dependency tree like that, and nothing anywhere summarizing the current state of affairs ... I think I can live with flash. Especially now that I know how to get flashblock working.

February 09, 2015

Graphical profiling under Linux

The Oyranos library became quite a bit slower during the last development cycle for 0.9.6. That is pretty normal, as new features were added and more ideas waited for implementation, leaving not as much room for detail work as desired. Over the last two weeks, I took a break and mainly searched for bottlenecks inside the code base, wanting to bring performance back to satisfactory levels. One good starting point for optimisations in Oyranos is the speed tests inside the test suite, but those only help with a few starting points. What I wished for was an easy way to see where code paths spend lots of time, and perhaps which line inside a source file takes the most computation time.

I knew the oprofile suite from the old days, so I installed it on my openSUSE machine, but had not much success getting call graphs to work. A web search for "Linux profiling" brought me to an article on pixel beat and to perf. I found the article very informative and do not want to duplicate it here. The perf tools are impressive. The sample recording needs to run as root, but the obtained sample information is quite useful. Most perf tools are text based, so getting to the hot spots is not straightforward for my taste. However, the pixel beat site names a few graphical data representations and has a screenshot of kcachegrind. The last link under misc guides to flame graphs. The flame graphs are amazing representations of what happens inside Oyranos performance-wise. They show, in a very intuitive way, which code paths take the most time. The graphs are zoomable SVGs.

Here is an example from oyranos-profiles, with the expensive hash computation and without it:

Computation time was much reduced. Another bottleneck was expensive DB access. I had talked with Markus about that some time ago already, but forgot to implement it. The according flame graph reminded me about that open issue. After some optimisation the DB bottleneck is much reduced.

The command to create the data is:

root$ perf record -g my-command

user$ perf-flame-graph.sh my-command-graph-title

… with perf-flame-graph.sh somewhere in your path:

#!/bin/sh
# Render a perf recording as an interactive flame graph SVG.
path=/path/to/FlameGraph
output="$1"
if [ "$output" = "" ]; then
  output="perf"
fi

perf script | $path/stackcollapse-perf.pl > $TMPDIR/$USER-out.perf-folded
$path/flamegraph.pl $TMPDIR/$USER-out.perf-folded > $TMPDIR/$USER-$output.svg
firefox $TMPDIR/$USER-$output.svg

One needs FlameGraph, a set of Perl scripts, installed, as well as perf. The above script is just a typing abbreviation.

February 08, 2015

Morevna website goes international

I am happy to announce that Morevna Website is now enab […]

February 07, 2015

London Zoo photos

Visited the London Zoo for the first time and took a few photos.

A bit about taking pictures

Though I like going out and taking pictures at the places I visit, I haven't actually blogged about taking pictures before. I thought I should share some tips and experiences.

This is not a "What's in my bag" kind of post. I won't, and can't, tell you what the best cameras or lenses are. I simply don't know. These are some things I've learnt that have worked for me and my style of taking pictures, and that I wish I had known earlier on.

Pack

Keep gear light and compact, and focus on what you have. You will often bring more than you need. If you get the basics sorted out, you don’t need much to take a good picture. Identify a couple of lenses you like using and get to know their qualities and limits.

Your big lenses aren't going to do you any good if you're reluctant to take them with you. Accept that your stuff is going to take a beating. I used to obsess over scratches on my gear; I don't anymore.

I don’t keep a special bag. I wrap my camera in a hat or hoody and lenses in thick socks and toss them into my rucksack. (Actually, this is one tip you might want to ignore.)

Watch out for gear creep. It’s tempting to wait until that new lens comes out and get it. Ask yourself: will this make me go out and shoot more? The answer usually is probably not, and the money is often better spent on that trip to take those nice shots with the stuff you already have.

Learn

Try some old manual lenses to learn with. Not only are these cheap and able to produce excellent image quality, it’s a great way to learn how aperture, shutter speed, and sensitivity affect exposure. Essential for getting the results you want.

I only started understanding this after having inherited some old lenses and playing around with them. The fact that they're all manual makes you realise more quickly how things physically change inside the camera when you modify a setting, compared to looking at abstract numbers on the back of the screen. I find them much more engaging and fun to use than fully automatic lenses.

You can get M42 lens adapters for almost any camera type, but they work especially well with mirrorless cameras. Here's a list of the Asahi Takumar (old Pentax) series of lenses, which has some gems. You can pick them up off eBay for just a few tenners.

My favourites are the SMC 55mm f/1.8 and SMC 50mm f/1.4. They produce lovely creamy bokeh and great in-focus sharpness at the same time.

See

A nice side effect of having a camera on you is that you look at the world differently. Crouch. Climb on things. Lean against walls. Get unique points of view (but be careful!). Annoy your friends because you need to take a bit more time photographing that beetle.

Some shots you take might be considered dumb luck. However, it’s up to you to increase your chances of “being lucky”. You might get lucky wandering around through that park, but you know you certainly won’t be when you just sit at home reading the web about camera performance.

Don’t worry about the execution too much. The important bit is that your picture conveys a feeling. Some things can be fixed in post-production. You can’t fix things like focus or motion blur afterwards, but even these are details and not getting them exactly right won’t mean your picture will be bad.

Don’t compare

Even professional photographers take bad pictures. You never see the shots that didn’t make it. Being a good photographer is as much about being a good editor. The very best still take crappy shots sometimes, and alright shots most of the time. You just don’t see the bad ones.

Ask people you think are great photographers to point out something they're unhappy about in that amazing picture they took. Chances are they will point out several flaws that you weren't even aware of.

Share

Don't forget to actually have a place to post your images. Flickr or Instagram are fine for this. We want to see your work! Even if it's not perfect in your eyes. Do your own thing. You have your own style.

Go

I hope that was helpful. Now stop reading and don’t worry too much. Get out there and have fun. Shoot!

Vienna GNOME/.NET hackfest report

I had a great time attending the GNOME/.NET hackfest last month in Vienna. My goal for the week was to port SparkleShare's user interface to GTK+3 and integrate with GNOME 3.

A lot of work got done. Many thanks to David and Stefan for enabling this by the smooth organisation of the space, food, and internet. Bertrand, Stephan, and Mirco helped me get set up to build a GTK+3-enabled SparkleShare pretty quickly. The porting work itself was done shortly after that, and I had time left to do a lot of visual polish and behavioural tweaks to the interface. Details matter!

Last week I released SparkleShare 1.3, a Linux-only release that includes all the work done at the hackfest. We're still waiting for the dependencies to be included in the distributions, so the only way you can use it is to build from source yourself for now. Hopefully this will change soon.

One thing that's left to do is to create a gnome-shell extension to integrate SparkleShare into GNOME 3 more seamlessly. Right now it still has to use the message tray area, which is far from optimal. So if you're interested in helping out with that, please let me know.

Tomboy Notes

The rest of the time I helped out others with design work. I helped Mirco with the Smuxi preference dialogues, using my love for the Human Interface Guidelines, and started a redesign of Tomboy Notes. Today I sent out the new design to their mailing list with the work done so far.

Sadly there wasn't enough time for me to help out with all of the other applications… I guess that's something for next year.

Sponsors

I had a fun week in Vienna (which is always lovely no matter the time of year) and met many great new people. Special thanks to the many sponsors that helped make this event possible: Norkart, Collabora, Novacoast IT, University of Vienna and The GNOME Foundation.

Trip to Nuremberg and Munich

This month I visited my friend and colleague Garrett in Germany. We visited the Christmas markets there. Lots of fun. Here are some pictures.

Attending the Vienna GNOME/.NET hackfest

Today I arrived in the always wonderful city of Vienna for the GNOME/.NET Hackfest. Met up and had dinner with the other GNOME and .NET fans.

SparkleShare has been stuck on GTK+2 for a while. Now that the C# bindings for GTK+3 are starting to get ready, and Bindinator is handling any other dependencies that need updating (like WebKit), it is finally time to take the plunge.

My goal this week is to make some good progress on the following things:

  1. Port SparkleShare's user interface to GTK+3.
  2. Integrate SparkleShare seamlessly with the GNOME 3 experience

SparkleShare 1.2

Yesterday I made a new release of SparkleShare. It addresses several issues that may have been bugging you, so it's worth upgrading. Depending on how well things go this week it may be the last release based on GNOME 2 technologies. Yay for the future!

SparkleShare 1.0

I’m delighted to announce the availability of SparkleShare 1.0!

What is SparkleShare?

SparkleShare is an Open Source (self hosted) file synchronisation and collaboration tool and is available for Linux distributions, Mac, and Windows.

SparkleShare creates a special folder on your computer in which projects are kept. All projects are automatically synced to their respective hosts (you can have multiple projects connected to different hosts) and to your team’s SparkleShare folders when someone adds, removes or edits a file.

The idea for SparkleShare sprouted about three years ago at the GNOME Usability Hackfest in London (for more background on this read The one where the designers ask for a pony).

SparkleShare uses the version control system Git under the hood, so people collaborating on projects can make use of existing infrastructure, and setting up a host yourself will be easy enough. Using your own host gives you more privacy and control, as well as lots of cheap storage space and higher transfer speeds.

Like every piece of software it's not bug free, even though it has hit 1.0. But it's been tested for a long time now and all reproducible and known major issues have been fixed. It works reliably and the issue tracker is mostly filled with feature requests now.

The biggest sign that it was time for a 1.0 release was the fact that Lapo hasn’t reported brokenness for a while now. This can either mean that SparkleShare has been blessed by a unicorn or that the world will end soon. I think it’s the first.

Features

For those of you that are not (that) familiar with SparkleShare, I’ll sum up its most important features:

The SparkleShare folder

This is where all of your projects are kept. Everything in this folder will be automatically synced to the remote host(s), as well as to your other computers and everyone else connected to the same projects. Are you done with a project? Simply delete it from your SparkleShare folder.

The status icon

The status icon gives you quick access to all of your projects and shows you what’s going on regarding the synchronisation process. From here you can connect to existing remote projects and open the recent changes window.

The setup dialog

Here you can link to a remote project. SparkleShare ships with a couple of presets. You can have multiple projects syncing to different hosts at the same time. For example, I use this to sync some public projects with GitHub, some personal documents with my own private VPS, and work stuff with a host on the intranet.

Recent changes window

The recent changes window shows you everything that has recently changed and by whom.

History

The history view lets you see who has edited a particular file before and allows you to restore deleted files or revert back to a previous version.

Conflict handling

When a file has been changed by two people at the same time and causes a conflict, SparkleShare will create a copy of the conflicting file and add a timestamp. This way changes won't get accidentally lost and you can either choose to keep one of the files or cherry-pick the wanted changes.

Notifications

If someone makes a change to a file a notification will pop up saying what changed and by whom.

Client side encryption

Optionally you can protect a project with a password. When you do, all files in it will be encrypted locally using AES-256-CBC before being transferred to the host. The password is only stored locally, so if someone cracked their way into your server it will be very hard (if not impossible) to get the files' contents. This is on top of the file transfer mechanism, which is already encrypted and secure. You can set up an encrypted project easily with Dazzle.

Dazzle, the host setup script

I’ve created a script called Dazzle that helps you set up a Linux host to which you have SSH access. It installs Git, adds a user account and configures the right permissions. With it, you should be able to get up and running by executing just three simple commands.

Plans for the future

Something that comes up a lot is the fact that Git doesn't handle large (binary) files well. Git also stores a database of all the files, including history, on every client, causing it to use a lot of space pretty quickly. Now, this may or may not be a problem depending on your use case. Nevertheless, I want SparkleShare to be better at the "large backups of bulks of data" use case.

I've stumbled upon a nice little project called git-bin in some obscure corner of GitHub. It seems like a perfect match for SparkleShare. Some work needs to be done to integrate it and to make sure it works over SSH. This will be the goal for SparkleShare 2.0, which can follow pretty soon (hopefully in months, rather than years).

I really hope contributors can help me out in this area. The GitHub network graph is feeling a bit lonely. Your help can make a big difference!

Some other fun things to work on may be:

  1. Saving the modification times of files
  2. Creating a binary Linux bundle
  3. SparkleShare folder location selection
  4. GNOME 3 integration
  5. …other things that you may find useful.

If you want to get started on contributing, feel free to visit the IRC channel: #sparkleshare on irc.gnome.org so I can answer any questions you may have and give support.

Finally…

I’d like to thank everyone who has helped testing and submitted patches so far. SparkleShare wouldn’t be nearly as far as it is now without you. Cheers!

February 06, 2015

Things software development teams should know about design

I wrote a quick, unpolished list of things that development teams should know about design (and designers). It’s worth sharing with a larger audience, so here it is:

  1. programmers and managers need to fully embrace design — without “buy-in”, designing is futile (as it’ll never get implemented properly — or, often, implemented at all)
  2. design is an iterative process involving everyone — not just “the designer”
  3. in all stages of the project cycle, design must be present: before, during, and after
  4. design isn’t just eye-candy; functionality is even more important (text/content is part of design too)
  5. designers are full team members (and should be treated as such), not some add-on
  6. design issues should have high priority too (if something is unusable or even sometimes looks unusable, then people can’t use the software)
  7. designers are developers too (in fact, anyone who contributes to making software is a developer — not just programmers)
  8. good software requires design, documentation, project management, community engagement, marketing, etc. — in addition to programming
  9. “usability” != “design”; design is about creating something useful (and/or pretty), whereas usability is discovering how well something works — usability tests are useful to see if a design works, but usability test alone usually won’t point a way to fix issues… design helps fix problems after they’re discovered, and helps to prevent problems in the first place
  10. a lot of designers are quite tech-savvy (especially us designers already in the open source world), but many aren’t — regardless, it’s okay to not know everything (especially about programming corner-cases or project-related esoterica)
  11. think about the people using the software as people using the software, not as “users” (using the term “users” is degrading (similar to “drug ‘users'”) and sets up an “us-versus-them” mentality)

February 05, 2015

Announce: Entangle “Strange” release 0.6.1 – an app for tethered camera control & capture

I am pleased to announce a new release 0.6.1 of Entangle is available for download from the usual location:

  http://entangle-photo.org/download/

This release has primarily involved bug fixing, but one major user-visible change is a rewrite of the camera control panel. Instead of showing all possible camera controls (which can mean hundreds of widgets), only 7 commonly used controls are displayed initially. Other controls can be optionally enabled at the discretion of the user, and the customization is remembered per camera model.

  • Require GTK >= 3.4
  • Fix check for GIO package in configure
  • Add missing icons to Makefile
  • Follow freedesktop thumbnail standard storage location
  • Refactor capture code to facilitate plugin script automation
  • Fix bug causing plugin to be displayed more than once
  • Make histogram height larger
  • Strip trailing ‘2’ from widget labels to be more friendly
  • Completely rewrite control panel display to show a small, user configurable subset from all the camera controls.
  • Remember custom list of camera controls per camera model
  • Hide compiler warnings from new glib atomic operations
  • Update to newer gnulib compiler warnings code
  • Remove broken double buffering code that’s not required when using GTK3
  • Remove use of deprecated GtkMisc APIs
  • Allow camera picker list to show multiple lines
  • Remove crufty broken code from session browser that was breaking with new GTK versions
  • Disable libraw auto brightness since it totally overexposes many images, generally making things look worse
  • Fix memory leak handling camera events
  • Add keywords to desktop & appdata files

Ambient Light Sensors

An ambient light sensor is a little light-to-frequency chip that you’ve certainly got in your tablet, most probably in your phone and you might even have one in your laptop if you’re lucky. Ambient light sensors let us change the panel brightness in small ways so that you can still see your screen when it’s sunny outside, but we can dim it down when the ambient room light is lower to save power. Lots of power.

There is a chicken and egg problem here. Not many laptops have ambient light sensors; some do, but driver support is spotty and they might not work, or work but return values on some unknown absolute scale. As hardware support is so bad, we've not got any software that actually uses the ALS hardware effectively, and so most ALS hardware goes unused. Most people don't have any kind of ALS at all, even on high-end models like ThinkPads.

So, what do we do? I spent a bit of time over the last few weeks designing a small OpenHardware USB device that acts as an ALS sensor. It's basically a ColorHug1 with a much less powerful processor, but speaking the same protocol, so all the firmware update and test tools just work out of the box. It sleeps between readings too, so it only consumes a tiiiiny amount of power. I figure that with hardware that we know works out of the box, we can get developers working on (and testing!) the software integration without spending hours and hours compiling kernels and looking at DSDTs. I was planning to send out devices for free to GNOME developers wanting to hack on ALS stuff with me, and sell the devices for perhaps $20 to everyone else, just to cover costs.

pcb

The device would be a small PCB, 12x22mm in size, which would be left in a spare USB slot. It only sticks out about 9mm from the edge of the laptop, as most of the PCB actually gets pushed into the USB slot. It's obviously non-ideal, and non-sexy, but I really think this is the way to break the chicken/egg problem we have with ALS sensors. It obviously costs money to make a device like this, and the smallest batch possible is about 60 – so before I spend any more of my spare money/time on this, is anyone actually interested in saving tons of power using an ALS sensor and dimming the display? Comment here or email me if you're interested. Thanks.

February 04, 2015

The End is Nigh

“If @sgarrity doesn’t write a blog post in the next month he won’t have written for a year, and blogging will be over.”

— Peter Rukavina, via twitter

Mechanic-Sister Sketch

One more sketch of Mechanic-Sister character from Anast […]

Studying Glaciers on our Roof

[Roof glacier as it slides off the roof] A few days ago, I wrote about the snowpack we get on the roof during snowstorms:

It doesn't just sit there until it gets warm enough to melt and run off as water. Instead, the whole mass of snow moves together, gradually, down the metal roof, like a glacier.

When it gets to the edge, it still doesn't fall; it somehow stays intact, curling over and inward, until the mass is too great and it loses cohesion and a clump falls with a Clunk!

The day after I posted that, I had a chance to see what happens as the snow sheet slides off a roof if it doesn't have a long distance to fall. It folds gracefully and gradually, like a sheet.

[Underside of a roof glacier] [Underside of a roof glacier] The underside as they slide off the roof is pretty interesting, too, with varied shapes and patterns in addition to the imprinted pattern of the roof.

But does it really move like a glacier? I decided to set up a camera and film it on the move. I set the Rebel on a tripod with an AC power adaptor, pointed it out the window at a section of roof with a good snow load, plugged in the intervalometer I bought last summer, located the manual to re-learn how to program it, and set it for a 30-second interval. I ran that way for a bit over an hour -- long enough that one section of ice had detached and fallen and a new section was starting to slide down. Then I moved to another window and shot a series of the same section of snow from underneath, with a 40-second interval.

I uploaded the photos to my workstation and verified that they'd captured what I wanted. But when I stitched them into a movie, the way I'd used for my time-lapse clouds last summer, it went way too fast -- the movie was over in just a few seconds and you couldn't see what it was doing. Evidently a 30-second interval is far too slow for the motion of a roof glacier on a day in the mid-thirties.

But surely that's solvable in software? There must be a way to get avconv to make duplicates of each frame, if I don't mind the movie coming out slightly jumpy. I read through the avconv manual, but it wasn't very clear about this. After a lot of fiddling and googling, and help from a more expert friend, I ended up with this:

avconv -r 3 -start_number 8252 -i 'img_%04d.jpg' -vcodec libx264 -r 30 timelapse.mp4

In avconv, -r specifies a frame rate for the next file, input or output, that will be specified. So -r 3 specifies the frame rate for the set of input images, -i 'img_%04d.jpg'; and then the later -r 30 overrides that 3 and sets a new frame rate for the output file, timelapse.mp4. The start number is there because the first file in my sequence is named img_8252.jpg. 30, I'm told, is a reasonable frame rate for movies intended to be watched on typical 60FPS monitors; 3 is a number I adjusted until the glacier in the movie moved at what seemed like a good speed.

The movies came out quite interesting! The main movie, from the top, is the most interesting; the one from the underside is shorter.

Roof Glacier
Roof Glacier from underneath.

I wish I had a time-lapse of that folded sheet I showed above ... but that happened overnight on the night after I made the movies. By the next morning there wasn't enough left to be worth setting up another time-lapse. But maybe one of these years I'll have a chance to catch a sheet-folding roof glacier.

February 03, 2015

GCompris: crowdfunding campaign is over, time to start the work

Hi,

The crowdfunding campaign we ran on IndieGoGo to support the work on new unified graphics for GCompris finished yesterday. We didn't reach the goal set to complete the whole new set of graphics, but thanks to 94 generous contributors, we collected $3642. We also got 260€ directly from the Hackadon 2014; many thanks to those contributors too! Thanks again to everyone who contributed and helped to spread the word!

Now, after deducting the IndieGoGo fees, converting to euros and summing up, the total should be around 3150€, which is enough to cover 25 days of work. This is way less than the full goal, so I have to adapt the plan. Of course I won't be able to make new artwork for all activities in these 25 days of work, but I'll do as much as possible to establish good bases for the new design transition. I'll also have to rely as much as possible on existing artwork that is already good enough, only adapting the style and doing some edits instead of starting from scratch.

I have published the initial proposal for the new artwork guidelines. I will start the work on February 16th, in 13 days, so this should leave enough time for people to review the guidelines and send their opinions and comments (please use the contact form on my main website, or any other way you're used to contacting me). I'll read every comment carefully and apply the needed edits to the guidelines. The guidelines proposal is here.

(Edit 02/11/2015: guidelines and examples updated according to reviews)

Then I'll fulfil these 25 days of work between February and April. Let's see how many activities I can update in this time ;)

Release date for Krita 2.9

After a short discussion, we came up with a release date for Krita 2.9! It's going to be... February 26th, Deo volente. Lots and lots and lots of new stuff, and even the old stuff in Krita is still amazing (lovely article, the author mentions a few features I didn't know about).

Then it's time for the port to Qt5, while Dmitry will be working on Photoshop-style layer styles -- a kickstarter feature that didn't make it for 2.9.0, but will be in 2.9.x. A new fundraiser for the next set of amazing features is also necessary.

Of course, there are still over 130 open bugs, and we've got a lot to do still, but then the bugs will always be with us, and after 2.9.0 a 2.9.1 will surely follow. But I do care about our bug reports.

Some people feel that in many projects, bug reports are often discarded in an administrative way, but honestly, we try to do better! Without user testing and bug reporting, we won't be able to improve Krita. After all, hacking on Krita takes so much time that I hardly get to use Krita for more than ten, twenty minutes a week!

We fixed, closed or de-duplicated 91 bugs in the past seven days. Of course, we also got a bunch of new bug reports: 25.

So, I want to take a bit of time to give a public thank-you to all our bug reporters. We've got an awesome bunch of testers!

For example, one tester has reported 46 bugs against the 2.9 betas: that is a pretty amazing level of activity! And we have by now fixed 33 of those 46 bugs. Thanks to people testing the betas and painstakingly, carefully reporting bugs, often with videos to help us reproduce the issue, Krita has become so much better.

If you use Krita and notice an issue, don't think that you'll hurt us when you report it as a bug. The only things we ask are that you do a cursory check to see whether your bug has already been reported (if it isn't immediately obvious, report away, and if it's been reported before, no problem), and that we can come back to you with questions, if necessary.

February 02, 2015

Make it flat. Make it all the same. Make it Boring.



"Less is a bore" brilliantly said Robert Venturi.

So at the end of the Oxygen period, UX/UI design was reaching an inflection point. Gone were the days when graphic designers challenged their own illustration skills in a perpetual "I can make my candy more naturalistically silly than yours".
We had reached the saturation point of silliness in graphic representations of everyday objects as UI elements. Back then, people needed to find a culprit for it all, a quintessential word that in itself represented all evil... Cue "skeuomorphism", a word used in traditional design to imply a faux representation of a material. In this word we collectively found the "wrong" to be corrected; we had our culprit.

We had to kill all aspects of anything skeuomorphic. Cue flat design: we don't need anything "fake", we don't need textures, we must do without those fake drop-shadows, kill all artificial gradients, reinvent the circle in a precise, concise square.

Well, back then this all sounded to me a bit like a personal attack ;) I mean, gradients and shadows were all I did :) and just because some were abusing them, I had to pay for it?
And I said they were all wrong, that this was nothing more than a modernism surge all over again: "skeuomorphism is all we do in UI anyway". All the concepts in the desktop are skeuomorphic; seriously, we call it a DESKTOP, we use Buttons and Folders. We can't do anything but skeuomorphic designs; the only non-skeuomorphic design would be a screen turned off.

The sad reality is that being right when everybody else is wrong just makes you irrelevant. And despite the argumentation being fundamentally wrong, it's not as if there was no need for change; there was!
Overly done design tends to be a trick a designer resorts to when he can't find an efficient answer, and we all abused this "trick".

Out of it some great new concepts and methods came to life; I must say I'm a bit of a fan of Google's new "material" design language. (In fact, material design is IMO not flat; I mean, they must have called it "material" for some reason ;) )

But so comes today: every single little design agency looks the same. It's easy to achieve the current dictatorial style: slap in a blurred background with a lonely Helvetica Neue on top and you are almost there.
Trendy websites pop up everywhere looking exactly the same, as if one corporate unique brand took control of everything.

And it's a BORE....

What will come next?
I don't know; predicting this sort of thing is absolutely futile, and I'm almost sure that whatever comes after will retain some of the best aspects this period brought out. I think we are in a transitory period and something new is around the corner.

January 31, 2015

Snow day!

We're having a series of snow days here. On Friday, they closed the lab and all the schools; the ski hill people are rejoicing at getting some real snow at last.

[Snow-fog coming up from the Rio Grande] It's so beautiful out there. Dave and I had been worried about this business of living in snow, being wimpy Californians. But how cool (literally!) is it to wake up, look out your window and see a wintry landscape with snow-fog curling up from the Rio Grande in White Rock Canyon?

The first time we saw it, we wondered how fog can exist when the temperature is below freezing. (Though just barely below -- as I write this the nearest LANL weather station is reporting 30.9°F. But we've seen this in temperatures as low as 12°F.) I tweeted the question, and Mike Alexander found a reference that explains that freezing fog consists of supercooled droplets -- they haven't encountered a surface to freeze upon yet. Another phenomenon, ice fog, consists of floating ice crystals and only occurs below 14°F.

['Glacier' moving down the roof] It's also fun to watch the snow off the roof.

It doesn't just sit there until it gets warm enough to melt and run off as water. Instead, the whole mass of snow moves together, gradually, down the metal roof, like a glacier.

When it gets to the edge, it still doesn't fall; it somehow stays intact, curling over and inward, until the mass is too great and it loses cohesion and a clump falls with a Clunk!

[Mysterious tracks in the snow] When we do go outside, the snow has wonderful collections of tracks to try to identify. This might be a coyote who trotted past our house on the way over to the neighbors.

We see lots of rabbit tracks and a fair amount of raccoon, coyote and deer, but some are hard to identify: a tiny carnivore-type pad that might be a weasel; some straight lines that might be some kind of bird; a tail-dragging swish that could be anything. It's all new to us, and it'll be great fun learning about all these tracks as we live here longer.

January 28, 2015

Detecting fake flash

I’ve been using F3 to check my flash drives, and this is how I discovered my drives were counterfeit. It seems to me this kind of feature needs to be built into gnome-multi-writer itself to avoid sending fake flash out to customers. Last night I wrote a simple tool called gnome-multi-writer-probe which does the following few things:

* Reads the existing data from the drive, in 32KB chunks every 32MB-ish, into RAM
* Writes random 32KB blocks at the same spacing, and also stores them in RAM
* Resets the drive
* Reads back all the 32KB blocks, from slightly different addresses and sizes, and compares them to the random data in RAM
* Writes all the saved data back to the drive.

It only takes a few seconds on most drives. It also tries to be paranoid, and saves the data back to the drive as best it can when it encounters an error. That said, please don’t use this tool on any drives that have important data on them; assume you’ll have to reformat them after using this tool. Also, it’s probably a really good idea to unmount any drives before you try this.
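
To make those steps concrete, here is a rough Python sketch of the same probing idea. This is not the real gnome-multi-writer-probe code, the device-reset step is only hinted at, and it is just as destructive if interrupted, so treat it purely as an illustration.

import os

CHUNK = 32 * 1024           # 32KB test blocks
STRIDE = 32 * 1024 * 1024   # one test block every ~32MB

def probe(device, size):
    fd = os.open(device, os.O_RDWR)
    try:
        offsets = range(0, size - CHUNK, STRIDE)
        saved, rand = {}, {}
        for off in offsets:
            # Save the existing data, then overwrite it with random bytes.
            os.lseek(fd, off, os.SEEK_SET)
            saved[off] = os.read(fd, CHUNK)
            rand[off] = os.urandom(CHUNK)
            os.lseek(fd, off, os.SEEK_SET)
            os.write(fd, rand[off])
        os.fsync(fd)
        # The real tool resets the drive here, and re-reads from slightly
        # different addresses and sizes, to defeat any caching.
        genuine = True
        for off in offsets:
            os.lseek(fd, off, os.SEEK_SET)
            if os.read(fd, CHUNK) != rand[off]:
                genuine = False  # a write past the real capacity wrapped or vanished
        for off in offsets:
            # Put the original data back as best we can.
            os.lseek(fd, off, os.SEEK_SET)
            os.write(fd, saved[off])
        os.fsync(fd)
        return genuine
    finally:
        os.close(fd)

On a fake 1GB drive that is really a smaller chip looped around, the blocks written beyond the real capacity overwrite earlier ones, so the read-back comparison fails.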

If you’ve got access to gnome-multi-writer from git (either from jhbuild, or from my repo) then please could you try this:

sudo gnome-multi-writer-probe /dev/sdX

Where sdX is the USB drive you want to test. I’d be interested in the output, and especially interested if you have any fake flash media you can test this with. Either leave a comment here, grab me on IRC or send me an email. Thanks.

January 27, 2015

Tue 2015/Jan/27

  • An inlaid GNOME logo, part 2

    Esta parte en español

    To continue with yesterday's piece — the amargoso board I glued is now dry, and it is time to flatten it. We use a straightedge to see how bad it is on the "good" side.

    Not flat

    We use a jack plane with a cambered blade. There is a slight curvature to the edge; this lets us remove wood quickly. We plane across the grain to remove the cupping of the board. I put some shavings in strategic spots between the board and the workbench to keep the board from rocking around, as its bottom is not flat yet.

    Cambered iron Cross-grain planing

    We use winding sticks at the ends of the board to test if the wood is twisted. Sight almost level across them, and if they look parallel, then the wood is not twisted. Otherwise, plane away the high spots.

    Winding sticks Not twisted

    This gives us a flat board with scalloped tracks. We use a smoothing plane to remove the tracks, planing along the grain. This finally gives us a perfectly flat, smooth surface. This will be our reference face.

    Scalloped board Smoothing plane Smooth, flat surface

    On that last picture, you'll see that both halves of the board are not of the same thickness, and we need to even them up. We set a marking gauge to the thinnest part of the boards. Mark all four sides, using the flat side as the reference face, so we have a line around the board at a constant distance to the reference face.

    Gauging the thinnest part Marking all around Marked all around

    Again, plane the board flat across the grain with a jack plane and its cambered iron. When you reach the gauged line, you are done. Use a smoothing plane along the grain to make the surface pretty. Now we have a perfectly flat board of uniform thickness.

    Thicknessing with the jack plane Smoothing plane Flat and uniform board

    Now we go back to the light-colored maple board from yesterday. First I finished flattening the reference face. Then, I used the marking gauge to put a line all around at about 5mm to the reference face. This will be our slice of maple for the inlaid GNOME logo.

    Marking the maple board

    We have to resaw the board in order to extract that slice. I took my coarsest ripsaw and started a bit away from the line at a corner, being careful to sight down the saw to make it coplanar with the lines on two edges. It is useful to clamp the board at about 45 degrees from level.

    Starting to resaw at a corner

    Once the saw is into the corner, tilt it down gradually to lengthen the kerf...

    Kerfing one side

    Tilt it gradually the other way to make the kerf on the other edge...

    Kerfing the other side

    And now you can really begin to saw powerfully, since the kerfs will guide the saw.

    Resawing

    Gradually extend the cut until the other corner, and repeat the process on all four sides.

    Extending the cut Resawing

    Admire your handiwork; wipe away the sweat.

    Resawn slice

    Plane to the line and leave a smooth surface. Since the board is too thin to hold down with the normal planing stops on the workbench, I used a couple of nails as planing stops to keep the board from sliding forward.

    Nail as planing stop

    Now we can see the contrast between the woods. The next step is to glue templates on each board, and start cutting.

    Contrast between woods

Scammers at promo-newa.com

tl;dr Don’t use promo-newa.com, they are scammers that sell fake flash.

Longer version: For the ColorHug project we buy a lot of the custom parts direct from China at a fraction of the price available to us in the UK, even with import tax considered. It would be impossible to produce such a low cost device and still make enough money to make it worth giving up our evenings and weekends. This often means sending thousands of dollars to sketchy-looking companies willing to take on small (to them!) custom orders of a few thousand parts.

So far we’ve been very lucky, until last week. I ordered 1000 customized 1GB flash drives to use as a LiveUSB image rather than using a LiveCD. I checked out the company as usual, and ordered a sample. The sample came back good quality, with 1GB of fast flash. Payment in full was sent, which isn’t unusual for my other suppliers in China.

Fast forward a few weeks. 1000 USB drives arrived, which look great. Great, until you start using them with GNOME MultiWriter, which kept throwing validation warnings. Using the awesome F3 and a few remove-insert cycles later, the f3probe tool told me the flash chip was fake, reporting the capacity to be 1GB when it was actually 96MB looped around 10 times.

Taking the drives apart you could also see the chip itself was different from the sample, and the plastic molding and metal retaining tray was a lower quality. I contacted the seller, who said he would speak to the factory later that day. The seller got back to me today, and told me that the factory has produced “B quality drives” and basically, that I got what I paid for. For another 1600USD they would send me the 1GB ICs, which I would have to switch in the USB units. Fool me once, shame on you; fool me twice, shame on me.

I suppose people can use the tiny flash drives to get the .icc profile off the LiveCD image, which was always a stumbling block for some people, but basically the drives are worthless to me as LiveUSB devices. I’m still undecided whether to include them in the ColorHug box; i.e. is a free 96MB drive better than them all going into landfill?

As this is China, I understand all my money is gone. The company listing is gone from Alibaba, so there’s not a lot I can do there. So that other people can avoid the same mistake, I’ve listed all the details here, which hopefully will become googleable:

Promo-Newa Electronic Limited(Shenzhen)
Wei and Ping Group Limited(Hongkong)  

Office: Building A, HuaQiang Garden, North HuaQiang Road, Futian district, Shenzhen China, 0755-3631 4600
Factory: Building 4, DengXinKeng Industrial Zone, JiHua Road,LongGang District, Shenzhen, China
Registered Address: 15/B—15/F Cheuk Nang Plaza 250 Hennessy Road, HongKong
Email: sales@promo-newa.com
Skype: promonewa

January 26, 2015

Mon 2015/Jan/26

  • An inlaid GNOME logo, part 1

    Esta parte en español

    I am making a special little piece. It will be an inlaid GNOME logo, made of light-colored wood on a dark-colored background.

    First, we need to make a board wide enough. Here I'm looking for which two sections of those longer/narrower boards to use.

    Grain matching pieces

    Once I am happy with the sections to use — similar grain, not too many flaws — I cross-cut them to length.

    Cross cutting

    (Yes, working in one's pajamas is fantastic and I thoroughly recommend it.)

    This is a local wood which the sawmill people call "amargoso", or bitter one. And indeed — the sawdust feels bitter in your nose.

    Once cut, we have two pieces of approximately the same length and width. They have matching grain in a V shape down the middle, which is what I want for the shape of this piece.

    V shaped grain match

    We clamp the pieces together and match-plane them. Once we open them like a book, there should be no gaps between them and we can glue them.

    Clamped pieces Match-planing Match-planed pieces

    No light shows between the boards, so there are no gaps! On to gluing. Rub both boards back and forth to spread the glue evenly. Clamp them, and wait overnight.

    No gaps! Gluing boards Clamped boards

    Meanwhile, we can prepare the wood for the inlaid pieces. I used a piece of soft maple, which is of course pretty hard — unlike hard maple, which would be too goddamn hard.

    Rough maple board

    This little board is not flat. Plane it cross-wise and check for flatness.

    Checking for flatness Planing

    Tomorrow I'll finish flattening this face of the maple, and I'll resaw a thinner slice for the inlay.

    Planed board

Mariyamukku

If Amen is an ‘Aanavaal Mothiram’, then Mariyamukku is a ‘Bhargava Charitham’.

January 23, 2015

Scientific and Technical Academy Award for the development of Bullet Physics!

The Academy of Motion Picture Arts and Sciences today announced that 21 scientific and technical achievements represented by 58 individual award recipients will be honored at its annual Scientific and Technical Awards Presentation on Saturday, February 7, at the Beverly Wilshire in Beverly Hills.

“To Erwin Coumans for the development of the Bullet physics library, and to Nafees Bin Zafar and Stephen Marshall for the separate development of two large-scale destruction simulation systems based on Bullet.

These pioneering systems demonstrated that large numbers of constrained rigid bodies could be used to animate visually complex, believable destruction effects with minimal simulation time.”

Thanks to all Bullet contributors and users!
See https://www.oscars.org/news/21-scientific-and-technical-achievements-be-honored-academy-awardsr

January 21, 2015

Plugable USB Hubs

Joshua from Plugable sent me 4 different USB hubs this week so they could be added as quirks to gnome-multi-writer. If you’re going to be writing lots of USB drives, the Plugable USB3 hubs now work really well. I’ve got a feeling that inserting and removing the drive is going to be slower than the actual writing and verifying now…

Moving update information from the distribution to upstream

I’ve been talking to various people about the update descriptions we show to the user. Without exception, the messages we show to end users are really bad. For example, the overly-complex-but-not-actually-useful:

Screenshot from 2015-01-21 10:56:34

Or, the even more-to-the-point:

Update to 3.15.4

I’m guilty of both myself. Why is this? Typically this text is written by an over-worked and under-paid packager doing updates to many applications and packages. Sometimes the packager might be the upstream maintainer, or at least involved in the project, but many times it’s just some random person that got fingered to maintain a particular package. This doesn’t make for an awesome person to write beautiful prose and text that thousands of end users are going to read. It also doesn’t make sense to write the same beautiful prose again and again for every distribution out there.

So, what do we need? We need a person who probably wrote the code, or at least signed it off, who cares about the project and cares about the user experience. i.e. the upstream maintainer.

What I’m proposing is that we ask upstream maintainers to write the release information in a way that can be shown in the software center. NEWS files are not standardized, and you don’t typically have a NEWS file for each application in your upstream tarball, so we need something else.

Surprise, surprise: it’s AppStream to the rescue. AppStream has a <release> object that fits the bill almost completely; you can put upstream version information and long (optionally translated) formatted descriptions in it.

Of course, you don’t want to write both NEWS and the various appdata files at release time, as that just increases the workload of the overly-busy upstream maintainer. In this case we can use appstream-util appdata-to-news in the buildsystem and generate the former from the latter automatically. We’re outputting markdown for the NEWS file, which seems to be a fairly good approximation of what NEWS files actually look like, at least for GNOME.
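
For a feel of what that conversion involves, here is a toy Python sketch of the appdata-to-news direction. The <releases>/<release> element names follow the AppStream format, but the output layout and the file name are just illustrative; the real tool is appstream-util appdata-to-news.

import xml.etree.ElementTree as ET

def appdata_to_news(path):
    # Print a NEWS-style entry for each <release> in an appdata file.
    root = ET.parse(path).getroot()
    for release in root.iter("release"):
        header = "Version %s" % release.get("version", "?")
        print(header)
        print("~" * len(header))
        desc = release.find("description")
        if desc is not None:
            for p in desc.iter("p"):
                print(p.text.strip())
        print("")

appdata_to_news("gnome-multi-writer.appdata.xml")  # hypothetical file name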

For a real-world example, see the GNOME MultiWriter example commit that uses this.

There are several problems with this approach. One is that the translators might have to translate lots more text; the obvious solution to that seems to be to only mark strings to be translated for stable versions. Alas, projects like GNOME don’t allow any new strings in stable versions, so we’ll either have to come up with an “except for release notes” amendment to that, or just say that all the release notes are only ever available in the C locale.

The huge thing to take away from this blog, if you are intending to use this new feature, is that update descriptions have to be understandable by end users. “Various bug fixes” is not helpful, but “Fixes a crash when adding a joystick” is. End users neither care about nor understand “Use libgusb rather than libusbx”, and technical details that do not affect the UI or UX of the application should be omitted.

This doesn’t work for distribution releases, e.g. 3.14.1-1 to 3.14.1-2, but typically these are not huge changes that we need to show so prominently to the user.

I’m also writing a news-to-appdata.py script, so if anyone wants to take the plunge on an existing project it might be good to wait for that unless you like lots of copy and pasting.

Comments, as always, welcome.

January 20, 2015

Stellarium 0.13.2 and 0.12.5 have been released!

After 3 months of development, the Stellarium development team is proud to announce the second bugfix release in the 0.13.x series - version 0.13.2. This version closes over 70 bugs and includes some wishes and nice new features, like visualization of the zodiacal light and new sky cultures.

We also announce a new release in the 0.12.x series - version 0.12.5 - for which we have backported some features from the 0.13.x series.

A huge thanks to our community whose contributions help to make Stellarium better!

Full list of changes: https://launchpad.net/stellarium/+milestone/0.13.2

Searching for Morevna image

Nikolai Mamashev takes his own search for the image of […]

January 19, 2015

Morevna Child (coloured)

Some time ago we have published a “Morevna Child& […]

January 18, 2015

Another stick figure in peril

One of my favorite categories of funny sign: "Stick figures in peril". This one was on one of those automated gates, where you type in a code and it rolls aside, and on the way out it automatically senses your car.

[Moving gate can cause serious injury or death]

January 17, 2015

The Gorilla and the Gibbon

As a Krita developer, I'm not too happy comparing Krita to Photoshop. In fact, I have been known to scream loudly, start rolling my eyes and in general gibber like a lunatic whenever someone reports a bug with as its sole rationale "Photoshop does it like this, Krita must, too!".

But when we published the news that a group at University Paris 8 had replaced Photoshop with Krita, that comparison became inevitable. Even though it is just one group that used Photoshop for a specific purpose, one that Krita can fill as well. The news got picked up rather widely and even brought the krita.org webserver to its knees for a moment. The discussion on Hacker News got interesting when people started claiming that Krita, like Gimp, was missing so much stuff, like 16 bit/channel support, adjustment layers and so on -- even things that we've had since 2004 or 2005.

So, where are we, what's our position? In the first place, with Adobe having about a hundred developers on Photoshop, they can spend thousands of hours a week on developing Photoshop. We're lucky to get a hundred hours a week on Krita. Of course, we're awesome, but that's a huge disparity. We simply cannot duplicate all the features in Photoshop: even if we'd want to, there are not enough developer hours available. And even if there were, there's the pesky problem of incomplete file format specifications. That means choices have to be made.

We develop Krita for a specific purpose: for people to create artwork. Comics, illustrations, matte paintings, concept-art, textures. Anything that's not relevant for those purposes isn't relevant for Krita. No website design, no wedding album editing, no 3D printing, no embedded email client.

But anything that artists need for their work is relevant. And I think we're doing pretty well in that regard. Krita is an efficient tool that people find fun to use.

If that's the purpose you use Photoshop for, give Krita a try. If you use Photoshop for something else, don't bother. Or, well, you can give Krita a try, but don't be surprised if Krita is missing stuff.

Sometimes, we will do a direct clone of a Photoshop feature: the layer styles dialog Dmitry and I are working on is an example. People want that feature, even if we've got filter layers and transformation masks already. It's going to take about two to three hundred hours just to do all the typing, let alone the actual thinking about how to fit the algorithms into Krita's image composition code.

But mostly, cloning another application is a bad idea. You will always be running behind, because you cannot clone what hasn't been released yet, and unless you have more hours per week available than the clonee, you won't ever have time for introducing unique features that make your project more interesting than the clonee -- like Krita's wrap-around mode. Or the opengl canvas, which we had in 2005, and which Photoshop now also has. (This Nvidia page on how Opengl makes Photoshop more responsive could have been written for Krita. The only thing we miss is embedding a 3D model in your painting, and we've already had two Summer of Code students attempt just that.)

So, what's our roadmap for 2015, if it isn't "be Photoshop unto all people"?

These are the big issues that we need to spend serious time on:

  • Port to Qt and KDE Frameworks 5. In many respects, a waste of time, since it won't bring us anything of actual use to our users, but it has to be done.
  • Implement a performance optimization called Levels of Detail. This will make Krita work much faster with bigger images, at the expense of actually doing the pixel mangling several times.
  • Animation. Somsubhra's animation timeline is a great start, but it's not ready for end users. We had hoped for a big donation kickstarting this development, but that doesn't seem likely to materialize.
  • OSX. We've got an experimental OSX port, but it's buggy and broken and missing features. (No HDR painting, OpenGL is broken, the popup palette doesn't show in OpenGL mode, memory handling is broken -- and a thousand smaller issues.)
  • Python. Krita has been scriptable in the past. First through KJS, in the 1.x days, then through Kross (which meant javascript, ruby, python). Neither scripting interface exposed all of Krita, or even the right parts. You could create filters in Python, but automating workflow was much harder. There's a new prototype Python scripting plugin, modelled after Kate's Python plugin that would make a good start.

To make this possible, we simply have to add hours per week to Krita. Which means starting the next fundraiser, publishing Krita in more app stores, selling more DVDs, and getting more people to join the Development Fund!

January 16, 2015

Surviving winter as a motorsports fan.

Winter is that time of the year when nothing happens in the motorsport world (one exception: Dakar). Here are a few recommendations to help you through the agonizing wait:

Formula One

Start out with It Is What It Is, the autobiography of David Coulthard. It only goes until the end of 2007, but nevertheless it’s a fascinating read: rarely do you hear a sportsman speak with such openness. A good and honest insight into the mind of a sportsman and definitely not the politically correct version you’ll see on the BBC.

It Is What It Is

Next up: The Mechanic’s Tale: Life in the Pit-Lanes of Formula One by Steve Matchett, a former Benetton F1 mechanic. This covers the other side of the team: the mechanics and the engineers.

The Mechanic's Tale: Life in the Pit-Lanes of Formula One

Still feel like reading? Dive into the books of Sid Watkins, who deserves huge amounts of credit for transforming a very deadly sport into something surprisingly safe (or as he likes to point out: riding a horse is much more dangerous).

He wrote two books:

Both describe the efforts on improving safety and are filled with anecdotes.

And finally, if you prefer movies, two more recommendations. Rush, an epic story about the rivalry between Niki Lauda and James Hunt. Even my girlfriend enjoyed it and she has zero interest in motorsports.

Rush

And finally Senna, the documentary about Ayrton Senna, probably the most mythical Formula One driver of all time.

Senna

Le Mans

On to that other legend: The 24 hours of Le Mans.

I cannot recommend the book Le Mans by Koen Vergeer enough. It’s beautiful, it captures the atmosphere brilliantly and seamlessly mixes it with the history of this event.

But you’ll have to go the extra mile for it: it’s in Dutch, it’s out of print and it’s getting exceedingly rare to find.

Le Mans

Nothing is lost if you can’t get hold of it. There’s also the 1971 movie with Steve McQueen: Le Mans.

It’s everything that modern racing movies are not: there’s no CG here, barely any dialog and the story is agonizingly slow if you compare it to the average Hollywood blockbuster.

But that’s the beauty of it: in this movie the talking is done by the engines. Probably the last great racing movie that featured only real cars and real driving.

Le Mans

Motorcycles

Motorcycles aren’t really my thing (not enough wheels), but I have always been in awe of the street racing that happens during the Isle of Man TT. Probably one of the craziest races in the world.

Riding Man by Mark Gardiner documents the experiences of a reporter who decides to participate in the TT.

Riding Man

And to finish, the brilliant documentary TT3D: Closer to the Edge gives a good insight into the minds of these riders.

It seems to be available online. If nothing else, I recommend you watch the first two minutes: the onboard shots of the bike accelerating on the first straight are downright terrifying.

TT3D: Closer to the Edge

Rounding up

By the time you’ve read/seen all of the above, it should finally be spring again. I hope you enjoyed this list. Any suggestions about things that would belong in this list are greatly appreciated, send them over!

“When I went to a friend's house, there was a big library. Lots of books, mostly Malayalam. Open any of them and on the first page there is a line from the author, ‘To so-and-so, with love’, and a signature. Apparently his father received them; he is said to have good connections with writers. Full of envy, I leafed through them one by one. Alongside a Naalukettu signed by MT, an Adayalangal signed by Sethu, and a Daivathinte Vikrithikal bearing Mukundan's signature, that library also had an Aithihyamala with best wishes and a signature from Kottarathil Sankunni […]

January 15, 2015

gnome-battery-bench

One thing we want to do for the next versions of GNOME and Fedora is to improve battery performance. Your laptop may well be advertised by the manufacturer to have “up to 10 hours of battery life” or some such claim. You probably don’t get anywhere near this.

Let’s put out some rough numbers here to give an overall sense of scale for the problem. For a modern ultrabook:

  • The battery is 50 watt-hours (Wh) – it can power a load of 50W for an hour or a load of 5W for 10 hours.
  • The baseline idle consumption of the system – RAM refresh, the power consumption of peripherals in power-saving mode, etc. – is 5W.
  • The screen and keyboard backlights, if both turned on to 100%, draw 5W.
  • The CPU/GPU can sustain about 15W – this is a thermal limit, so it can draw more for short bursts, but over time it will be throttled to an average.
  • All other peripherals (Wifi, bluetooth, touchpad, etc.) can use about 5W of power combined when not in power-saving mode.

So the power draw of the system can range from about 5W (the manufacturer’s 10 hours) to 30W (1 hour 40 minutes). If you have such an ultrabook, how is it doing right now? I’d guess it’s using about 15W unless you pay a lot of attention to power usage. Some of the things that might be going wrong:

  • Your keyboard/screen backlights are likely higher than is needed.
  • Some of the devices on your system don’t have power-saving turned on properly.
  • You likely have some background CPU activity (webpage ads, for example).

Of course, if you are running a compilation in the background, you want your CPU to be using that full 15W of power to get it done as soon as possible – in this case, your battery isn’t going to last very long. But a lot of usage is closer to idle.
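
The arithmetic is trivial but worth writing down; a couple of lines of Python using the hypothetical ultrabook numbers from the list above:

BATTERY_WH = 50.0  # the hypothetical ultrabook battery from above

def battery_hours(draw_watts):
    return BATTERY_WH / draw_watts

print(battery_hours(5.0))   # 10.0  hours -- the manufacturer's idle-only figure
print(battery_hours(30.0))  # ~1.67 hours -- everything running flat out
print(battery_hours(15.0))  # ~3.33 hours -- a more typical untuned draw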

Measuring power usage

I’ve made assertions above about power used by different things. But how did I actually measure that? powertop is the state of the art for measuring power usage and tweaking it on Linux. It will show you a figure for current battery discharge rate, but it bounces around by several watts, partly because powertop’s own data collection loads the system. The effect of a kernel option is usually much smaller than that. One of the larger effects I discovered on my laptop was that turning on USB autosuspend for the touchscreen saves about 150mW. When you tweak a tunable in powertop without a way to measure power usage more accurately, it’s hard to know whether any observed differences are real.

To support figuring out what is going on with power, I wrote gnome-battery-bench. What it does is pretty simple – it plays back recorded sequences of events in a loop and monitors battery charge to estimate power usage. Because battery usage is being averaged over minutes, a lot of the noise is averaged out. And the event sequences can be changed to exercise different usage patterns by the user.

The above screenshot shows gnome-battery-bench running a “Light Duty” benchmark that combines scrolling around in a Wikipedia page and typing in gedit. Instantaneous usage bounces around a lot from the activity and from random noise, but after a few cycles the averaged power and estimated battery lifetime converge. The corresponding idle power usage is about 5.5W, so we then know that we’re using about 2.9W for the activity.
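
The averaging trick is easy to try by hand. Here is a minimal sketch, assuming a battery that exposes energy_now (in µWh) at the usual sysfs path; gnome-battery-bench itself reads the battery state while it plays back events, so this is only the measuring half of the idea.

import time

BATTERY = "/sys/class/power_supply/BAT0/energy_now"  # path is an assumption

def energy_uwh():
    with open(BATTERY) as f:
        return int(f.read())

# Sample the battery a few minutes apart; averaging over minutes smooths
# out the noise that an instantaneous reading shows.
e0, t0 = energy_uwh(), time.time()
time.sleep(600)
e1, t1 = energy_uwh(), time.time()

watts = (e0 - e1) / 1e6 / ((t1 - t0) / 3600.0)
print("average draw: %.2f W" % watts)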

gnome-battery-bench is designed as a graphical application because I want to encourage people to explore with it and find out interactively what is using power on their system. And graphing is also useful so that the user can see when something is going wrong with the measurement; sometimes batteries will report data that jumps around. But there’s also a command line version that can be used for automatic scripting of benchmarks.

I decided to use recorded sequences of events for a couple of reasons: first, it’s easy for anybody to create new test sequences – you just run the gnome-battery-bench command line tool in record mode and do what you want to test. Second, playing back event sequences at a low level simulates user interaction very accurately. There is little CPU overhead, and as far as the desktop is concerned it’s exactly like user input. Of course, not everything can be easily tested by simply playing back canned sequences, but our goal here isn’t to test everything, just to be able to test some things that are reasonably representative.

The gnome-battery-bench README file describes in more detail how it works and how to install it on your system.

Next steps

gnome-battery-bench is basically usable as is. The main remaining thing to do with it is to spend some time designing and recording a couple of sequences that better reflect actual usage. The current tests I checked in are basically just placeholders.

On the operating system, we need to make sure that we are actually shipping with as many power-saving options on for peripherals as can be supported. In particular, “SATA link power management” makes a several-watt difference.
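
For the curious, that SATA knob lives in sysfs; a small sketch of turning it on for every host (min_power is the aggressive policy, this needs root, and whether it is safe to ship as a default is exactly the question):

import glob

# Enable SATA link power management on every host; this is the same
# tunable that powertop exposes.
for path in glob.glob("/sys/class/scsi_host/host*/link_power_management_policy"):
    with open(path, "w") as f:
        f.write("min_power")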

Backlight management is another place we can make improvements. Some problems are simply bad defaults. If ambient light sensors are present on the system, using them can be a big win. If not, simply using appropriate defaults is already an improvement.

Beyond that, in GNOME, we can optimize application and system code for efficiency and to not do things unnecessarily often in the background. Eventually I’d like to figure out a way to have power consumption also tracked by perf.gnome.org so we can see how code changes affect our power consumption and avoid regressions.


Guy-In-Black concept

A coloured concept of Guy-In-Black character, made by Nikolai Mamashev.

2015-01-10-mib

Koschei the Deathless – Artwork #2

Following by Nikolai’s progress, Anastasia Majzhegisheva takes up the torch and presents her artwork on Koschei character.

2014-12-28-Koshei-2-4

2014-12-28-Koshei-3-4

January 14, 2015

Fedora Design Team FAD this weekend!

Design Team FAD Logo

Starting this Friday through the weekend, we’re having the very first Fedora Design Team FAD here at Red Hat’s Boston-area office. A number of design team members are going to come together for two and a half days and plan the basic roadmap for the design team over the next year or two, as well as more hands-on tasks that could involve cleaning out our ticket queue and maybe even working on wallpaper ideas for Fedora 22. :)

Join Us Virtually!

We want to allow remote participants (yes, even you :) ) to join us, so we will have an OpenTokRTC video stream as well as a Google On Air Hangout for each day of the event. We will also be in #fedora-design on irc.freenode.net, and we’ll have a shared notepad on piratepad.

Video Stream Links

OpenTokRTC

OpenTokRTC is an open source project for webrtc video conferencing; opentokrtc.com is the demo site set up by TokBox, the project’s sponsor. If you have issues with this feed, please jump to the appropriate Google Hangout.

Google Hangouts

Google Hangouts are unfortunately not open source, but we have set these up as “On Air” hangouts so you do not need to be logged into Google to view them nor should you need to install Google’s plugin to view them.

Other Resources

Chat + Notes
  • #fedora-design on irc.freenode.net – this is the official chat channel for the event.
  • Design FAD piratepad – we’ll take notes as the event progresses here; for example, as we make decisions we’ll track them here.

What are We Working on, When?

We’ll flesh out the fine details of the schedule during the first hour or so of the event; I will try to update the session titles on FedoCal to reflect that as we hash it out. (Likely, it will be documented on the piratepad first.)

Schedule
  • Schedule on FedoCal (note that FedoCal has built-in timezone conversion so you can view in your local timezone :) )

 

See you there! :)

Arbitrary contour quadrangulation

Hi

Quadrangulating an arbitrary contour as "evenly" as possible is by no means an easy task; many algorithms exist and even complete theses have been written about it.

Recently I was developing such a tool that can be used to improve realtime retopology.

I have tested it with arbitrary contours, even ill-shaped ones, and generally it performs quite well; for patch-like contours used in automatic retopo it really excels :)

Cheers

ret bestofall bestofallcube good jgy quad1 quads3 quads4 quads6 quads8 qube5


Morevna wallpapers by Anastasia Majzhegisheva

Anastasia Majzhegisheva brings a coloured version of her recent Morevna artwork. Enjoy!

2015-01-11-Morevna-3

2015-01-11-Morevna-2

The artwork is painted completely in Krita. Here are a few WIP screenshots.

2011-01-11-2 2011-01-11-3 2011-01-11-4

Koschei The Deathless – Artwork #1

After having some fun with concept images, Nikolai Mamashev presents the sample lineart of Koschei The Deathless.

2014-12-12-sketch-med

 

January 13, 2015

A little fun.

So, it has been a long time since I published anything Oxygen/KDE related. I have been taking some time off from the extreme amount of responsibility/work that Oxygen/KDE was. It was for the best, and it's great fun seeing Breeze develop its own little magic. They are just great.
Plus it gets me time to reinvent my design language and skill sets in a vastly different design-language world from what we had just a few years ago.
I might start something new for fun that is a bit more structured; things are starting to make sense to me and, more importantly, it's fun again.

Anyway, a picture in every post, right? So here goes a new take on a wallpaper I did a few years ago, quadros2.

Concepts of Koschei the Deathless by Nikolai Mamashev

For the last few weeks, Nikolai Mamashev has been intensively researching the main villain of the Morevna series – Koschei the Deathless. Here we would like to share some concepts made by Nikolai.

2014-10-15-1 2014-10-01-4k 2014-10-15-h 2014-12-15-k-v1 2014-10-15-1L 2014-12-08-k3 2014-12-10-k4 2014-12-11-k8 2014-10-15-gr

 

January 12, 2015

Fanart by Anastasia Majzhegisheva – 18

Morevna sketch by Anastasia Majzhegisheva

Morevna sketch by Anastasia Majzhegisheva

January 09, 2015

Finding hidden applications with GNOME Software

When you do a search in GNOME Software it returns any result of any application with AppStream metadata and with a package name it can resolve in any remote repository. This works really well for software you’re installing from the main distribution repos, but less well for some other common cases.

Let’s say I want to install Google Chrome so that my 2-year-old daughter can ring me on hangouts and tell me that dinner is ready. Let’s search for Chrome on my Fedora Rawhide system.

Screenshot from 2015-01-09 16:37:45

Whoa! Wait, how did you do that? First, this exists in /etc/yum.repos.d/google-chrome.repo — the important line being enabled_metadata=1. This means “download just the metadata even when enabled=0” and means we can get information about what packages are available in repos we are not enabling by default for legal or policy reasons.

[google-chrome]
name=google-chrome
baseurl=http://dl.google.com/linux/chrome/rpm/stable/x86_64
enabled=0
gpgcheck=1
repo_gpgcheck=1
enabled_metadata=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub

We’ve also got a little XML document with the AppStream metadata (just the long description and keywords) called /usr/share/app-info/xmls/google-chrome.xml which could be included in the usual vendor-supplied fedora-22.xml if that’s what we want to do.

Screenshot from 2015-01-09 16:40:09

The other awesome feature this unlocks is when we have addon repos that are not enabled by default. For instance, my utopia repo full of super new free software applications could be included in Fedora, and if the user types in the search word we ask if the repo should be enabled. This would solve a lot of use cases if we could ship .repo files for a few popular COPRs of stuff we don’t (yet) ship in Fedora, but are otherwise free and open source software.

Screenshot from 2015-01-09 16:51:00

All the components to do this are upstream in Fedora 22 (you need a new librepo, libhif, PackageKit, libappstream-glib and gnome-software, phew!) although I’m sure we’ll be tweaking the UI and UX before Fedora 22 is released. Comments welcome.

 

GNOME MultiWriter 3.15.2

I’ve just released GNOME MultiWriter 3.15.2, which is the first release that makes it pretty much feature complete for me.

Reads and writes are now spread over root hubs to increase throughput. If you’ve got a hub with more than 7 ports and the port numbers don’t match the decals on the device please contact me for more instructions.

In this release I’ve also added the ability to completely wipe a drive (write the image, then NULs to pad it out to the size of the media) and made that and the verification step optional. We also now show a warning dialog to the user the very first time the application is used, and some global progress in the title bar so you can see the total read and write throughput of the application.

With this release I’ve now moved the source to git.gnome.org and will do future releases to ftp.gnome.org like all the other GNOME modules. If you see something obviously broken and you have GNOME commit access, please just jump in and fix it. The translators have done a wonderful job using transifex, but now I’m letting the just-as-awesome GNOME translation teams handle localisation.

If you’ve got a few minutes, and want to try it out, you can clone the git repo or install a package for Fedora.

Richard

January 08, 2015

Accessing image metadata: storing tags inside the image file

A Slashdot discussion a while back on image tagging and organization got me thinking about putting image tags inside each image, in its metadata.

Currently, I use my MetaPho image tagger to update a file named Tags in the same directory as the images I'm tagging. Then I have a script called fotogr that searches for combinations of tags in these Tags files.

That works fine. But I have occasionally wondered if I should also be saving tags inside the images themselves, in case I ever want compatibility with other programs. I decided I should at least figure out how that would work, in case I want to add it to MetaPho.

I thought it would be simple -- add some sort of key in the image's EXIF tags. But no -- EXIF has no provision for tags or keywords. But JPEG (and some other formats) supports lots of tags besides EXIF. Was it one of the XMP tags?

Web searching only increased my confusion; it seems that there is no standard for this, but there have been lots of pseudo-standards over the years. It's not clear what tag most programs read, but my impression is that the most common is the "Keywords" IPTC tag.

Okay. So how would I read or change that from a Python program?

Lots of Python libraries can read EXIF tags, including Python's own PIL library -- I even wrote a few years ago about reading EXIF from PIL. But writing it is another story.

Nearly everybody points to pyexiv2, a fairly mature library that even has a well-written pyexiv2 tutorial. Great! The only problem with it is that the pyexiv2 front page has a big red Deprecation warning saying that it's being replaced by GExiv2. With a link that goes to a nonexistent page; and Debian doesn't seem to have a package for GExiv2, nor could I find a tutorial on it anywhere.

Sigh. I have to say that pyexiv2 sounds like a much better bet for now even if it is supposedly deprecated.

Following the tutorial, I was able to whip up a little proof of concept that can look for an IPTC Keywords tag in an existing image, print out its value, add new tags to it and write it back to the file.

import sys
import pyexiv2

if len(sys.argv) < 2:
    print "Usage:", sys.argv[0], "imagename.jpg [tag ...]"
    sys.exit(1)

# Read the metadata that's already in the image file.
metadata = pyexiv2.ImageMetadata(sys.argv[1])
metadata.read()

# Any extra command-line arguments are keywords to add.
newkeywords = sys.argv[2:]

keyword_tag = 'Iptc.Application2.Keywords'
if keyword_tag in metadata.iptc_keys:
    # The image already has IPTC keywords: show them, then append ours.
    tag = metadata[keyword_tag]
    oldkeywords = tag.value
    print "Existing keywords:", oldkeywords
    if not newkeywords:
        sys.exit(0)
    for newkey in newkeywords:
        oldkeywords.append(newkey)
    tag.value = oldkeywords
else:
    # No keywords yet: create the IPTC tag from scratch.
    print "No IPTC keywords set yet"
    if not newkeywords:
        sys.exit(0)
    metadata[keyword_tag] = pyexiv2.IptcTag(keyword_tag, newkeywords)

tag = metadata[keyword_tag]
print "New keywords:", tag.value

# Write the modified metadata back to the image file.
metadata.write()

Does that mean I'm immediately adding it to MetaPho? No. To be honest, I'm not sure I care very much, since I don't have any other software that uses that IPTC field and no other MetaPho user has ever asked for it. But it's nice to know that if I ever have a reason to add it, I can.

SVG Working Group Meeting Report — Santa Clara (TPAC)

This post got delayed due to work on ‘units’ for the 0.91 Inkscape release followed by the holidays.

The SVG Working Group had a two day meeting in Santa Clara as part of TPAC (the yearly meeting of all W3C working groups) at the end of October. This is an occasion to meet in person with other groups who have some shared interests in your group’s work. I would like to thank the Inkscape board for partially funding my attendance and W3C for waiving the conference fee.

Here are some highlights of the meeting:

Day 1, Morning

Minutes

The morning session was divided into two parts: the first part was an SVG only meeting while the second part was a joint meeting with the Task Force for Accessibility.

  • SVG blending when embedded via <img>:

    This is probably not a really interesting topic to readers of this blog, other than that it can give one a flavor of the types of discussions that go on inside the SVG working group. We spent considerable time debating if elements inside an SVG that is included in a web page by the HTML <img> tag should blend with elements outside the SVG (other than following the simple “painters model” where transparency is allowed). Recall that in SVG 2 (and CSS) it is possible to select blend modes using the ‘mix-blend-mode’ CSS property (see my blog post about blending). So the question becomes: should objects like a rectangle (inside the SVG referenced by an <img> element) with a ‘mix-blend-mode’ value of say ‘screen’ blend with an image in the HTML page behind? We finally concluded that an author would expect an external SVG to be isolated and not blend with other objects in the HTML page.

  • Accessibility:

    The Accessibility Task Force asked to meet with us to discuss accessibility issues in graphics. Work has begun on SVG2 Accessibility API Mappings. An example of how accessibility can work with graphics can be found in a Surfin’ Safari blog post.

Day 1, Afternoon

Minutes

The afternoon session was a joint meeting with the CSS working group.

  • Text Decoration

    CSS has expanded the possibilities of how text is decorated (underlines, over-lines, etc.) by adding three new properties in CSS Text Decorations Module Level 3. The new properties ‘text-decoration-line’ and ‘text-decoration-style’ are easy to adopt into SVG (and in fact are already read and rendered by Inkscape 0.91). The new property ‘text-decoration-color’ is more problematic. SVG has long supported separate ‘fill’ and ‘stroke’ properties on text, which also apply to text decoration. By careful nesting of <tspan>’s one can have a different underline color from the text color. Furthermore, SVG allows various paints to be applied to the text decoration, like a gradient or pattern fill. The ‘text-decoration-color’ property allows the color of the text decoration to be set directly, without the need for nested <tspan>’s, so it is quite an attractive idea, but how to support the richness found in SVG?

    I proposed a number of solutions (see my presentation). The CSS group agreed that my favorite solution, adding ‘text-decoration-fill’ and ‘text-decoration-stroke’, was the proper way to move forward. (BTW, the CSS working group would like to eventually allow fill and stroke on HTML text.)

  • Fitting Text in a Box

    We’ve had numerous requests for the ability to adjust the size of text to fit it inside a given box (note, this is not the same as wrapping text into a shape). SVG has the attribute ‘textLength’ which allows a renderer to adjust the spacing or glyph width to match text to a given length. It was intended to allow renderers to adjust the length of a given text string to account for differences in font metrics if the specified font wasn’t available; it was never intended to be an overall solution to fitting text inside a box, and in fact the SVG 2 spec currently warns against using it in this way. I received a proposal from another Inkscape developer on expanding ‘textLength’ to be more useful in fitting text in a box. It seems to me that finding a solution to this problem would be of more general interest than just for SVG, so I added this topic to the SVG/CSS agenda. I prepared a presentation to provide a starting point for the discussion.

    We had quite a lengthy discussion. The consensus seemed to be that CSS could use a set of simple knobs to make small adjustments to text, mostly for the purpose of labels. This would satisfy most use cases. Large adjustments could (should?) be the domain of script libraries. It was decided to solicit more feedback from users.

  • Image Rendering

    CSS Images 3 has co-opted the SVG ‘image-rendering‘ property and redefined it to specify what aspect of an image is important to preserve when scaling, as compared to the speed/accuracy trade-off in SVG 1.1. I prepared a short report on a couple of issues I found. The first is that the specification does not describe very well the meaning of the new ‘crisp-edges’ value. Tab Atkins, one of the spec’s authors, has agreed to elaborate and add some figures to demonstrate what is intended. I found the Wikipedia section Pixel art scaling algorithms to be particularly enlightening on the subject.

    The second issue is that some browsers and Inkscape use the now deprecated ‘optimizeSpeed’ value to indicate that the nearest neighbor algorithm should be used for scaling. This is important when scaling line art. I asked, and Tab agreed, that ‘optimizeSpeed’ value should correspond to the new ‘pixelated’ value to not break existing content (and not ‘auto’ as is currently in the spec).

  • Connectors

    I’ve been working on a connectors proposal for SVG. There is renewed interest as being able to show relationships between elements would greatly aid accessibility. We even had a brief meeting with the HTML working group where it was suggested that connectors (possibly without visual links) may be of interest to aid accessibility of HTML. One problem I’ve had is how to reference ports inside a <symbol> element. I asked the CSS group for suggestions (this is obviously not a styling issue but the CSS group members are experts at syntax). Tab Atkins suggested: url(#AndGate1) Out, Mid1, Mid2, url(#AndGate2) InA, where, for example, Out is the point defined inside the symbol with the ‘id’ AndGate1.

Day 2

Minutes

The SVG working group met for entire day covering a real hodge-podge of topics, some not well minuted. Here are a few highlights:

  • NVidia presentation.

    NVidia gave a quite impressive demonstration of their OpenGL extensions for rendering 2D vectors (think SVG), showing an entire HTML web page from the New York Times being rotated and scaled in real time on their Tegra-based Shield tablet, with all the text rendered as vectors (they can render 700,000 paths per second). They are trying to get other vendors interested in the extensions but it doesn’t seem to be a high priority for them.

  • CTM Calculations

    For mapping applications, a precision greater than single precision is necessary when calculating the Current Transformation Matrix (CTM) due to rounding errors. It was proposed and accepted that SVG dictate that such calculations be done in double precision (as Inkscape already does). (Note: single precision is sufficient for actual rendering.) A small sketch of the precision issue appears at the end of this report.

  • Going to Last Call Working Draft

    We discussed when we’ll get SVG 2 out the door. It is a very large specification with various parts in various stages of readiness. We decided to target the February face-to-face meeting in Sydney as the date we move to the next stage in the specification process… where no new features can be added and incomplete ones removed.

  • HTML in SVG

    There has been a desire by some for quite a while to allow HTML directly inside SVG (not wrapped by a <foreignObject> tag). I personally am quite hesitant to see this happen. SVG is at the moment a nice stand-alone graphics specification that doesn’t necessarily have to be rendered in a Web browser. Incorporating HTML would threaten this.

  • SVG in HTML

    This is the opposite of the previous topic: allowing SVG to be directly embedded in HTML without using a namespace.

  • Non-scaling Patterns

    Just as it is often useful to have non-scaling stroke widths (especially for technical drawings), it would also be useful to have non-scaling patterns and hatches. We agreed that this should be added to the specification.

  • Minimum Stroke Width

    It would be useful to have a minimum stroke-width so that certain strokes do not disappear when a drawing is scaled down. It was claimed that this will be handled by vector-effect but I don’t see how.

  • SVG in Industry

    It was mentioned that Boeing is moving all their 787 docs to SVG so they can be viewed in browsers.

Unfortunately, we ran out of time before we could cover some of my other topics: stroke-miterlimit, text on a shape, and auto-path closing.

Fanart by Anastasia Majzhegisheva – 17

Morevna artwork by Anastasia Majzhegisheva.

January 07, 2015

Guy-In-Black: First sketch

Guy-In-Black is another new character, who will appear in the new episode. Artwork by Nikolai Mamashev.

January 06, 2015

On the train, the Bengalis travelling with us test the wits of a four-year-old Bengali boy by asking him the Malayalam for various words. They clap on hearing the right answers. The parents are delighted. Glorious Malayalam!

Project Activity in Bug Reports

Sven Langkamp recently mentioned that Krita had crept up to second place in the list of projects with the most new bugs opened in Bugzilla in a year. So I decided to play around a little while Krita was building.

Bugzilla has a nice little report that can show the top X projects with open bugs for certain periods. Krita is never in the default top 20, because other KDE projects always have more open bugs. But let's take the top 100 KDE projects with open bugs, sort the data a bit, and then make top-10 lists from the other columns.

Note: there might be projects where more bugs were opened and closed in the past year, but I cannot get that information without going into SQL directly. But I think most active KDE projects are in the top 100.
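
The munging itself is trivial. A hypothetical Python sketch, assuming the report has been exported to a CSV with 'project', 'new', 'closed' and 'open' columns (Bugzilla's real export format may differ):

import csv

# Load the top-100 report exported from Bugzilla (column names are assumed).
with open("kde-top100.csv") as f:
    rows = list(csv.DictReader(f))

# Rank the projects on each column and print a top-10 list.
for column in ("new", "closed", "open"):
    ranked = sorted(rows, key=lambda r: int(r[column]), reverse=True)
    print("Top 10 by %s bugs:" % column)
    for r in ranked[:10]:
        print("  %s: %s" % (r["project"], r[column]))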

New bugs created. This is a pretty fair indication of userbase, actually. A project that has a lot of users will get a lot of bug reports. Some might quibble that there's a component of code quality involved, but honestly, pretty much all code is pretty much equal. If you just use an application, you'll mostly be fine, and if you start hacking on it, you'll be horrified. That's normal, it holds for all software.

  • plasmashell: 1012
  • krita: 748
  • plasma: 674
  • kwin: 482
  • digikam: 460
  • kmail2: 388
  • valgrind: 274
  • Akonadi: 270
  • kate: 267
  • kdevelop: 258

I have to admit to being a bit fuzzy about the difference between plasma and plasmashell. It looks like our own developers know how to find bugzilla without trouble, given that there are two, three developer-oriented projects in the top-ten. Of course, valgrind is also widely used outside the KDE community.

Now for bugs closed. This might say something about project activity, either development or triaging. It's a good statistic to be in the top-ten in!

  • plasmashell: -917
  • krita: -637
  • digikam: -615
  • plasma: -479
  • kwin: -391
  • okular: -346
  • dolphin: -263
  • amarok: -255
  • valgrind: -254
  • kate: -249

Not a hugely different list, but it's interesting to see that there are several projects that are in the top-ten for closing bugs, that aren't in the top-ten for receiving new bugs. Maybe that is an indication of code quality? Or maybe better bug triagers? If a project is in the first list, but not in the second list, it might be taken to mean that it's got users, but that development is lagging.

Open bugs. A project can go a long time and collect a huge amount of bugs over that period without having much activity. For instance, in this list, KMail has 880 bugs, but there were zero new bugs in 2014 and only seven bugs closed. I'd say that it's time to remove kmail from bugzilla entirely, or mark all remaining kmail bugs as "unmaintained". The same goes, I guess, for the kio component: 550 open bugs, 1 new, 1 closed in a year.

  • plasma: 1449
  • konqueror: 1432
  • kmail2: 1107
  • kopete: 942
  • kdelibs: 921
  • kmail: 880
  • Akonadi: 650
  • valgrind: 580
  • kio: 550
  • systemsettings: 495
  • kontact: 479

Krita has 237 open bugs, by the way, but since we're working the 2.9 release, that number fluctuates quite a bit.

Conclusions? Well, perhaps none. If bugs are any indication of a project's user base and activity, it's clear that KDE's desktop (plasma, kwin) has the biggest userbase, followed by Krita and Digikam. Maybe that comes as a surprise -- I know I was surprised when Sven noted it.

And there's one more twist -- everyone who uses the Plasma shell or kwin can easily report crashes to bugzilla, because they're on Linux. Most Krita (and I guess Digikam) users are actually not on Linux. Krita's Windows crashes right now still get reported to a server hosted by KO, which is something I need to work on to change...

January 05, 2015

GNOME MultiWriter and Large Hubs

Today I released the first version of GNOME MultiWriter, and built it for Rawhide and F21. It’s good enough for a first release, although there are still a few things left to do. The most important now is probably the self-profiling mode so that we know the best number of parallel threads to use for the read and the write. I want this to Just Work without any user interaction, but I’ll have to wait for my shipment of USB drives to arrive before I can add that functionality.

Also important to the UX is how we display failed devices. Most new USB devices accept the ISO image without a fuss, but the odd device will disconnect before completion or throw a write error. In this case it’s important to know which device is the one that belongs in the rubbish bin. This is harder than you think, as the electrical port number does not always match the decal on the plastic box.

For my test system I purchased a 10-port USB hub. I was interested to know how the vendor implemented this, as I don’t know of a SOIC chip that can drive more than 7 ports. It turns out, my 10-port hub is actually a 4-port hub, with a 7-port hub attached to the last port of the first hub. The second hub was also wired 1,2,3,4,7,6,5 rather than 1,2,3,4,5,6,7. This could cause my dad some issues when we tell him that device #5 needs removing.

I’ve merged some code into GNOME MultiWriter to work around this, but if you’ve got a hub with >7 ports I’d be interested to know if this works for you, or if we need to add some more VID/PID matches. If you do try this out you need libgusb from git master today. Helpfully gnome-multi-writer outputs quirk info to the command line if you use --verbose, so that makes debugging this stuff easier.
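
The workaround boils down to a quirk table keyed on the hub's VID/PID that maps electrical port numbers back to decal order. A hypothetical Python sketch of the idea (the real code lives in GNOME MultiWriter and libgusb; these IDs and this wiring are made up):

# Map the (vid, pid) of a known-miswired hub to its decal order.
# These IDs and this wiring are illustrative, not real quirk data.
HUB_QUIRKS = {
    (0x1234, 0x5678): [1, 2, 3, 4, 7, 6, 5],
}

def decal_port(vid, pid, electrical_port):
    """Translate an electrical port number to the number printed on the case."""
    wiring = HUB_QUIRKS.get((vid, pid))
    if wiring is None:
        return electrical_port  # no quirk known: trust the electrical numbering
    return wiring[electrical_port - 1]

print(decal_port(0x1234, 0x5678, 5))  # electrical port 5 carries decal number 7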

January 04, 2015

Mechanic-Sister Concept

Concept of the youngest sister of Ivan (our main character). We haven’t defined her name yet – just calling her “Mechanic”, because this is who she is. Artwork by Anastasia Majzhegisheva.

A modular manual to guide new Krita users

First post of the year, so I can start wishing you all an awesome and happy new year!

As part of my work with Activ Design, I need to prepare some new training material to teach digital painting with Krita. Part of this material will be some kind of software manual. As we don’t have an up-to-date manual, and the booksprint project we started to discuss is on stand-by for now, I started to write a modular manual to guide new Krita users. I call it modular because I want to split each aspect of the software into its own chapter, in separate pdf files (for convenience I’ll also make a version with all chapters in one file at the end). I’m writing both English and French versions: English as it’s the best base for everyone to read and translate, and French because our current students at Activ Design are French ;)

To follow the good old rule “release early, release often”, the first files are already available. This is just the beginning, I’ll add the next chapters in the coming days. Next topics will be the layer stack, the brush editor, and one chapter for each group of tools.

English files:

French files:

If you want to add some translations, the sources are in a git repository: https://gitorious.org/krita-guide

A little side note on another topic: if you’ve followed our fundraising campaign for GCompris, you may have noticed we extended the deadline to February 1st. We really need more budget to be able to complete new illustrations for all the different activities, which is needed to reach unified graphics. Please support this project if you can!

January 03, 2015

Launching the new production!


Hello, everyone! I am happy to announce that we have started preparations for the production of the new episode of the “Morevna” series!

We will take a part of the screenplay and produce an animated short with a storyline and dialogue (the approximate planned duration is 7 minutes). As usual, the production will be done completely with open-source software only – the main tools are Synfig, Blender, and Krita.

Like in the previous production two years ago, we have Nikolai Mamashev in the core team. Also, Anastasia Majzhegisheva will join us (you are probably already familiar with her works). For the development we can expect support from Ivan Mahonin, who is famous for his work on Synfig.

In the next few weeks we plan to push massive updates to the website, and more production details will be announced soon.


Artwork by: Anastasia Majzhegisheva.


January 02, 2015

Introducing GNOME MultiWriter

I spent last night writing a GNOME application to duplicate a ton of USB devices. I looked at mdcp, Clonezilla and also just writing something loopy in bash, but I need something simple my dad could use for a couple of hours a week without help.

It’s going to be super useful for me when I start sending out LiveUSB disks in the ColorHug box (rather than LiveCDs), and possibly useful to other people too: copying a USB drive for QA testing on a small group of people, an XFCE live CD of Fedora rawhide for a code sprint, and that kind of thing.

GNOME MultiWriter allows you to write and verify an ISO file to up to 20 USB devices at once.

Bugs (and especially pull requests) accepted on GitHub; if there’s sufficient interest I’ll move the project to git.gnome.org after a few releases.

December 31, 2014

18 anticipated Blender development projects of 2015

Here’s my randomly ordered list of 18 projects which are expected to make it into Blender next year. Nearly all of these have already started – but some are still in the design and ideas phase. Most of the projects are being worked on with support from the Blender Foundation Development Fund or by Blender Institute – thanks to Blender Cloud subscribers and open movie supporters.

It’s an impressive list, and I can’t wait for all of this to happen in 2015! Please keep in mind that – as for any end-of-year listing – this is a subjective and personal overview. I’m sure there are developers out there who have great surprises up their sleeves for us.

On behalf of everyone at blender.org I wish you a happy new year!
Ton Roosendaal – Amsterdam, 31-12-2014

Dependency Graph

There’s no doubt about the importance of this project, yet it’s a difficult one to present or promote.
Just think of it like this: the “Depsgraph” is the core Animation Engine of Blender, the engine that ensures all the updates work – reliably, threaded, linkable, massively simulated, or in whatever other ways we predict artists will (ab)use Blender animation for the rest of the decade – without running into too much trouble!

Working on it: Sergey Sharybin, Joshua Leung
Likely to happen in 2015: 100%

Multi View

The real hype for stereo (multi-view) film is over now. But there’s no doubt that stereo-3D film has proven to be of sufficient added value to stay around for a while.

Getting this feature to work meant working on the viewport, UI drawing, imaging systems, movie I/O, render engines, the compositor and the sequence editor. A huge job that has now been completed and will get merged in 2.74.

Working on it: Dalai Felinto
Likely to happen in 2015: 100%

OpenSubdiv

OpenSubdiv is a library that enables GPU generation and drawing of Subdivision Surfaces. The Blender branch with OpenSubdiv was ready for release last summer – but for performance reasons we decided to wait for the new OpenSubdiv release by Pixar. That release was set for Q4 of 2014, but will now likely be Q1 or Q2 of 2015.

Working on it: Sergey Sharybin
Likely to happen in 2015: 90%

Alembic

We currently use many different cache file formats for animation or physics in Blender. The Alembic library (by ILM/SPI) has been designed to replace all of that with a single compact compression format. And even more! It’s the most popular format in film and animation pipelines currently (hey Game Industry, wake up and support something similar!).

Working on it: Lukas Toenne
Likely to happen in 2015: 90% (for Blender caching)

Custom manipulators

Blender suffers from an old disease – Button Panelitis… a contagious plague all the bigger 3D software suites suffer from. Attempts to menufy, tabbify, iconify, and shortcuttify this have only helped to keep the disease within (nearly) acceptable limits. But there’s a cure too!

The real challenge is to radically rethink adding buttons/panels and to bring the UI back to the viewports – to the regions of the UI where you actually do your work.
In Blender we now test a (Python controlled) generic viewport widget system, which will first be tested and used for character rigs.

Working on it: Antony Ryakiotakis
Likely to happen in 2015: Still prototyping/designing

Workflow

As we all know, Blender’s screen estate gets too many buttons, everything gets too many options, and we can’t find keys for shortcuts anymore.

It’s time to admit that Blender has grown too large for a single configuration, that people are just too different, and that a good default just depends on a reference situation!

The solution is to go back to good old design efforts – creativity comes out of well defined restrictions, not out of unlimited choices. Exit: “default”, entrance: “Workflow”.

Working on it: Jonathan Williamson & the UI team, also related to Blender 101 code work.
Likely to happen in 2015: At least prototyping/designing

Blender 101

The “Blender 2.5 project” took off in 2009. It was a massive success for Blender, giving us serious attention and involvement from the media industry.

There are still unfinished code jobs for 2.5 – mostly related to enabling advanced configuration of editors and the whole UI (layouts, keymaps, etc.). In Python!

As a proof-of-concept for completing this target, we’d like to see a “Blender 101” prototype released – a fully release-compatible Blender version configured for teaching 3D to high school students.

Working on it: Campbell Barton, Bastien Montagne
Prototype likely to happen in 2015: 90%

Mesh editing Tools

Innovations in mesh editing keep happening in many ways – for Blender this has even taken off, surprisingly, via the Python API and released add-ons.

When it comes to efficient modeling, for example for painting tools, sculpting, retopo and UV unwrapping – we still have enough work to do in Blender. Campbell Barton – the lead coder of the Mesh module – will reserve many months next year to work on Mesh tools.

Working on it: Campbell Barton
Likely to happen in 2015: 100% surprise

Viewport

Blender has been using OpenGL since the beginning, almost 20 years ago. The viewport as we currently have it was designed with the OpenGL 1.0 spec in mind. It’s really time to upgrade that!

This work is for the larger part very technical – adding APIs, redesigning core functions for drawing. For that reason it took a while to see the real benefits.

Feasible for 2015 is to get a shader-based viewport that can be configured to align with the Workflow project – to ensure that sculptors, modelers, animators, compositors, game designers, etc. all have their own preferred and useful view of the 3D world.
Shaders can be manually edited GLSL scripts, but we should be making a decent node editor for this as well.

Working on it: Jason Wilkins and Antony Ryakiotakis
Likely to happen in 2015: 95%

Hair and particles

The Gooseberry open movie project requires sophisticated control of hair and fur on animated characters. We’re still working on fixing and updating the old system, replacing parts of it with newer code that offers more control.

New is much better control over the physics simulation of hair, including proper collisions using Bullet. There will be a better hair edit mode, and hair keyframing as well.

Ideally this was all meant to become a node system. For that, the outcome is still uncertain at this moment.

Working on it: Lukas Toenne
Likely to happen in 2015: 100% for working sims, 50% for nodes

Everything Nodes

So while we’re on Particle nodes anyway – we should check on having Modifier-Nodes, Constraint-Nodes, Transform-nodes, Generative modeling Nodes, and so on.

And – why not then try to unify it into one system? Here a lot of experimenting and design work is needed still. It’s surprising and inspiring to see how well Python scripters manage to do this (partially) already.

Working on it: Lukas Toenne, Campbell Barton, …
Likely to happen in 2015: Unknown.

Asset browsing

Blender already allows you to make complicated shots with a lot of linking-in of data from other .blend files. It’s still quite abstract and basic, though.

What we need is to expose this functionality much better to the user, including adding tools and new paradigms to manage this well.

An editor is going to be made to manage all of this linking, re-using, appending, allowing revisions or ‘level of detail’ versions and to manage preset assets and asset libraries in general.

Working on it: Bastien Montagne
Likely to happen in 2015: 98%

Blender Asset Manager + Cloud

With an asset browser working on the Blender side, we also need something working on the server side – either on an intranet or via the internet.

For this we have the Blender Cloud website running now – which is meant to be the Gooseberry Open Movie’s production platform as well – making it a true open production system.

Currently, work is being completed on efficient checkouts of parts of a film’s SVN repository (a shot for the render farm, or a job for an animator). Logging progress and reviewing is a short-term target as well.

Working on it: Francesco Siddi, Campbell Barton, …
Likely to happen in 2015: 100% (we need it ourselves)

PTex support

PTex is a Disney library that supports image textures on meshes without the need for unwrapping into a UV space first. It’s a massive workflow optimizer, but it has yet to be verified that we can make it work for us.

Work will be done on editing (painting, baking) and rendering (in Cycles).

Working on it: Nicholas Bishop
Likely to happen in 2015: 90%

Compositor Canvas

Blender’s compositor recode project is still in progress – work is being done on reducing memory consumption and on replacing the last bad (non-tiled, non-threaded) nodes, which will make compositing much more responsive.

The biggest project here would be to give the compositor “canvas awareness” – to give the compositor its own 2D space onto which inputs and outputs get flexibly mapped.

Working on it: Nobody has it assigned yet
Likely to happen in 2015: 50%

Cycles speedup

The Cycles render engine now has more or less the features we want (although baked rendering and a ‘shadow catcher’ material are still high on the list).

To survive as a real production render system we have to find ways to get render times under control. Work can still be done on optimizing the BVH, more efficient sampling, coherence, and noise reduction.

We don’t expect miracles here though – in the end it comes down to the artist – and giving artists sufficient tools to construct an efficient pipeline for fast renders of shots.

Working on it: Sergey Sharybin, Thomas Dinges
Likely to happen in 2015: 100%

Game Engine – revisited

We haven’t forgotten about BGE users and game modelers! They can expect a lot from the Viewport project – it should allow modeling and texturing using the advanced shading and lighting models currently common in Unreal and similar engines.

If that works, and our animation system is smooth and fast, and physics unified, then we only have one main target to upgrade: a decent logic editor and a way to have logic and playback work smoothly inside the animation system. (Check my proposal from 18 months ago about the future of the BGE.)

Working on it: Not assigned yet
Likely to happen in 2015: 50%

Motion tracking

A couple of important updates are scheduled for (video) motion tracking, especially to get decent automatic camera solving working – just one button press to get quick solves!

Working on it: Keir Mierle and Sergey Sharybin
Likely to happen in 2015: 90%

December 29, 2014

The International Obfuscated Clojure Coding Contest

While some people may argue that Clojure is already obfuscated by default, I think it is about time to organise an official International Obfuscated Clojure Coding Contest similar to the IOCCC. This idea was born out of my own attempts to fit my Clojure experiments into a single tweet, that is, 140 characters.

Winning IOCCC entry: flight simulator

The plan

First, get some feedback from the Clojure community on this idea. You are invited to share your thoughts as comments on this blogpost. I will also tweet about this idea. If that tweet gets 100+ retweets, I will go ahead with the next step in the plan, which is establishing the rules for this contest.

These rules are also open to discussion. At the moment I’m considering for example a category ‘Code fits in a single tweet’ and another one like ‘Code size is limited to 1024 characters’.

After these preliminary steps I will set up a website, find a jury to judge the submissions and will continue from there.

Inspiration

Since Clojure is such a powerful language, there are also plenty of opportunities to make the code more challenging to read. Mike Anderson has already created a GitHub project called clojure-golf with some tricks.

You are also invited to violate the first rule of the macro club: Don’t write macros. And obviously the second rule (write macros if that is the only way to encapsulate a pattern) should be ignored as well.

Extending datatypes in unexpected ways is always a good idea, too. See for example my answer to this StackOverflow question about ‘an idiomatic clojure way to repeat a string n times‘.

Generating code on the fly is of course a breeze in a ‘code-as-data’ language like Clojure.

Finally

So if you think you can create a fully functional chess engine in 1024 characters, a Java interpreter in a single tweet, or can make the Clojure REPL self-conscious with your obfuscated code, leave a comment. Also, if you have suggestions for rules, want to help with setting up a website, want to be a judge, or want to help in another way, I would love to hear from you.

And most importantly: have fun!


December 23, 2014

GCompris needs your help!

At the beginning of this month, we launched a crowdfunding campaign to support my project to work on the artwork and graphics redesign for GCompris. The goal of this project is to provide better, unified graphics for all the different activities inside GCompris. It is quite a big piece of work, so I really need some financial support to get it done fast enough.

The fundraiser started very well, but then we had fewer contributions in the following days. We need to make more noise and reach more people, so please donate and keep spreading the word to your friends and family. Still, I want to thank all who have already donated and/or left some nice comments. I also want to thank Mageia.org for their nice support; see their interesting blog post about it.

I count on your support, and hope the spirit of Christmas will help us.
And by the way, I wish you all some happy holidays!

GCompris crowdfunding

 

Descending into the bowels of Inkscape code

Introduction

This post is geared more to Inkscape developers than Inkscape users. I hope that recording my trials and tribulations here can help others in their coding efforts.

I have been working on adding support for ‘context-fill’ and ‘context-stroke’ to Inkscape. These magical ‘fill’ and ‘stroke’ property values will allow Inkscape to finally match marker fill (e.g. arrowhead color) to path stroke (e.g. arrow tail) in a simple way. These new property values are part of SVG 2.

Adding this support is a multi-step process. First one must be able to read the new values, and then one must be able to apply them. The former part is rather straightforward. It simply required modifying the SPIPaint class by adding a new ‘enum’ SPPaintOrigin with entries that keep track of where the paint originates. The previous ‘currentColor’ boolean has been incorporated into this new ‘enum’. The latter part proved to be much more of a challenge.

Where does the ‘apply’ code go?

The ‘context-fill’ and ‘context-stroke’ values are applicable to things that are cloned. The driving force for these values is certainly the <marker> element, but the <symbol> and <pattern> elements could also find the values useful, as could anything cloned by the <use> element. For the moment, I concentrated on implementing the values in markers and in things cloned by the <use> element.

The first question that comes to mind is: Where in the code is styling applied? This turns out not to be the best starting question for cloned objects. A better question is: How does the cloning take place? To answer this question I implemented three routines with the same name, recursivePrintTree(), but as member functions of different classes: SimpleNode, SPObject, and DisplayItem. These represent different stages in Inkscape’s processing of an SVG document. Here are the findings:

XML Tree (SimpleNode)
The XML Tree matches the structure of the XML file with some extra elements possibly added by Inkscape such as <metadata> and <sodipodi:namedview>. This is the tree that is shown in the XML editor dialog. No cloning is evident.
Object Tree (SPObject)
The object tree is similar to the XML tree. Its main purpose is to handle the CSS styling, which involves merging the various sources of styling: external style sheets (rect {fill:red;}), style properties (style=”fill:red”), and presentation attributes (fill=”red”), as well as handling all the necessary cascading of styles from parent to child. A few non-rendering elements are missing, such as <rdf:RDF>, and some new elements appear. Here is our first clue: the object tree includes the unnamed clones of elements created by the <use> element. It makes sense that they appear here. Cloned objects descending from a <use> element participate in the style cascading process. Marker clones, however, are nowhere to be seen.
Display Tree (DisplayItem)
The display (or rendering) tree includes only elements that will be rendered. All the styling has been worked out; all the placement and sizing of graphical items has been calculated. The metadata, the Inkscape editing data, and the <defs> section are all gone. But now clones of the markers appear, one clone for each node where a marker is placed, each containing the geometric data of position, scale, and orientation. This is a quite reasonable way to handle markers as each marker clone is identical to its siblings (at least until ‘context-fill’ and ‘context-stroke’ came along).

The above “tree” analysis gives the structure of each tree after it has been built, but doesn’t explain how each tree is created. This has ramifications for how one can handle the ‘context-fill’ and ‘context-stroke’ values.

Creating the XML tree

There are a couple of ways to create the XML tree. The most common is via sp_repr_read_file(). This is used in SPDocument::createNewDoc() as well as routines to read in filters, templates, etc. Files opened by Inkscape from the command line use XSLT::open() which allows XSLT stylesheets to be applied to the SVG file (something I didn’t know we supported). Both functions call sp_repr_do_read() to read the actual files and both end up with an Inkscape::XML::Document.

Creating the object tree

Once the XML tree is created, it is passed to SPDocument::createDoc(), which builds the object tree top-down via the recursive calling of SPObject::invoke_build(), starting with the document root <svg> element. SPObject::invoke_build() calls the virtual function SPObject::build(), which handles element-specific things such as reading in attributes. Attributes are read by calling SPObject::readAttr(), which performs a few checks before calling the virtual SPObject::set() function. The set() function for SPUse reads in ‘href’, the reference to the object to be cloned by the <use> element. Reading in the reference inserts a copy of the referenced object (the clone) into the object tree via SPUse::href_changed().

Markers are not cloned; only references to the marker elements are added at this step, in the SPShape::build() function.

Creating the display tree

The display tree is created by calling SPItem::invoke_show() recursively on the root object (SPItem, derived from SPObject, is for visible objects). This is done in SPDesktop::setDocument(). SPItem::invoke_show() immediately calls the virtual function SPItem::show(), which handles element-specific things. (SPRoot::show() calls SPGroup::show(), which calls SPGroup::_showChildren().) SPItem::show() creates an instance of an appropriate display class object derived from Inkscape::DrawingItem. The virtual function DrawingItem::setStyle() is called to set the style information. One thing to note is that a child is constructed (with styling) before being added to the tree, so a child’s style cannot be directly dependent on an ancestor in the tree. But ‘context-fill’ and ‘context-stroke’ need ancestor style information, so we need to supply that in a different way.

Markers are tracked by a map of vectors of pointers to Inkscape::DrawingItem instances. This map is stored in SPMarker. The map is indexed by a key for each type of marker (start, mid, end) on a particular path. The vector has an entry for each position along the path a marker is to be placed. The DrawingItem instances are created in a two step process from SPShape::show(). First a call to sp_marker_show_dimensions() ensures that the vector is of the correct size. Then a call to sp_shape_update_marker_view() calls sp_marker_show_instance() which creates a new Inkscape::DrawingItem instance if it doesn’t already exist.

Setting the style of a clone

As mentioned above, style is handled in the virtual function DrawingItem::setStyle(). The DrawingShape and DrawingText setStyle() functions call NRStyle::set(), where the work of translating the style information from the object-tree form to the display-tree form takes place (in the display tree, the styling information is in a form used by the Cairo rendering library). To handle ‘context-fill’ and ‘context-stroke’ we would like to walk up the display tree to find the style of the element that references the clone, i.e. the <use> element, or the <path> or <shape> elements in the case of markers. But at the point the setStyle() function is called on an element in the clone, the clone has not yet been inserted into the display tree, so one cannot walk up the display tree. What we can do is hand two sets of styling information from the object tree to the setStyle() function: the first being the styling information for the object at hand, and the second being the styling information that should be used in the case of ‘context-fill’ and ‘context-stroke’. We know the latter for the <use> element, as its clone is in the object tree. All that is required is setting a pointer to the context style information in the SPUse class and then passing it down to its children. This solution doesn’t work for markers, as their clones are not in the object tree. The solution for markers is to call a function that walks down the tree setting the correct styling information after the cloned marker elements are added to the display tree.
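
Abstracting away the C++, the shape of the fix is small. A Python sketch of the idea (all class and function names here are invented for illustration; Inkscape’s real classes differ):

# Invented classes sketching how a context style travels with a clone.
class Style:
    def __init__(self, fill):
        self.fill = fill

class DrawingItem:
    def set_style(self, style, context_style=None):
        # Resolve 'context-fill' against the referencing element's style.
        # The context style is handed in from the object tree, since the
        # display-tree parent is not attached yet when this runs.
        if style.fill == "context-fill" and context_style is not None:
            self.fill = context_style.fill
        else:
            self.fill = style.fill

path_style = Style(fill="blue")               # the <path> that uses the marker
arrowhead_style = Style(fill="context-fill")  # the marker's own style

head = DrawingItem()
head.set_style(arrowhead_style, context_style=path_style)
print(head.fill)  # "blue": the arrowhead picks up the fill of the referencing path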

Future work

There are still quite a few things to be done. In particular, before we can switch to using ‘context-fill’ and ‘context-stroke’, as well as the ‘orient’ attribute value ‘auto-start-reverse’ (which allows us to handle both start and end arrows with just one marker), we’ll need a fallback solution for SVG 1.1 renderers. Here is a list of things to do:

  • Handle gradients and patterns.
  • Handle text (appendText()).
  • Export to SVG 1.1.
  • Redo markers.svg (source of markers in Inkscape).

Reflections

All the code is now in trunk. To implement ‘context-fill’ and ‘context-stroke’, I added about 200 lines of code. I don’t know the exact amount of time it took, but I would guess a minimum of 20 hours. That works out to about 10 lines per hour… not very productive. The first part, reading in ‘context-stroke’ and ‘context-fill’, took about an hour. I am quite familiar with this part of the code, having C++ified the SPStyle and related classes and having implemented other new properties and property values. It was the second part that took so long. Although I have worked with parts of this code before, it was often quite opaque as to which code was doing what (and why). I ended up going down many false paths. There is a serious lack of comments, and often confusing variable and function names (what the heck is an ‘arenaitem’?). A few comments in the appropriate places could have shaved up to 90% off the time it took. In this case, the different ways markers and clones are handled required solving the same problem twice.

When you begin a project, it is very hard to estimate the time it will take. I would have thought this project could be done in less than 5 hours, possibly in just a couple. It didn’t turn out that way. On the other hand, implementing the new CSS property ‘paint-order’, which I thought would take considerable time, took only a couple of hours.

December 22, 2014

Passwordless ssh with a key: the part most tutorials skip

I'm working on my Raspberry Pi crittercam again. I got a battery, so it can be a standalone box -- it was such a hassle to set it up with two power cords dangling from it at all times -- and set it up to run automatically at boot time.

But there was one aspect of the camera that wasn't automated: when the camera is close enough to the house to see the wi-fi router, I want it to mount a filesystem from our server and store its image files there. That makes it a lot easier to check on its progress, and also saves wear on the Pi's SD card.

Only one problem: I was using sshfs to mount the disk remotely, and ssh always prompts me for a password.

Now, there are a gazillion tutorials on how to set up an ssh key. Just do a web search for ssh key or passwordless ssh key. They vary a bit in their details, but they're all the same in the important aspects. They're all the same in one other detail: none of them work for me. I generate a new key (various types) with no pass phrase, I copy it to the server's authorized keys file (several different ways, two possible filenames), I try to ssh -- and I'm prompted for a password.

After much flailing I finally found out what was missing. In addition to those two steps, you need to modify your .ssh/config file to tell it which key to use. This is especially critical if you have multiple keys on the client machine, or if you've named the file anything but the default id_dsa or id_rsa.

So here are the real steps for making an ssh key. Assume the server, the machine to which you want to ssh, is named "myserver". But these steps are all run on the client machine, the one from which you want to run ssh.

ssh-keygen -t rsa -C "Comment"
When it prompts you for a filename, give it a full pathname, e.g. ~/.ssh/id_rsa_myserver. Type in a pass phrase, or hit return twice if you want to be able to ssh without a password.
ssh-copy-id -i .ssh/id_rsa_myserver user@myserver
You can omit the user@ if you're using the same username on both machines. You'll have to type in your password on myserver.

Then edit ~/.ssh/config, and add an entry like this:

Host myserver
  User my_username
  IdentityFile ~/.ssh/id_rsa_myserver
The User line is optional, and refers to your username on myserver if it's different from the one on the client. For instance, on the Raspberry Pi, everything has to run as root because most of the hardware and camera libraries can't work any other way. But I want it using my user ID on the server side, not root.
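
Now test it:

ssh myserver

If everything is set up correctly, you're logged in without a password prompt. The sshfs mount that motivated all of this works the same way (the paths here are just examples):

sshfs myserver:/path/to/share /mnt/crittercam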

Eliminating strict host key checking

Of course, you can use this to go the other way too, and ssh to your Pi without needing to type a password every time. If you do that, and if you have several Pis, Beaglebones, plug computers or other little Linux gizmos which sometimes share the same IP address, you may run into the annoying whine ssh is prone to:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
The only way to get around this once it happens is by editing ~/.ssh/known_hosts, finding the line corresponding to the pi, and removing it (or just removing the whole file).

You're supposed to be able to turn off this check with StrictHostKeyChecking no, but it doesn't work. Fortunately, there's a trick I discovered several years ago and discussed in Three SSH tips. Here's how the Pi entry ends up looking in my desktop's ~/.ssh/config:

Host pipi
  HostName pi
  User pi
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  IdentityFile ~/.ssh/id_pi

OpenHardware : Ambient Light Sensor

My OpenHardware post about an entropy source got loads of high-quality comments; huge thanks to all who chimed in. There appear to be a few existing projects producing OpenHardware, and the various comments have links to the better ones. I’ll put this idea back on the shelf for another holiday-hacking session. I’ve still not given up on the SD card interface, although it looks like emulating a storage device might be the easiest and quickest route for any future project.

So, on to the next idea: an OpenHardware USB ambient light sensor. A lot of hardware doesn’t have a way of testing the ambient light level. Webcams don’t count; they use up waaaay too much power, and the auto-white-balance is normally hardcoded in hardware. So I was thinking of producing a very low cost mini-dongle to measure the ambient light level, so that lower-spec laptops could save tons of power. With smartphones, people are now acutely aware that up to 60% of their battery power goes into just making the screen light up, and I’m sure we could be smarter about what we do in GNOME. The problem, traditionally, has been the lack of hardware with this support.

Anyone interested?

December 20, 2014

Switching between Stable, Nearly Stable and Unstable Krita -- a guide for artists.

The first thing to do is to open the Cat's Guide To Building Krita, because that's the base for what I'm going to try to explain here. It's what I use for developing Krita, I usually have every version from 2.4 up ready for testing.

Read more ...

December 19, 2014

Programming with LLDB

Some notes on poking around with LLDB programmatically.

Build

Building on OSX was a bit of a pain - use `./configure` then `make` as suggested here, but to build the debugserver you need to open and build debugserver from tools/debugserver. You'll need to set up code signing like this.

LLDB Text API

Start with `lldb`.

Has a complete API - complex to parse responses programmatically, and no way to tie a response to the command that triggered it.

A Ruby example invoking the LLDB text API: https://gist.github.com/jorj1988/e6c3df64199b6932f39d

LLDB Machine Interface

Start with `lldb-mi --interpreter`

Incomplete API (it seemed to be missing a reasonable number of commands?), but easier to tie responses to commands. Still requires parsing.


C++ Interface

Include `include/lldb/API`, link to `liblldb.dylib`

Has a stable API here. Example here.


Python Interface

Mirrors the C++ API. Example of usage here.
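
A minimal sketch of driving the SB API from a standalone script, assuming lldb's Python module is on your PYTHONPATH and a debuggable ./a.out sits in the current directory:

import lldb

# Create a debugger, load a binary, break at main, run, and inspect the stop.
lldb.SBDebugger.Initialize()
debugger = lldb.SBDebugger.Create()
debugger.SetAsync(False)  # synchronous: API calls block until the process stops

target = debugger.CreateTarget("./a.out")
if target:
    target.BreakpointCreateByName("main")
    process = target.LaunchSimple(None, None, ".")
    if process and process.GetState() == lldb.eStateStopped:
        frame = process.GetSelectedThread().GetFrameAtIndex(0)
        print("stopped in", frame.GetFunctionName())
        process.Kill()

lldb.SBDebugger.Terminate()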


Summary

The Python API seems like the most sensible option for interacting with LLDB... But I don't think I want to program a large project in Python - maybe C++?