April 28, 2017

Fri 2017/Apr/28

  • gboolean is not Rust bool

    I ran into an interesting bug in my Rust code for librsvg. I had these structures in the C code and in the Rust code, respectively; they are supposed to be bit-compatible with each other.

    /* C code */
    
    /* Keep this in sync with rust/src/viewbox.rs::RsvgViewBox */
    typedef struct {
        cairo_rectangle_t rect;
        gboolean active;
    } RsvgViewBox;
    
    
    
    /* Rust code */
    
    /* Keep this in sync with rsvg-private.h:RsvgViewBox */
    #[repr(C)]
    pub struct RsvgViewBox {
        pub rect: cairo::Rectangle,
        pub active: bool
    }

    After I finished rustifying one of the SVG element types, a test started failing in an interesting way. The Rust code was generating a valid RsvgViewBox structure, but the C code was receiving it with a garbled active field.

    It turns out that Rust's bool is not guaranteed to have any particular representation under repr(C). Rust chooses to make it a single byte, with the only possible values being 0 or 1. In contrast, C code that uses gboolean assumes that gboolean is an int (... which C allows to be zero or anything else to represent a boolean value). Both structs have the same sizeof or mem::size_of, very likely due to struct alignment.

    I'm on x86_64, which is of course a little-endian platform, so the low byte of my gboolean active field had the correct value, but the higher bytes were garbage from the stack.

    The solution is obvious in retrospect: if the C code says you have a gboolean, bite the bullet and use a glib_sys::gboolean.
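
    Applied to the structs above, the fix is a one-field change. A sketch of the corrected Rust side, using the same crate names as above:

    /* Rust code, fixed */

    /* Keep this in sync with rsvg-private.h:RsvgViewBox */
    #[repr(C)]
    pub struct RsvgViewBox {
        pub rect: cairo::Rectangle,
        pub active: glib_sys::gboolean /* an int-sized C boolean, not a Rust bool */
    }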

    There are impl FromGlib<gboolean> for bool and the corresponding impl ToGlib for bool trait implementations, so you can do this:

    extern crate glib;
    extern crate glib_sys;
    
    use self::glib::translate::*;
    
    let my_gboolean: glib_sys::gboolean = g_some_function_that_returns_gboolean ();
    
    let my_rust_bool: bool = from_glib (my_gboolean);
    
    g_some_function_that_takes_gboolean (my_rust_bool.to_glib ());

    ... Which is really no different from the other from_glib() and to_glib() conversions you use when interfacing with the basic glib types.

    Interestingly enough, when I had functions exported from Rust to C with repr(C), or C functions imported into Rust with extern "C", my naive assumption of gboolean <-> bool worked fine for passed arguments and return values. This is probably because C promotes chars to ints in function calls, and Rust was looking only at the first char in the value. Maybe? And maybe it wouldn't have worked on a big-endian platform? Either way, I was into undefined behavior (bool is not something you repr(C)), so anything goes. I didn't disassemble things to see what was actually happening.

    There is an interesting, albeit extremely pedantic, discussion in this bug I filed about wanting a compiler warning for bool in a repr(C) struct. People suggested that bool should just be represented as C's _Bool or bool if you include <stdbool.h>. BUT! GLib's gboolean predates C99, and therefore stdbool.h.

    Instead of going into a pedantic rabbit hole of whether GLib is non-standard (I mean, it has only been around for over 20 years), or whether C99 is too new to use generally, I'll go with just using the appropriate conversions from gtk-rs.

    Alternative conclusion: C99 is the Pont Neuf of programming languages.

Krita 2017 Survey Results

A bit later than planned, but here are the 2017 Krita Survey results! We wanted to know a lot of things, like what kind of hardware and screen resolution are most common, which drawing tablets were most common, and which ones gave the most trouble. We had more than 1000 responses! Here’s a short summary; for the full report, head to the Krita User Survey Report.

  • About 55% of respondents use Windows, about 40% Linux, about 10% macOS. Web-traffic-wise, 75% browse krita.org.
  • Almost half of respondents have an NVidia graphics card, less than 25% AMD: the rest is Intel.
  • The most common amount of RAM is 8GB
  • The most common screen resolution is 1920×1080
  • The most common tablet brand is Wacom, the next most common Huion; the tablet brand that gives people the most trouble is, unsurprisingly, Genius
  • The most common image sizes are 1920×1080, then A4 at 300 DPI: 2480×3508
  • And finally, most people wish that Krita were a bit faster — something we suspect will be the case forever!

And we’ve also learned (as if we didn’t know!) that Krita users are lovely people. We got so many great messages of support in the write-in section!

April 25, 2017

Flock Cod Registration Form Design

Flock logo (plain)

We’re prepping the regcfp site to open up registration and the CFP for Flock. As a number of changes are afoot for this year’s Flock compared to previous Flocks, we’ve needed to change up the registration form accordingly. (For those interested, the discussion has been taking place on the flock-planning list).

This is a second draft of those screens after the first round of feedback. The first screen is going to spoil the surprises herein, hopefully.

First screen – change announcements, basic details

On the first screen, we announce a few changes that will be taking place at this year’s Flock. The most notable one is that we’ll now have partial Flock funding available, in an attempt to fund as many Fedora volunteers as possible to enable them to come to Flock. Another change is the addition of a nominal (~$25 USD) registration fee. We had an unusually high number of no-shows at the last Flock, which cost us funding that could have been used to bring more people to Flock. This registration fee is meant to discourage no-shows and enable more folks to come.

Flock registration mockup.

Second screen – social details, personal requirements

This is the screen where you can fill out your badge details as well as indicate your personal requirements (T-shirt size, dietary preferences/restrictions, etc.)

Second Flock registration screen - personal details for badge and prefs (dietary, etc.)

Third screen – no funding needed

Depending on how we implement it, the next section may be split into a separate form, or may be shown conditionally based on whether the registrant is requesting funding. The reason we would want to split funding requests into a separate form is that applicants will need to do some research into cost estimates for their travel, which could take some time, and we don’t want the form to time out while that’s going on.

Anyhow, this is what this page of the form looks like if you don’t need funding. Here we offer those folks who don’t need funding an opportunity to help out other attendees.

Third screen – travel details

This is the travel details page for those seeking financial assistance; it’s rather long, as we’ve many travel options, domestic and international.

Fourth screen – funding request review

This is a summary of the total funding request cost as well as the breakdown of partial funding options. I’d really like to hear your feedback on this, if it’s confusing or if it makes sense. Are there too many partial options?

mockup providing partial funding options

Final screen – summary

This screen is just a summary of everything submitted as well as information about next steps.

final screen - registration summary and next steps

What do you think?

Do these seem to make sense? Any confusion or issues come up as you were reading through them? Please let me know. You can drop a comment or join the convo on flock-planning.

Cheers!

(Update: Changed the language of the first questions in both of the 3rd screens; there were confusing double-negatives pointed out by Rebecca Fernandez. Thanks for the help!)

Reverse engineering ComputerHardwareIds.exe with winedbg

In an ideal world vendors could use the same GUID value for hardware matching in Windows and Linux firmware. When installing firmware and drivers in Windows, vendors can always use some generated HardwareID GUIDs that match useful things like the BIOS vendor and the product SKU. It would make sense to use the same scheme as Microsoft. There are a few issues with an otherwise simple plan.

The first issue, solved with a simple kernel patch I wrote (awaiting review by Jean Delvare), is that a few more SMBIOS fields required for the GUID calculation need to be exposed in /sys/class/dmi/id.

The second problem is a little more tricky. We don’t actually know how Microsoft joins the strings, what encoding is used, or more importantly the secret namespace UUID used to seed the GUID. The only thing we have got is the closed source ComputerHardwareIds.exe program in the Windows DDK. This, luckily, runs in Wine although Wine isn’t able to get the system firmware data itself. This can be worked around, and actually makes testing easier.

So, some research. All we know from the MSDN page is that "Each hardware ID string is converted into a GUID by using the SHA-1 hashing algorithm", which actually tells us quite a bit. Generating a GUID from a SHA-1 hash means this has to be a type-5 UUID.

The reference code for a type-5 UUID is helpfully available in the IETF RFC document so it’s quite quick to get started with research. From a few minutes of searching online, the most likely symbols the program will be using are the BCrypt* set of functions. From the RFC code, we call the checksum generation update function with first the encoded namespace (aha!) and then the encoded joined string (ahaha!). For Win32 programs, BCryptHashData is the function we want to trace.

So, to check:

wine /home/hughsie/ComputerHardwareIds.exe /mfg "To be filled by O.E.M."

…matches the reference HardwareID-14 output from Microsoft. So, onto debugging; using +relay shows all the calling values and return values from each Win32 exported symbol:

WINEDEBUG=+relay winedbg --gdb ~/ComputerHardwareIds.exe
Wine-gdb> b BCryptHashData
Wine-gdb> r ~/ComputerHardwareIds.exe /mfg "To be filled by O.E.M." /family "To be filled by O.E.M."
005b:Call bcrypt.BCryptHashData(0011bab8,0033fcf4,00000010,00000000) ret=0100699d
Breakpoint 1, 0x7ffd85f8 in BCryptHashData () from /lib/wine/bcrypt.dll.so
Wine-gdb> 

Great, so this is the secret namespace. The first parameter is the hash context, the second is the data address, the third is the length (0x10, i.e. 16 bytes, exactly the size of a raw UUID) and the fourth is the flags — so let's print out the data so we can see what it is:

Wine-gdb> x/16xb 0x0033fcf4
0x33fcf4:	0x70	0xff	0xd8	0x12	0x4c	0x7f	0x4c	0x7d
0x33fcfc:	0x00	0x00	0x00	0x00	0x00	0x00	0x00	0x00

Using either the uuid module in Python, or uuid_unparse in libuuid, we can format the namespace as 70ffd812-4c7f-4c7d-0000-000000000000 — now this doesn't look like a randomly generated UUID to me! Onto the next thing, the encoding and joining policy:

Wine-gdb> c
005f:Call bcrypt.BCryptHashData(0011bb90,00341458,0000005a,00000000) ret=010069b3
Breakpoint 1, 0x7ffd85f8 in BCryptHashData () from /lib/wine/bcrypt.dll.so
Wine-gdb> x/90xb 0x00341458
0x341458:	0x54	0x00	0x6f	0x00	0x20	0x00	0x62	0x00
0x341460:	0x65	0x00	0x20	0x00	0x66	0x00	0x69	0x00
0x341468:	0x6c	0x00	0x6c	0x00	0x65	0x00	0x64	0x00
0x341470:	0x20	0x00	0x62	0x00	0x79	0x00	0x20	0x00
0x341478:	0x4f	0x00	0x2e	0x00	0x45	0x00	0x2e	0x00
0x341480:	0x4d	0x00	0x2e	0x00	0x26	0x00	0x54	0x00
0x341488:	0x6f	0x00	0x20	0x00	0x62	0x00	0x65	0x00
0x341490:	0x20	0x00	0x66	0x00	0x69	0x00	0x6c	0x00
0x341498:	0x6c	0x00	0x65	0x00	0x64	0x00	0x20	0x00
0x3414a0:	0x62	0x00	0x79	0x00	0x20	0x00	0x4f	0x00
0x3414a8:	0x2e	0x00	0x45	0x00	0x2e	0x00	0x4d	0x00
0x3414b0:	0x2e	0x00
Wine-gdb> q

So there we go. The encoding looks like UTF-16LE (as expected; much of the Windows API is this way) and the joining character seems to be &.
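
Putting the pieces together, here is a rough sketch in Rust of what the whole computation appears to be. To be explicit about the assumptions: the third-party sha1 crate (RustCrypto flavour, providing the Digest trait) stands in for BCrypt, the namespace bytes are hashed exactly as captured above, and the fields are joined with & and hashed as UTF-16LE with no BOM or NUL terminator (the 0x5a length above is exactly 45 characters times two bytes):

// Sketch of the apparent algorithm, for illustration only. Assumes the
// third-party `sha1` crate (RustCrypto, providing the Digest trait); the
// real implementation is the C code in fwupd.
use sha1::{Digest, Sha1};

// The namespace bytes exactly as they were passed to BCryptHashData.
const NAMESPACE: [u8; 16] = [
    0x70, 0xff, 0xd8, 0x12, 0x4c, 0x7f, 0x4c, 0x7d,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
];

// e.g. hardware_id(&["LENOVO", "ThinkPad T440s", "20ARS19C0C"])
fn hardware_id(fields: &[&str]) -> String {
    // Join the SMBIOS fields with '&' and hash namespace + UTF-16LE string.
    let joined: String = fields.join("&");
    let mut hasher = Sha1::new();
    hasher.update(NAMESPACE);
    for unit in joined.encode_utf16() {
        hasher.update(unit.to_le_bytes()); // UTF-16LE, no BOM, no NUL
    }
    let digest = hasher.finalize();

    // Truncate to 16 bytes and stamp in the version and variant bits.
    let mut b = [0u8; 16];
    b.copy_from_slice(&digest[..16]);
    b[6] = (b[6] & 0x0f) | 0x50; // version 5 (name-based, SHA-1)
    b[8] = (b[8] & 0x3f) | 0x80; // RFC 4122 variant

    format!(
        "{{{:02x}{:02x}{:02x}{:02x}-{:02x}{:02x}-{:02x}{:02x}-{:02x}{:02x}-{:02x}{:02x}{:02x}{:02x}{:02x}{:02x}}}",
        b[0], b[1], b[2], b[3], b[4], b[5], b[6], b[7],
        b[8], b[9], b[10], b[11], b[12], b[13], b[14], b[15]
    )
}

Stamping the version and variant bits into the truncated digest is what makes the result a well-formed type-5 UUID; it is the same shape, with a 5 leading the third group, that shows up in every ID in the fwupd output below.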

I’ve written some code in fwupd so that this happens:

$ fwupdmgr hwids
Computer Information
--------------------
BiosVendor: LENOVO
BiosVersion: GJET75WW (2.25 )
Manufacturer: LENOVO
Family: ThinkPad T440s
ProductName: 20ARS19C0C
ProductSku: LENOVO_MT_20AR_BU_Think_FM_ThinkPad T440s
EnclosureKind: 10
BaseboardManufacturer: LENOVO
BaseboardProduct: 20ARS19C0C

Hardware IDs
------------
{c4159f74-3d2c-526f-b6d1-fe24a2fbc881}   <- Manufacturer + Family + ProductName + ProductSku + BiosVendor + BiosVersion + BiosMajorRelease + BiosMinorRelease
{ff66cb74-5f5d-5669-875a-8a8f97be22c1}   <- Manufacturer + Family + ProductName + BiosVendor + BiosVersion + BiosMajorRelease + BiosMinorRelease
{2e4dad4e-27a0-5de0-8e92-f395fc3fa5ba}   <- Manufacturer + ProductName + BiosVendor + BiosVersion + BiosMajorRelease + BiosMinorRelease
{3faec92a-3ae3-5744-be88-495e90a7d541}   <- Manufacturer + Family + ProductName + ProductSku + BaseboardManufacturer + BaseboardProduct
{660ccba8-1b78-5a33-80e6-9fb8354ee873}   <- Manufacturer + Family + ProductName + ProductSku
{8dc9b7c5-f5d5-5850-9ab3-bd6f0549d814}   <- Manufacturer + Family + ProductName
{178cd22d-ad9f-562d-ae0a-34009822cdbe}   <- Manufacturer + ProductSku + BaseboardManufacturer + BaseboardProduct
{da1da9b6-62f5-5f22-8aaa-14db7eeda2a4}   <- Manufacturer + ProductSku
{059eb22d-6dc7-59af-abd3-94bbe017f67c}   <- Manufacturer + ProductName + BaseboardManufacturer + BaseboardProduct
{0cf8618d-9eff-537c-9f35-46861406eb9c}   <- Manufacturer + ProductName
{f4275c1f-6130-5191-845c-3426247eb6a1}   <- Manufacturer + Family + BaseboardManufacturer + BaseboardProduct
{db73af4c-4612-50f7-b8a7-787cf4871847}   <- Manufacturer + Family
{5e820764-888e-529d-a6f9-dfd12bacb160}   <- Manufacturer + EnclosureKind
{f8e1de5f-b68c-5f52-9d1a-f1ba52f1f773}   <- Manufacturer + BaseboardManufacturer + BaseboardProduct
{6de5d951-d755-576b-bd09-c5cf66b27234}   <- Manufacturer

Which basically matches the output of ComputerHardwareIds.exe on the same hardware. If the kernel patch gets into the next release I’ll merge the fwupd branch to master and allow vendors to start using the Microsoft HardwareID GUID values.

Typing Greek letters

I'm taking a MOOC that includes equations involving Greek letters like epsilon. I'm taking notes online, in Emacs, using the iimage mode tricks for taking MOOC class notes in emacs that I worked out a few years back.

Iimage mode works fine for taking screenshots of the blackboard in the videos, but sometimes I'd prefer to just put the equations inline in my file. At first I was typing out things like E = epsilon * sigma * T^4 but that's silly, and of course the professor isn't spelling out the Greek letters like that when he writes the equations on the blackboard. There's got to be a way to type Greek letters on this US keyboard.

I know how to type things like accented characters using the "Multi key" or "Compose key". In /etc/default/keyboard I have XKBOPTIONS="ctrl:nocaps,compose:menu,terminate:ctrl_alt_bksp" which, among other things, sets the compose key to be my "Menu" key, which I never used otherwise. And there's a file, /usr/share/X11/locale/en_US.UTF-8/Compose, that includes all the built-in compose key sequences. I have a shell function in my .zshrc,

composekey() {
  grep -i $1 /usr/share/X11/locale/en_US.UTF-8/Compose
}
so I can type something like composekey epsilon and find out how to type specific codes. But that didn't work so well for Greek letters. It turns out this is how you type them:
<dead_greek> <A>            : "Α"   U0391    # GREEK CAPITAL LETTER ALPHA
<dead_greek> <a>            : "α"   U03B1    # GREEK SMALL LETTER ALPHA
<dead_greek> <B>            : "Β"   U0392    # GREEK CAPITAL LETTER BETA
<dead_greek> <b>            : "β"   U03B2    # GREEK SMALL LETTER BETA
<dead_greek> <D>            : "Δ"   U0394    # GREEK CAPITAL LETTER DELTA
<dead_greek> <d>            : "δ"   U03B4    # GREEK SMALL LETTER DELTA
<dead_greek> <E>            : "Ε"   U0395    # GREEK CAPITAL LETTER EPSILON
<dead_greek> <e>            : "ε"   U03B5    # GREEK SMALL LETTER EPSILON
... and so forth. And this <dead_greek> key isn't actually defined in most US/English keyboard layouts: you can check whether it's defined for you with: xmodmap -pke | grep dead_greek

Of course you can use xmodmap to define a key to be <dead_greek>. I stared at my keyboard for a bit, and decided that, considering how seldom I actually need to type Greek characters, I didn't see the point of losing a key for that purpose (though if you want to, here's a thread on how to map <dead_greek> with xmodmap).
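
(If you did want to dedicate a key, it would be a one-liner, something like xmodmap -e 'keycode 108 = dead_greek'; keycode 108 is often the right Alt key, but check the keycode you want with xev first.)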

I decided it would make much more sense to map it to the compose key with a prefix, like 'g', that I don't need otherwise. I can do that in ~/.XCompose like this:

<Multi_key> <g> <A>            : "Α"   U0391    # GREEK CAPITAL LETTER ALPHA
<Multi_key> <g> <a>            : "α"   U03B1    # GREEK SMALL LETTER ALPHA
<Multi_key> <g> <B>            : "Β"   U0392    # GREEK CAPITAL LETTER BETA
<Multi_key> <g> <b>            : "β"   U03B2    # GREEK SMALL LETTER BETA
<Multi_key> <g> <D>            : "Δ"   U0394    # GREEK CAPITAL LETTER DELTA
<Multi_key> <g> <d>            : "δ"   U03B4    # GREEK SMALL LETTER DELTA
<Multi_key> <g> <E>            : "Ε"   U0395    # GREEK CAPITAL LETTER EPSILON
<Multi_key> <g> <e>            : "ε"   U03B5    # GREEK SMALL LETTER EPSILON
... and so forth.

And now I can type [MENU] g e and a lovely ε appears, at least in any app that supports Greek fonts, which is most of them nowadays.

April 24, 2017

Of humans and feelings

It was a Wednesday morning. I just connected to email, to realise that something was wrong with the developer web site. People had been having issues accessing content, and they were upset. What started with “what’s wrong with Trac?” quickly escalated to “this is just one more symptom of how The Company doesn’t care about us community members”.

As I investigated the problem, I realised something horrible. It was all my fault.

I had made a settings change in the Trac instance the night before – attempting to impose some reason and structure on ACLs that had grown organically over time – and had accidentally removed the access of a group containing a number of community members who did not work for The Company.

Oh, crap.

After the panic and cold sweats died down, I felt myself getting angry. These were people who knew me, who I had worked alongside for months, and yet the first reaction for at least a few of them was not to assume this was an honest mistake. It was to go straight to conspiracy theory. This was conscious, deliberate, and nefarious. We may not understand why it was done, but it’s obviously bad, and reflects the disdain of The Company.

Had I not done enough to earn people’s trust?

So I fixed the problem, and walked away. “Don’t respond in anger”, I told myself. I got a cup of coffee, talked about it with someone else, and came back 5 minutes later.

“Look at it from their side”, I said – before I started working with The Company, there had been a strained relationship with the community. Yes, they knew Dave Neary wouldn’t screw them over, but they had no way of knowing that it was Dave Neary’s mistake. I stopped taking it personally. There is deep-seated mistrust, and that takes time to heal, I said to myself.

Yet, how to respond on the mailing list thread? “We apologise for the oversight, blah blah blah” would be interpreted as “of course they fixed it, after they were caught”. But did I really want to put myself out there and admit I had made what was a pretty rookie mistake? Wouldn’t that undermine my credibility?

In the end, I bit the bullet. “I did some long-overdue maintenance on our Trac ACLs yesterday, they’re much cleaner and easier to maintain now that we’ve moved to more clearly defined roles. Unfortunately, I did not test the changes well enough before pushing them live, and I temporarily removed access from all non-The Company employees. It’s fixed now. I messed up, and I am sorry. I will be more careful in the future.” All first person – no hiding behind the corporate identity, no “we stand together”, no sugar-coating.

What happened next surprised me. The most vocal critic in the thread responded immediately to apologise, and to thank me for the transparency and honesty. Within half an hour, a number of people were praising me and The Company for our handling of the incident. The air went out of the outrage balloon, and a potential disaster became a growth opportunity – yes, the people running the community infrastructure are human too, and there is no conspiracy. The Man was not out to get us.

I no longer work for The Company, and the team has scattered to the winds. But I never forgot those cold sweats, that feeling of vulnerability, and the elation that followed the community reaction to a heartfelt mea culpa.

Part of the OSS Communities series – difficult conversations. Contribute your stories and tag them on Twitter with #osscommunities to be included.

April 21, 2017

Comb Ridge and Cedar Mesa Trip

[House on Fire ruin, Mule Canyon UT] Last week, my hiking group had its annual trip, which this year was in Bluff, Utah, near Comb Ridge and Cedar Mesa, an area particularly known for its Anasazi ruins and petroglyphs.

(I'm aware that "Anasazi" is considered a politically incorrect term these days, though it still seems to be in common use in Utah; it isn't in New Mexico. My view is that I can understand why Pueblo people dislike hearing their ancestors referred to by a term that means something like "ancient enemies" in Navajo; but if they want everyone to switch from using a mellifluous and easy to pronounce word like "Anasazi", they ought to come up with a better, and shorter, replacement than "Ancestral Puebloans." I mean, really.)

The photo at right is probably the most photogenic of the ruins I saw. It's in Mule Canyon, on Cedar Mesa, and it's called "House on Fire" because of the colors in the rock when the light is right.

The light was not right when we encountered it, in late morning around 10 am; but fortunately, we were doing an out-and-back hike. Someone in our group had said that the best light came when sunlight reflected off the red rock below the ruin up onto the rock above it, an effect I've seen in other places, most notably Bryce Canyon, where the hoodoos look positively radiant when seen backlit, because that's when the most reflected light adds to the reds and oranges in the rock.

Sure enough, when we got back to House on Fire at 1:30 pm, the light was much better. It wasn't completely obvious to the eye, but comparing the photos afterward, the difference is impressive: Changing light on House on Fire Ruin.

[Brain man? petroglyph at Sand Island] The weather was almost perfect for our trip, except for one overly hot afternoon on Wednesday. And the hikes were fairly perfect, too -- fantastic ruins you can see up close, huge petroglyph panels with hundreds of different creatures and patterns (and some that could only have been science fiction, like brain-man at left), sweeping views of canyons and slickrock, and the geology of Comb Ridge and the Monument Upwarp.

And in case you read my last article, on translucent windows, and are wondering how those generated waypoints worked: they were terrific, and in some cases made the difference between finding a ruin and wandering lost on the slickrock. I wish I'd had that years ago.

Most of what I have to say about the trip is already in the comments to the photos, so I'll just link to the photo page:

Photos: Bluff trip, 2017.

April 19, 2017

Visual Effects for The Man in the High Castle

About Barnstorm

Barnstorm VFX embodies the diverse skills, freewheeling spirit, and daredevil attitude of the early days of stunt-plane piloting. Nominated for a VES award for their outstanding work on the TV series “The Man in the High Castle”, they have been using Blender as an integral part of their pipeline.

The following text is an edited version of answers from a Reddit AMA held by the heads of the team (Lawson Deming and Cory Jamieson) on February 3, 2017.

Getting into Blender

We’ve experimented with a variety of programs over the years, but for 3D work, we settled on using Blender starting about 3 years ago. It’s very unusual for VFX houses (at least in the US) to use Blender (as opposed to, say, Maya), but there are a number of great features that caused us to switch over to it. One of them was the Cycles render engine, which we’ve used for rendering most of the 3D elements in High Castle and other shows. In order to deal with the huge rendering needs of High Castle, we set up cloud rendering on Amazon’s AWS servers through Deadline, which allowed us to have as many as 150 machines working at a time to render some of the big sequences.

In addition to Blender, we occasionally use other 3D programs, including Houdini for particle systems, fire, etc. Our texturing and material work is done in Substance Painter, and compositing is done in Nuke and After Effects.

The original decision to use Blender actually didn’t have anything to do with the cost (though it’s certainly helpful now that we have more people using it). We were already using Nuke and NukeX as a company (which are pretty expensive software packages) and had been using Maya for about a year. Before that, Lightwave was what we used.

Assembling a team

The real turning point came when we had to pull together a small team of freelancers to do a sequence. The process went a little bit like this:

1) We hire a 3D artist to start modeling for us. He’s an experienced modeler but his background is in a studio environment where there are a lot of departments and a pretty hefty pipeline to help deal with everything. He’s nominally a Maya guy, but the studio he was at had their own custom modeling software which he’s more familiar with, so even though he’s working in Maya, it’s not his first choice.

2) The modeling guy only does modeling, so we need to bring in a texture artist. She doesn’t actually use Maya for UV work or texturing. Instead she uses Mari (a Foundry product). She and the Modeler have some issues making the texturing work back and forth between Mari and Maya because they aren’t used to being outside of a studio pipeline that takes care of everything for them.

3) Since neither of the above are experienced in layout or rendering, we hire a third guy to do the setup of the scene. He is a Maya guy as well, but once he starts working, he says “oh, you guys don’t have V-Ray? I can get by in Mental Ray (Maya’s renderer at the time) but I prefer V-Ray.” We spend a ton of time trying to work around Mental Ray’s idiosyncrasies, including weird behavior with the HDR lighting and major gamma issues with the textures.

4) We need to do some particle simulation work and smoke and create some water in the same scene… Guess who uses Maya to do these things? No one, apparently. Water and particles are Houdini in this case. Smoke is FumeFX (which at the time only existed as a 3DStudio Max plugin and had no Maya version).

So, pop quiz. What is Maya doing for us in this instance? We’ve got a modeler who is begrudgingly using it but prefers other modeling software, a texture artist who isn’t using it at all, a layout/lighter who would rather be using a third-party rendering engine, and the prospect of doing SFX that would require multiple additional third-party software packages totaling thousands of dollars. At the time we were attempting this, the core team of our company was just 5 people, of which I was the only one who regularly did 3D work (in Lightwave).


I consider myself a generalist and had been puttering along in Maya, but I found it very obtuse and difficult to approach from a generalist standpoint. I’d just started dabbling in Blender and found it very approachable and easy to use, with a lot of support and tutorials out there. At the same time our three freelancers were struggling with the above sequence, I managed to build and render another shot from the scene fully in Blender (a program that I was a novice in at the time), utilizing its internal smoke simulation tools and the ocean simulation toolkit (which is actually a port of the one in Houdini) to do SFX on my own, and I got a great looking render out of Cycles.

Blender has its weaknesses, and as a general 3D package, it’s not the best in any one area, but neither is Maya. Any specialty task will always be better in another program. But without a pre-existing Maya pipeline, and with the fact that Maya’s structure encourages the use of many specialists collaborating on a single task (rather than one well-rounded generalist working solo) it didn’t make sense to dump a lot of resources and money into making Maya work for such a small studio.

I ended up falling in love with working in Blender, and as we brought on and trained some other 3D artists, I encouraged them to use it. Eventually we found ourselves a Blender studio. That advantage of being good for a generalist, though, has also been a weakness as we’ve grown as a company, because it’s hard to find people who are really amazing artists in Blender. Our solution up until now has been to work hard on finding good Blender artists and to try and train others who want to learn.

Blender in production

Also, since Blender acts as a hub for VFX work, it’s still possible for specialists to contribute from their respective programs. Initial modeling, for example, can be done in almost any program. It can be difficult, but the more people from other VFX studios I talk to, the more I realize that everybody’s pipeline is pretty messy, and even the studios who are fully behind Maya use a ton of other software and have a lot of custom scripts and techniques to get everything working the way they want it to.


We use Blender for modeling, animation, and rendering. Our partners at Theory Animation have focused a lot on how to make Blender better for animation (they all came from a Maya background as well but fell in love with Blender the same way I did). We’ve used Blender’s fluid system and particle system (though both of these need work) and render everything in Cycles. We still use Houdini for the stuff that it’s good at. We used Massive to create character animations for “The Man in the High Castle”. We also started using Substance Painter and Substance Designer for texture work. Cycles is good at exporting render layers, which we composited mostly in Nuke.

One of the big hurdles that Blender has to overcome is the fact that its licensing rules can make it difficult legally for it to interact with paid software. Most companies want to keep their code closed, so the open-source nature of Blender has made it tricky to, for example, get a Substance Designer plugin. It’s something we’re working on though.

When collaborating with other companies, we usually separate the 3D and compositing aspects of the work to keep the software issues from being a problem. It’s getting easier every day, though, especially now that Blender is starting to support Alembic. For season one, the sequence we worked on was completely separate and turnkey, so we didn’t have any issues sharing assets. For season 2, however, we did need to do a lot of conversion and re-modeling of elements. Also, many of the models we received were textured using UDIMs, which Blender does not currently support. It would be great for Blender to eventually adopt the UDIM workflow for texturing.

We do get a lot of raised eyebrows from people when we tell them we use Blender professionally. Hopefully the popularity of the show (and the fact that we’ve been nominated for some VFX awards) will help remove some of the stigma that Blender has developed over the years. It’s a great program.


We’ve developed a number of in-house solutions for Blender. We use Blender solely for 3D and NukeX for tracking and compositing, but we hand camera data back and forth between Nuke and Blender using .chan files (that’s technically built into Blender, but we’ve developed a system to make it a bit easier). Fitting Blender into a compositing pipeline (Nuke, EXR workflow) is surprisingly easy. Render-layer output and the ease of setting up Blender have made it pretty fast to pass assets around between artists and vendors. We also have a custom procedure and PBR shader setup for working with materials out of Substance Painter in Blender. A mix of Shotgun, our own asset tracking, and a workflow based on Blender linking with a handful of add-ons is needed to make sure everything works.

Production Design

We worked really hard to make it feel correct. You can also thank the Production Designer, Andrew Boughton, who designed the practical sets in the show. He has a lot of architectural knowledge and was very collaborative with us to help make sure our designs matched the feel of the rest of the stuff in the show.

Our visual bible for Germania was a book called “Albert Speer: Architecture 1932-1942”. There were extensive and detailed plans for the transformation of Berlin, including blueprints for buildings like the Volkshalle. We did take some creative liberties with the arrangement and positioning of buildings for the sake of the narrative and to better coordinate with the production designer’s aesthetic of the sets. We looked at old film reels including the famous “Triumph of the Will” for references of how Nazi rallies were organized. One video game that I remember paying attention to was “Wolfenstein: The New Order”, because it presents a world that was taken over by the Nazis, though its presentation of post-war Berlin (including the Volkshalle) was much more futuristic and sci-fi-ish than what we went for. Our goal in MITHC was to create a sense of a world that felt fairly mundane and grounded in reality. The more it felt like something that could really happen, the more effective the message of the show.

April 18, 2017

Flock Cod Logo Ideas

Flock logo (plain)

Ryan Lerch put together an initial cut at a Flock 2017 logo and website design (flock2017-WIP branch of fedora-websites). It was an initial cut he offered for me to play with; in trying to work on some logistics for Flock to make sure things happen on time, I felt that locking in on a final logo design would be helpful at this point.

Here is the initial cut of the top part of the website with the first draft logo:

Overall, this is very Cape Cod. Ryan created a beautiful piece of work in the landscape illustration and the overall palette. Honestly, this would work fine as-is, but there were a few points of critique for the logo specifically that I decided to explore –

  • There weren’t any standard Fedora fonts in it; I thought at least the date text could be in one of the standard Fedora fonts to tie back to Fedora / Flock. The standard ‘Flock’ logotype wasn’t used either; generally we try to keep stuff with logotypes in that logotype (if anything, so it seems more official.)
  • The color palette is excellent and evocative of Cape Cod, but maybe some Fedora accent colors could tie it into the broader Fedora look and feel and make it seem more like part of the family.
  • The hierarchy of the logo is very “Cape Cod”-centric and my gut told me that “Flock” should be primary and “Cape Cod” should be subordinate to that.
  • Some helpful nautically-experienced folks in the broader community (particularly Pat David) pointed out the knot wasn’t tied quite correctly.

So here were the first couple of iterations I played with (B,C) based on Ryan’s design (A), but trying to take into account the critique / ideas above, with an illustration I created of the Lewis Bay lighthouse (the closest to the conference site):

I posted this to Twitter and Mastodon, and got a ton of very useful feedback. The main points I took away:

  • The seagulls probably complicate things too much – ditch ’em.
  • The Fedora logo was liked.
  • There seemed to be a preference for having the full dates for the conference in the logo.
  • The lighthouse beams in C were sloppily / badly aligned… 🙂 I knew this and was lazy and posted it anyway.
  • Some folks liked the dark blue ones because it was a Fedora color, some folks felt A’s color palette was more “Cape Cod” like.
  • At least a couple folks felt C was reminiscent of a nuclear symbol.
  • The simplicity / cleanness of A was admired.

So here’s the next round; things I tried:

  • Took a position on the hierarchy, placing ‘Flock’ above ‘Cape Cod’ in the logo.
  • Standardized all non-Flock logotype fonts on Montserrat, which is a standard Fedora brand font.
  • Shifted to original color palette from A.
  • Properly aligned the lighthouse lights.
  • Added full dates to every mockup.
  • Corrected knot tie.

One more round, based on further helpful Mastodon feedback. You can see some play with fonts and mashing up elements from other iterations together based on folks’ suggestions:

I have a few favorites. Maybe you do too. I’m not sure which to go with yet – I have been staring at these too long for today. I did some quick mockups of how they look on the website:

I’ll probably sit on this and come back with fresh eyes later. I’m happy for any / all feedback in the comments here!

3 things community managers can learn from the 50 state strategy

This is part of the opensource.com community blogging challenge: Maintaining Existing Community.

There are a lot of parallels between the world of politics and open source development. Open source community members can learn a lot about how political parties cultivate grass-roots support and local organizations, and empower those local organizations to keep people engaged. Between 2005 and 2009, Howard Dean was the chairman of the Democratic National Committee in the United States, and instituted what was known as the “50 state strategy” to grow the Democratic grass roots. That strategy, and what happened after it was changed, can teach community managers some valuable lessons about keeping community contributors. Here are three lessons community managers can learn from it.

Growing grass roots movements takes effort

The 50 state strategy meant allocating scarce resources across parts of the country where there was little or no hope of electing a congressman, as well as spending some resources in areas where there was no credible opposition. Every state and electoral district had some support from the national organization. Dean himself travelled to every state, and identified and empowered young, enthusiastic activists to lead local organizations. This was a lot of work, and many senior Democrats did not agree with the strategy, arguing that it was more important to focus effort on the limited number of races where the resources could make a difference between winning and losing (swing seats). Similarly, for community managers, we have a limited number of hours in the day, and investing in outreach in areas where we do not have a big community already takes attention away from keeping our current users happy. But growing the community, and keeping community members engaged, means spending time in places where the short-term return on that investment is not clear. Identifying passionate community users and empowering them to create local user groups, or to man a stand at a small local conference, or to speak at a local meet-up helps keep them engaged and feeling like part of a greater community, and it also helps grow the community for the future.

Local groups mean you are part of the conversation

Because of the 50 state strategy, every political conversation in the USA had Democratic voices expressing their world-view. Every town hall meeting, local election, and teatime conversation had someone who could argue and defend the Democratic viewpoint on issues of local and national importance. This means that people were aware of what the party stood for, even in regions where that was not a popular platform. It also meant that there was an opportunity to get a feel for how national platform messaging was being received on the ground. And local groups would take that national platform and “adjust” it for a local audience – emphasizing things which were beneficial to the local community. Open source projects also benefit from having a local community presence, by raising awareness of your project to free software enthusiasts who hear about it at conferences and meet-ups. You also have an opportunity to improve your project, by getting feedback from users on their learning curve in adopting and using it. And you have an increasing number of people who can help you understand what messaging resonates with people, and which arguments for adoption are damp squibs which do not get traction, helping you promote your project more effectively.

Regular contact maintains engagement

After Howard Dean finished his term as head of the DNC in 2009, and Debbie Wasserman Schultz later took over as the DNC chair, the 50 state strategy was abandoned in favour of a more strategic and focussed investment of effort in swing states. While there are many possible reasons that can be put forward, it is undeniable that the local Democratic party structures which flourished under Dean have lost traction. The Democratic party has lost hundreds of state legislature seats, dozens of state senate seats, and a number of governorships in “red” states since 2009, in spite of winning the presidency in 2012. The Democrats have lost control of the House and the Senate nationally, in spite of winning the popular vote in 2016 and 2012. For community managers, it is equally important to maintain contact with local user groups and community members, to ensure they feel empowered to act for the community, and to give them the resources they need to be successful. In the absence of regular maintenance, community members are less inclined to volunteer their time to promote the project and maintain a local community.

Summary

Growing local user groups and communities is a lot of work, but it can be very rewarding. Maintaining regular contact, empowering new community members to start a meet-up or a user group in their area, and creating resources for your local community members to speak about and promote your project is a great way to grow the community, and also to make life-long friends. Political organizations have a long history of organizing people to buy into a broader vision and support and promote it in their local communities.

What other lessons can community managers and organizers learn from political organizations?


April 14, 2017

Profiling Haskell: Don’t chase the red herring

I’m currently working on a small Haskell tool which helps me minimize the waiting time for catching a train into the city (or out). One feature I’ve implemented recently is an automated import of approx. 25MB of compressed CSV data into an SQLite3 database, which was very slow in the beginning. Not chasing the first entry in the profiling output helped me optimize the implementation for a swift import.

Background

The data comes as a 25MB zip archive of text files in a CSV format. Fully imported, the SQLite database grows to about 800 MiB. My work-in-progress solution was a cruddy shell + SQL script which imports the CSV files into an SQLite database. With this solution, the import takes about 30 seconds, excluding the time you need to manually download the zip file. But it is not very portable, and I wanted a more user-friendly solution.

The initial Haskell implementation using mostly the esqueleto and persistent DSL functions showed an abysmal performance. I had to stop the process after half an hour.

Finding the culprit

A first profiling pass showed this result summary:

COST CENTRE          MODULE                         %time %alloc

stepError            Database.Sqlite                 77.2    0.0
concat.ts'           Data.Text                        1.8   14.5
compareText.go       Data.Text                        1.4    0.0
concat.go.step       Data.Text                        1.0    8.2
concat               Data.Text                        0.9    1.4
concat.len           Data.Text                        0.8   13.9
sumP.go              Data.Text                        0.8    2.1
concat.go            Data.Text                        0.7    2.6
singleton_           Data.Text.Show                   0.6    4.0
run                  Data.Text.Array                  0.5    3.1
escape               Database.Persist.Sqlite          0.5    7.8
>>=.\                Data.Attoparsec.Internal.Types   0.5    1.4
singleton_.x         Data.Text.Show                   0.4    2.9
parseField           CSV.StopTime                     0.4    1.6
toNamedRecord        Data.Csv.Types                   0.3    1.2
fmap.\.ks'           Data.Csv.Conversion              0.3    2.9
insertSql'.ins       Database.Persist.Sqlite          0.2    1.4
compareText.go.(...) Data.Text                        0.1    4.3
compareText.go.(...) Data.Text                        0.1    4.3

Naturally I checked the implementation of the first function, since that seemed to have the largest impact. It is a simple foreign function call to C. Fraser Tweedale made me aware that there is no more speed to gain here, since it is already calling a C function. With that in mind I had to focus on the next entries instead, and it turned out that is where I gained most of the speed, bringing the import down to something competitive with the crude SQL script while being more user-friendly.

It turned out that persistent primarily uses Data.Text concatenation to create the SQL statements. Doing that for every insert statement is very costly, since it prepares, binds values and executes the statement for each insert (for reference see this Stack Overflow answer).

The solution

My current solution is to prepare the statement once and only bind the values for each insert.
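
Sketched in Rust with the rusqlite crate (an illustration of the pattern only; the real code is Haskell on top of persistent, and the table and column names here are made up), the shape of the fix is:

// Sketch of the prepare-once / bind-per-row pattern, using the rusqlite
// crate. The real code is Haskell on top of persistent; the schema here
// is invented for illustration.
use rusqlite::{params, Connection, Result};

fn import(rows: &[(String, String, i64)]) -> Result<()> {
    let mut conn = Connection::open("transit.db")?;
    // One transaction for the whole bulk import avoids a sync per row.
    let tx = conn.transaction()?;
    {
        // Prepare the INSERT statement once...
        let mut stmt = tx.prepare(
            "INSERT INTO stop_times (trip_id, stop_id, departure) VALUES (?1, ?2, ?3)",
        )?;
        // ...then only bind values and execute for each row, instead of
        // concatenating and re-preparing a fresh SQL string every time.
        for (trip, stop, departure) in rows {
            stmt.execute(params![trip, stop, departure])?;
        }
    } // the statement is dropped here, before the commit
    tx.commit()
}

Batching the inserts in a single transaction, as sketched, also avoids a sync to disk per row; whether persistent does that for you depends on how the inserts are run.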

Having done another benchmark, the import time now comes down to approximately a minute on my Thinkpad X1 Carbon.


April 13, 2017

Mailing list for fwupd and the LVFS

I’ve created a mailing list for fwupd and LVFS discussions. If you’re interested in firmware updating on Linux, or want to know what’s happening on the Linux Vendor Firmware Service, you probably want to join. There are a few interesting things I’ll post in a few days.

What’s up with Graupner’s screen design?

Graupner

Graupner is a remote control model equipment company, originally founded in 1930 in Germany. It went bankrupt in 2012 and was taken over by the South Korean manufacturer SJ Ltd one year later. Graupner now continues as a brand and sales organization. A large part of the product palette is stick-type transmitters for RC aircraft.

X-8E RC transmitter

The X-8E pistol-style transmitter for surface vehicles was first announced in 2013, but delayed until 2016. I can only assume this was caused by the change of ownership and restructuring. Given the rich feature-set and a price point of € 469.99 in their own shop, it is clearly in the high-end category and must face comparisons to Futaba 4PX, Hitec Lynx 4S, KO Propo EX1, Sanwa M12s and Spektrum DX6R.

However, the screen design seems incredibly rushed and not at all befitting a flagship model in its category. Let’s have a look at the dashboard screen, which should be visible fairly often.

Import

I imported the dashboard screen from the PDF manual into Inkscape and scaled it to match the resolution of 320 × 480 pixels, with a little tweaking to have the raster image icons at their original 40 × 40 and lined up on the pixel grid. A photo of the real thing and the result of these first steps:

graupner_x-8e_screen_layout_as_is

As you can see, the layout is all over the place. At least the varying corner radii seem to appear only in the PDF.

Quick and easy improvements

graupner_x-8e_screen_layout_changes

A: In the second row, I removed TX, 2x RX and 4.8V as they are absent in the photo, though their visibility seems to be conditional. There’s space left for them, anyway.

B: Making things line up within a grid. A table section with left aligned labels and units (%) in their own row.

C: Vertical steering and throttle meters (ST and TH) are aligned with the physical controls (wheel and trigger), but steering is better shown on a left-right axis and having the same orientation for all 4 channel meters tames the layout. Graupner is already written above the screen; the space can be put to much better use.

There are several deeper issues I did not touch:

  • Lack of differentiation between pure indicators, toggles and menu buttons.
  • Questionable icons, especially the two in the third row.
  • Just white outlines for some elements, where filled backgrounds would make them more defined.
  • Lacking and bad labelling, with unnecessary abbreviations. The O.TIME in the bottom left is explained as model use time in the manual.


April 10, 2017

Encouraging new community members

My friend and colleague Stormy Peters just launched a challenge to the community – to blog on a specific community related topic before the end of the week. This week, the topic is “Encouraging new contributors”.

I have written about the topic of encouraging new contributors in the past, as have many others. So this week, I am kind of cheating, and collecting some of the “Greatest Hits”, articles I have written, or which others have written, which struck a chord on this topic.

Some of my own blog posts I have particular affection for on the topic are:

I also have a few go-to articles I return to often, for the clarity of their ideas, and for their general usefulness:

  • “Open Source Community, Simplified” by Max Kanat-Alexander, does a great job of communicating the core values of communities which are successful at recruiting new contributors. I particularly like his mantra at the end: “be really, abnormally, really, really kind, and don’t be mean”. That about sums it up…
  • “Building Belonging”, by Jono Bacon: I love Jono’s ability to weave a narrative from personal stories, and the mental image of an 18 year old kid knocking on a stranger’s door and instantly feeling like he was with “his people” is great. This is a key concept of community for me – creating a sense of “us” where newcomers feel like part of a greater whole. Communities who fail to create a sense of belonging leave their engaged users on the outside, where there is a community of “core developers” and those outside. Communities who suck people in and indoctrinate them by force-feeding them kool-aid are successful at growing their communities.
  • I love all of “Producing Open Source Software”, but in the context of this topic, I particularly love the sentiment in the “Managing Participants” chapter: “Each interaction with a user is an opportunity to get a new participant. When a user takes the time to post to one of the project’s mailing lists, or to file a bug report, she has already tagged herself as having more potential for involvement than most users (from whom the project will never hear at all). Follow up on that potential.”

To close, one thing I think is particularly important when you are managing a team of professional developers who work together is to ensure that they understand that they are part of a team that extends beyond their walls. I have written about this before as the “water cooler” anti-pattern. To extend on what is written there, it is not enough to have a policy against internal discussion and decisions – creating a sense of community, with face to face time and with quality engagements with community members outside the company walls, can help a team member really feel like they are part of a community in addition to being a member of a development team in a company.


Interview with Marcos Ebrahim

Could you tell us something about yourself?

My name is Marcos Ebrahim. I’m an Egyptian artist and illustrator specialized in children’s book art, with 5 years’ experience working on children’s animation episodes as a computer graphics artist. I have just finished my first whole book as a children’s illustrator, on a freelance basis, which will be on the market on Amazon soon. I’m also working on my own children’s book project as author and illustrator.

What genre(s) do you work in?

Children’s illustrations and concept art in general or children’s book art specifically.

Do you paint professionally, as a hobby artist, or both?

Because I’m not a member of any children’s illustration agencies, associations or publishing houses yet, I’m now doing this work on a small scale as a freelancer. I changed careers to be an illustrator a few months ago, so I can’t call myself a professional yet. I hope to achieve that soon.

Whose work inspires you most — who are your role models as an artist?

Nathan Fowkes’ works and illustrations. I found this illustrator and concept artist on a website that called him “the master of value and colour”. I was lucky enough to study online (an art program by Schoolism) under his supervision and learn a lot. However, I’m not a brilliant student 🙂

Other great illustrators and artists I like: Goro Fujita, Marco Bucci, Patrice Barton, Will Terry, Lynne Chapman, John Manders and many others. I always look forward to seeing their art works to learn from them.

How and when did you get to try digital painting for the first time?

About three years ago, when I was trying to use my new Wacom Intuos tablet for painting and drawing, practicing studies from the great Renaissance masters as fan art.

What makes you choose digital over traditional painting?

Let me describe it like this: “The Undo-Time Machine — the great digital button”. Beside the ability to make changes in illustrations easily, I found its benefit when I tried to work with authors and they asked me to make changes that I couldn’t have made in traditional painting without redoing the illustration from scratch.

How did you find out about Krita?

I used to surf YouTube viewing illustrations and watching artists demo their work. Then I heard about Krita as open-source art software. So I decided to search more, and found out that the illustrations made with it could be similar to my work, so I should devote some of my time to learning more about it and trying it. Then I searched out more learning videos on YouTube. Frankly, the most impressive and helpful one was a long video tutorial by the art champion of the Krita community, David Revoy, making a whole comic page from scratch using Krita. He showed the whole illustration process, as well as the brushes and tools he provides for others to use (thank you very much!).

What was your first impression?

I think the Krita program has a user-friendly interface and tools that become more familiar when I configure the shortcuts similarly to most other popular art programs. This makes it easier to work without the need to learn many things in a short time.

What do you love about Krita?

I think the most wonderful thing is the brush sets and the way they look like real-world tools. In addition, I like some other tools, such as the transformation tool (perspective) and the pop-up tool.

Also I can say that working with Krita is the first time that I can work on one of my previous sketches and achieve a good result (according to my current art skills) that I’m happy with.

What do you think needs improvement in Krita? Is there anything that really annoys you?

As I mentioned to the Krita team, there are some issues that we could call bugs. However, I know that Krita is in development and the great Krita team makes things better from one version to another and adds new features all the time. Thanks to them, and I hope they will continue their great work!

What sets Krita apart from the other tools that you use?

I think that Krita, as open-source art software, could soon compete with commercial art software if it continues on this path (fixing bugs and adding new features).

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Frankly, I’ve only recently started to use Krita and I’ve finished only two pictures. One I could call an illustration, the other is the background of my art blog, trying to use the wrap tool to make a tiled pattern. You can see this in the screenshots.

What techniques and brushes did you use in it?

I prefer to make a sketch, then work on it adding base colors and adjusting
the lighting value to reach the final details.

Where can people see more of your work?

I add new works to my art blog once or twice every month or more
often, depending on the time I have available and whether I make anything
new.

http://marcosebrahimart.tumblr.com

And also here:
https://www.behance.net/MarcosEbrahim
http://marcosebrahim.deviantart.com/gallery/
https://www.artstation.com/artist/marcosebrahim

Anything else you’d like to share?

All the best wishes to the great Krita team for continuing success in their
work in developing Krita open source software for all artists and all people.

April 07, 2017

Krita 3.1.3 beta 1 released

A week after the alpha release, we present the beta release for Krita 3.1.3. Krita 3.1.3 will be a stable bugfix release, 4.0 will have the vector work and the python scripting. The final release of 3.1.3 is planned for end of April.

We’re still working on fixing more bugs for the final 3.1.3 release, so please test these builds, and if you find an issue, check whether it’s already in the bug tracker, and if not, report it!

Things fixed in this release, compared to 3.1.3-alpha:

  • Added the credits for the 2016 Kickstarter backers to the About Krita dialog
  • Use the name of the filter when creating a filter mask from the filter dialog instead of “effect”
  • Don’t cover startup dialogs (for instance, for the pdf import filter) with the splash screen
  • Fix a race condition that made a transform mask with a liquify transformation unreliable
  • Fix canvas blackouts when using the liquify tool at a high zoom level
  • Fix loading the playback cache
  • Use the native color selector on OSX: Krita’s custom color selector cannot pick screen colors on OSX
  • Set the default PNG compression to 3 instead of 9: this makes saving PNGs much faster and the resulting size is the same
  • Fix a crash when pressing the V shortcut to draw straight lines
  • Fix a warning when the installation is incomplete that still mentioned Calligra
  • Make dragging the guides with a tablet work correctly

Note for Windows Users

We are still struggling with Intel’s GPU drivers; recent Windows updates seem to have broken Krita’s OpenGL canvas on some systems, and since we don’t have access to a broken system, we cannot work around the issue. For now, if you are affected, you have to disable OpenGL in krita/settings/configure Krita/display.

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

A snap image for the Ubuntu App Store is also available. You can also use the Krita Lime PPA to install Krita 3.1.3-beta.1 on Ubuntu and derivatives.

OSX

Source code

md5sums

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here.

April 06, 2017

Clicking through a translucent window: using X11 input shapes

It happened again: someone sent me a JPEG file with an image of a topo map, with a hiking trail and interesting stopping points drawn on it. Better than nothing. But what I really want on a hike is GPX waypoints that I can load into OsmAnd, so I can see whether I'm still on the trail and how to get to each point from where I am now.

My PyTopo program lets you view the coordinates of any point, so you can make a waypoint from that. But for adding lots of waypoints, that's too much work, so I added an "Add Waypoint" context menu item -- that was easy, took maybe twenty minutes. PyTopo already had the ability to save its existing tracks and waypoints as a GPX file, so no problem there.

[transparent image viewer overlayed on top of topo map] But how do you locate the waypoints you want? You can do it the hard way: show the JPEG in one window, PyTopo in the other, and do the "let's see the road bends left then right, and the point is off to the northwest just above the right bend and about two and a half times as far away as the distance through both road bends". Ugh. It takes forever and it's terribly inaccurate.

More than once, I've wished for a way to put up a translucent image overlay that would let me click through it. So I could see the image, line it up with the map in PyTopo (resizing as needed), then click exactly where I wanted waypoints.

I needed two features beyond what normal image viewers offer: translucency, and the ability to pass mouse clicks through to the window underneath.

A translucent image viewer, in Python

The first part, translucency, turned out to be trivial. In a class inheriting from my Python ImageViewerWindow, I just needed to add this line to the constructor:

    self.set_opacity(.5)

Plus one more step. The window was translucent now, but it didn't look translucent, because I'm running a simple window manager (Openbox) that doesn't have a compositor built in. Turns out you can run a compositor on top of Openbox. There are lots of compositors; the first one I found, which worked fine, was xcompmgr -c -t-6 -l-6 -o.1

The -c specifies client-side compositing. -t and -l specify top and left offsets for window shadows (negative so they go on the bottom right). -o.1 sets the opacity of window shadows. In the long run, -o0 is probably best (no shadows at all) since the shadow interferes a bit with seeing the window under the translucent one. But having a subtle .1 shadow was useful while I was debugging.

That's all I needed: voilà, translucent windows. Now on to the (much) harder part.

A click-through window, in C

X11 has something called the SHAPE extension, which I experimented with once before to make a silly program called moonroot. It's also used for the familiar "xeyes" program. It's used to make windows that aren't square, by passing a shape mask telling X what shape you want your window to be. In theory, I knew I could do something like make a mask where every other pixel was transparent, which would simulate a translucent image, and I'd at least be able to pass clicks through on half the pixels.

But fortunately, first I asked the estimable Openbox guru Mikael Magnusson, who tipped me off that the SHAPE extension also allows for an "input shape" that does exactly what I wanted: lets you catch events on only part of the window and pass them through on the rest, regardless of which parts of the window are visible.

Knowing that was great. Making it work was another matter. Input shapes turn out to be something hardly anyone uses, and there's very little documentation.

In both C and Python, I struggled with drawing onto a pixmap and using it to set the input shape. Finally I realized that there's a call to set the input shape from an X region. It's much easier to build a region out of rectangles than to draw onto a pixmap.

I got a C demo working first. The essence of it was this:

    if (!XShapeQueryExtension(dpy, &shape_event_base, &shape_error_base)) {
        printf("No SHAPE extension\n");
        return;
    }

    /* Make a shaped window, a rectangle smaller than the total
     * size of the window. The rest will be transparent.
     */
    region = CreateRegion(outerBound, outerBound,
                          XWinSize-outerBound*2, YWinSize-outerBound*2);
    XShapeCombineRegion(dpy, win, ShapeBounding, 0, 0, region, ShapeSet);
    XDestroyRegion(region);

    /* Make a frame region.
     * So in the outer frame, we get input, but inside it, it passes through.
     */
    region = CreateFrameRegion(innerBound);
    XShapeCombineRegion(dpy, win, ShapeInput, 0, 0, region, ShapeSet);
    XDestroyRegion(region);

CreateRegion sets up rectangle boundaries, then creates a region from those boundaries:

Region CreateRegion(int x, int y, int w, int h) {
    Region region = XCreateRegion();
    XRectangle rectangle;
    rectangle.x = x;
    rectangle.y = y;
    rectangle.width = w;
    rectangle.height = h;
    XUnionRectWithRegion(&rectangle, region, region);

    return region;
}

CreateFrameRegion() is similar but a little longer. Rather than post it all here, I've created a GIST: transregion.c, demonstrating X11 shaped input.

Next problem: once I had shaped input working, I could no longer move or resize the window, because the window manager passed events through the window's titlebar and decorations as well as through the rest of the window. That's why you'll see that CreateFrameRegion call in the gist: I had a theory that if I omitted the outer part of the window from the input shape, and handled input normally around the outside, maybe that would extend to the window manager decorations. But the problem turned out to be a minor Openbox bug, which Mikael quickly tracked down (in openbox/frame.c, in the XShapeCombineRectangles call on line 321, change ShapeBounding to kind). Openbox developers are the greatest!

Input Shapes in Python

Okay, now I had a proof of concept: X input shapes definitely can work, at least in C. How about in Python?

There's a set of python-xlib bindings, and they even support the SHAPE extension, but they have no documentation and didn't seem to include input shapes. I filed a GitHub issue and traded a few notes with the maintainer of the project. It turned out the newest version of python-xlib had been completely rewritten, and supposedly does support input shapes. But the API is completely different from the C API, and after wasting about half a day tweaking the demo program trying to reverse engineer it, I gave up.

Fortunately, it turns out there's a much easier way. Python-gtk has shape support, even including input shapes. And if you use regions instead of pixmaps, it's this simple:

    if self.is_composited():
        region = gtk.gdk.region_rectangle(gtk.gdk.Rectangle(0, 0, 1, 1))
        self.window.input_shape_combine_region(region, 0, 0)

My transimageviewer.py came out nice and simple, inheriting from imageviewer.py and adding only translucency and the input shape.

If you want to define an input shape based on pixmaps instead of regions, it's a bit harder and you need to use the Cairo drawing API. I never got as far as working code, but I believe it should go something like this:

    # Warning: untested code!
    bitmap = gtk.gdk.Pixmap(None, self.width, self.height, 1)
    cr = bitmap.cairo_create()
    # Draw a white circle in a black rect:
    cr.rectangle(0, 0, self.width, self.height)
    cr.set_operator(cairo.OPERATOR_CLEAR)
    cr.fill();

    # draw white filled circle
    cr.arc(self.width / 2, self.height / 2, self.width / 4,
           0, 2 * math.pi);
    cr.set_operator(cairo.OPERATOR_OVER);
    cr.fill();

    self.window.input_shape_combine_mask(bitmap, 0, 0)

The translucent image viewer worked just as I'd hoped. I was able to take a JPG of a trailmap, overlay it on top of a PyTopo window, scale the JPG using the normal Openbox window manager handles, then right-click on top of trail markers to set waypoints. When I was done, a "Save as GPX" in PyTopo and I had a file ready to take with me on my phone.

Comments be gone

We are sorry to inform you that we had to disable comments on this website. Currently there are more than 21 thousand messages in the spam queue plus another 2.6 thousand in the review queue. There is no way we can handle those. If you want to get in touch with us then head over to the contact page and find what suits you best – mailing lists, IRC, bug tracker, …
We hope to be able to get some alternative up and running, but that might take some time as it's not really a high priority for us.

darktable 2.2.4 released

we're proud to announce the fourth bugfix release for the 2.2 series of darktable, 2.2.4!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.4.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksum is:

$ sha256sum darktable-2.2.4.tar.xz
bd5445d6b81fc3288fb07362870e24bb0b5378cacad2c6e6602e32de676bf9d8  darktable-2.2.4.tar.xz
$ sha256sum darktable-2.2.4.6.dmg
b7e4aeaa4b275083fa98b2a20e77ceb3ee48af3f7cc48a89f41a035d699bd71c  darktable-2.2.4.6.dmg

Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please help us by visiting https://raw.pixls.us/ and making sure that we have the full raw sample set for your camera under CC0 license!

and the changelog as compared to 2.2.3 can be found below.

New features:

  • Better brush trace handling of opacity to get better control.
  • tools: Add script to purge stale thumbnails
  • tools: A script to watch a folder for new images

Bugfixes:

  • DNG: fix camera name demangling. It used to report the wrong name for some cameras.
  • When using wayland, prefer XWayland, because native Wayland support is not fully functional yet
  • EXIF: properly handle image orientation '2' and '4' (swap them)
  • OpenCL: a few fixes in profiled denoise, demosaic and colormapping
  • tiling: do not process uselessly small end tiles
  • masks: avoid assertion failure in early phase of path generation
  • masks: reduce risk of unwanted self-finalization of small path shapes
  • Fix rare issue when expanding $() variables in import/export string
  • Camera import: fix ignore_jpg setting not having an effect
  • Picasa web exporter: unbreak after upstream API change
  • collection: fix query string for folders ( 'a' should match 'a/b' and 'a/c', but not 'ac/' )

Base Support:

  • Fujifilm X-T20 (only uncompressed raw, at the moment)
  • Fujifilm X100F (only uncompressed raw, at the moment)
  • Nikon COOLPIX B700 (12bit-uncompressed)
  • Olympus E-M1MarkII
  • Panasonic DMC-TZ61 (4:3, 3:2, 1:1, 16:9)
  • Panasonic DMC-ZS40 (4:3, 3:2, 1:1, 16:9)
  • Sony ILCE-6500

Noise Profiles:

  • Canon PowerShot G7 X Mark II
  • Olympus E-M1MarkII
  • LGE Nexus 5X

Wed 2017/Apr/05

  • The Rust+GNOME Hackfest in Mexico City, part 1

    Last week we had a Rust + GNOME hackfest in Mexico City (wiki page), kindly hosted by the Red Hat office there, in its very new and very cool office in the 22nd floor of a building and with a fantastic view of the northern part of the city. Allow me to recount the event briefly.

    View from the Red Hat office

    Inexplicably, in GNOME's 20 years of existence, there has never been a hackfest or event in Mexico. This was the perfect chance to remedy that and introduce people to the wonders of Mexican food.

    My friend Joaquín Rosales, also from Xalapa, joined us as he is working on a very cool Rust-based monitoring system for small-scale spirulina farms using microcontrollers.

    Alberto Ruiz started getting people together around last November, with a couple of video chats with Rust maintainers to talk about making it possible to write GObject implementations in Rust. Niko Matsakis helped along with the gnarly details of making GObject's and Rust's memory management play nicely with each other.

    GObject implementations in Rust

    During the hackfest, I had the privilege of sitting next to Niko for an intensive session of pair programming, functioning as a halfway-reliable GObject reference while I fixed my non-working laptop (intermission: kids, never update all your laptop's software right before traveling. It will not work once you reach your destination.).

    The first thing was to actually derive a new class from GObject, but in Rust. In C there is a lot of boilerplate code to do this, starting with the my_object_get_type() function. Civilized C code now defines all that boilerplate with the G_DEFINE_TYPE() macro. You can see a bit of the expanded code here.

    What G_DEFINE_TYPE() does is to define a few functions that tell the GType system about your new class. You then write a class_init() function where you define your table of virtual methods (just function pointers in C), you register signals which your class can emit (like "clicked" for a GtkButton), and you can also define object properties (like "text" for the textual contents of a GtkEntry) and whether they are readable/writable/etc.

    You also define an instance_init() function which is responsible for initializing the memory allocated to instances of your class. In C this is quite normal: you allocate some memory, and then you are responsible for initializing it. In Rust things are different: you cannot have uninitialized memory unless you jump through some unsafe hoops; you create fully-initialized objects in a single shot.

    Finally, you define a finalize function which is responsible for freeing your instance's data and chaining to the finalize method in your superclass.

    In principle, Rust lets you do all of this in the same way that you would in C, by calling functions in libgobject. In practice it is quite cumbersome. All the magic macros we have to define the GObject implementation boilerplate in gtype.h are there precisely because doing it in "plain C" is quite a drag. Rust makes this no different, but you can't use the C macros there.

    A GObject in Rust

    The first task was to write an actual GObject-derived class in Rust by hand, just to see how it could be done. Niko took care of this. You can see this mock object here. For example, here are some bits:

    #[repr(C)]
    pub struct Counter {
        parent: GObject,
    }
    
    struct CounterPrivate {
        f: Cell<u32>,
        dc: RefCell<Option<DropCounter>>,
    }
    
    #[repr(C)]
    pub struct CounterClass {
        parent_class: GObjectClass,
        add: Option<extern fn(&Counter, v: u32) -> u32>,
        get: Option<extern fn(&Counter) -> u32>,
        set_drop_counter: Option<extern fn(&Counter, DropCounter)>,
    }
    	    

    Here, Counter and CounterClass look very similar to the GObject boilerplate you would write in C. Both structs have GObject and GObjectClass as their first fields, so when doing C casts they will have the proper size and fields within those sub-structures.

    CounterPrivate is what you would declare as the private structure with the actual fields for your object. Here, we have an f: Cell<u32> field, used to hold an int which we will mutate, and a DropCounter, a utility struct which we will use to assert that our Rust objects get dropped only once from the C-like implementation of the finalize() function.

    Also, note how we are declaring two virtual methods in the CounterClass struct, add() and get(). In C code that defines GObjects, that is how you can have overridable methods: by exposing them in the class vtable. Since GObject allows "abstract" methods by setting their vtable entries to NULL, we use an Option around a function pointer.

    The following code is the magic that registers our new type with the GObject machinery. It is what would go in the counter_get_type() function if it were implemented in C:

    lazy_static! {
        pub static ref COUNTER_GTYPE: GType = {
            unsafe {
                gobject_sys::g_type_register_static_simple(
                    gobject_sys::g_object_get_type(),
                    b"Counter\0" as *const u8 as *const i8,
                    mem::size_of::<CounterClass>() as u32,
                    Some(CounterClass::init),
                    mem::size_of::<Counter>() as u32,
                    Some(Counter::init),
                    GTypeFlags::empty())
            }
        };
    }
    	    

    If you squint a bit, this looks pretty much like the corresponding code in G_DEFINE_TYPE(). That lazy_static!() means, "run this only once, no matter how many times it is called"; it is similar to g_once_*(). Here, gobject_sys::g_type_register_static_simple() and gobject_sys::g_object_get_type() are the direct Rust bindings to the corresponding C functions; they come from the low-level gobject-sys module in gtk-rs.

    Here is the equivalent to counter_class_init():

    impl CounterClass {
        extern "C" fn init(klass: gpointer, _klass_data: gpointer) {
            unsafe {
                let g_object_class = klass as *mut GObjectClass;
                (*g_object_class).finalize = Some(Counter::finalize);
    
                gobject_sys::g_type_class_add_private(klass, mem::size_of::<CounterPrivate>());
    
                let klass = klass as *mut CounterClass;
                let klass: &mut CounterClass = &mut *klass;
                klass.add = Some(methods::add);
                klass.get = Some(methods::get);
                klass.set_drop_counter = Some(methods::set_drop_counter);
            }
        }
    }	    

    Again, this is pretty much identical to the C implementation of a class_init() function. We even set the standard g_object_class.finalize field to point to our finalizer, written in Rust. We add a private structure with the size of our CounterPrivate...

    ... which we later are able to fetch like this:

    impl Counter {
        fn private(&self) -> &CounterPrivate {
            unsafe {
                let this = self as *const Counter as *mut GTypeInstance;
                let private = gobject_sys::g_type_instance_get_private(this, *COUNTER_GTYPE);
                let private = private as *const CounterPrivate;
                &*private
            }
        }
    }
    	    

    I.e. we call g_type_instance_get_private(), just like C code would, to get the private structure. Then we cast it to our CounterPrivate and return that.

    But that's all boilerplate

    Yeah, pretty much. But don't worry! Niko made it possible to get rid of it in a comfortable way! But first, let's look at the non-boilerplate part of our Counter object. Here are its two interesting methods:

    mod methods {
        #[allow(unused_imports)]
        use super::{Counter, CounterPrivate, CounterClass};
    
        pub(super) extern fn add(this: &Counter, v: u32) -> u32 {
            let private = this.private();
            let v = private.f.get() + v;
            private.f.set(v);
            v
        }
    
        pub(super) extern fn get(this: &Counter) -> u32 {
            this.private().f.get()
        }
    }
    	    

    These should be familiar to people who implement GObjects in C. You first get the private structure for your instance, and then frob it as needed.

    No boilerplate, please

    Niko spent the following two days writing a plugin for the Rust compiler so that we can have a mini-language to write GObject implementations comfortably. Instead of all the gunk above, you can simply write this:

    extern crate gobject_gen;
    use gobject_gen::gobject_gen;
    
    use std::cell::Cell;
    
    gobject_gen! {
        class Counter {
            struct CounterPrivate {
                f: Cell<u32>
            }
    
            fn add(&self, x: u32) -> u32 {
                let private = self.private();
                let v = private.f.get() + x;
                private.f.set(v);
                v
            }
    
            fn get(&self) -> u32 {
                self.private().f.get()
            }
        }
    }	      
    	    

    This call to gobject_gen!() gets expanded to the necessary boilerplate code. That code knows how to register the GType, how to create the class_init() and instance_init() functions, how to register the private structure and the private() utility to get it, and how to define finalize(). It will fill the vtable as appropriate with the methods you create.

    We figured out that this looks pretty much like Vala, except that it generates GObjects in Rust, callable by Rust itself or by any other language, once the GObject Introspection machinery around this is written. That is, just like Vala, but for Rust.

    And this is pretty good! We are taking an object system in C, which we must keep around for compatibility reasons and for language bindings, and making an easy way to write objects for it in a safe, maintained language. Vala is safer than plain C, but it doesn't have all the machinery to guarantee correctness that Rust has. Finally, Rust is definitely better maintained than Vala.

    There is still a lot of work to do. We have to support registering and emitting signals, registering and notifying GObject properties, and probably some other GType arcana as well. Vala already provides nice syntax to do this, and we can probably use it with only a few changes.

    Finally, the ideal situation would be for this compiler plugin, or an associated "cargo gir" step, to emit the necessary GObject Introspection information so that these GObjects can be called from other languages automatically. We could also spit out C header files for consuming the Rust GObjects from C.

    Railing with horses

    And the other people in the hackfest?

    I'll tell you in the next blog post!

April 02, 2017

Stellarium 0.12.9

Stellarium 0.12.9 has been released today!

The 0.12 series is an LTS for owners of old computers (old, with weak graphics cards), and this is a release with one fix for the Solar System Editor plugin.

March 31, 2017

Show mounted filesystems

Used to be that you could see your mounted filesystems by typing mount or df. But with modern Linux kernels, all sorts of things are implemented as virtual filesystems -- proc, /run, /sys/kernel/security, /dev/shm, /run/lock, /sys/fs/cgroup -- I have no idea what most of these things are, except that they make it much more difficult to answer questions like "Where did that ebook reader mount, and did I already unmount it so it's safe to unplug it?" Neither mount nor df has a simple option to get rid of all the extraneous virtual filesystems and show only real filesystems.

http://unix.stackexchange.com/questions/177014/showing-only-interesting-mount-points-filtering-non-interesting-types had some suggestions that got me started:

mount -t ext3,ext4,cifs,nfs,nfs4,zfs
mount | grep -E --color=never  '^(/|[[:alnum:]\.-]*:/)'
Another answer there says it's better to use findmnt --df, but that still shows all the tmpfs entries (findmnt --df | grep -v tmpfs might do the job).

And real mounts are always mounted on a filesystem path starting with /, so you can do mount | grep '^/'.

But it also turns out that mount will accept a blacklist of types as well as a whitelist: -t notype1,notype2... I prefer the idea of excluding a blacklist of filesystem types versus restricting it to a whitelist; that way if I mount something unusual like curlftpfs that I forgot to add to the whitelist, or I mount a USB stick with a filesystem type I don't use very often (ntfs?), I'll see it.

On my system, this was the list of types I had to disable (sheesh!):

mount -t nosysfs,nodevtmpfs,nocgroup,nomqueue,notmpfs,noproc,nopstore,nohugetlbfs,nodebugfs,nodevpts,noautofs,nosecurityfs,nofusectl

df is easier: like findmnt, it excludes most of those filesystem types to begin with, so there are only a few you need to exclude:

df -hTx tmpfs -x devtmpfs -x rootfs

Obviously I don't want to have to type either of those commands every time I want to check my mount list. So I put this in my .zshrc. If you call mount or df with no args, it applies the filters; otherwise it passes your arguments through. Of course, you could make a similar alias for findmnt.

# Mount and df are no longer useful to show mounted filesystems,
# since they show so much irrelevant crap now.
# Here are ways to clean them up:
mount() {
    if [[ $# -ne 0 ]]; then
        /bin/mount $*
        return
    fi

    # Else called with no arguments: we want to list mounted filesystems.
    /bin/mount -t nosysfs,nodevtmpfs,nocgroup,nomqueue,notmpfs,noproc,nopstore,nohugetlbfs,nodebugfs,nodevpts,noautofs,nosecurityfs,nofusectl
}

df() {
    if [[ $# -ne 0 ]]; then
        /bin/df $*
        return
    fi

    # Else called with no arguments: we want to list mounted filesystems.
    /bin/df -hTx tmpfs -x devtmpfs -x rootfs
}

Update: Chris X Edwards suggests lsblk or lsblk -o 'NAME,MOUNTPOINT'. It wouldn't have solved my problem because it only shows /dev devices, not virtual filesystems like sshfs, but it's still a command worth knowing about.

Recipe Icon

Initially I was going to do a more elaborate workflow tutorial, but time flies when you’re having fun on 3.24. With the release out, I’d rather publish this than let it rot. Maybe the next one!

Recipe Icon

Krita 3.1.3 Alpha released

We’re working like crazy on the next versions of Krita — 3.1.3 and 4.0. Krita 3.1.3 will be a stable bugfix release, 4.0 will have the vector work and the python scripting. This week we’ve prepared the first 3.1.3 alpha builds for testing! The final release of 3.1.3 is planned for end of April.

We’re still working on fixing more bugs for the final 3.1.3 release, so please test these builds, and if you find an issue, check whether it’s already in the bug tracker, and if not, report it!

Note for Windows Users

We are still struggling with Intel’s GPU drivers; recent Windows updates seem to have broken Krita’s OpenGL canvas on some systems, and since we don’t have access to a broken system, we cannot work around the issue.

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

A snap image for the Ubuntu App Store will be available soon. You can also use the Krita Lime PPA to install Krita 3.1.3-alpha.2 on Ubuntu and derivatives.

OSX

Source code

md5sums

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here.

March 30, 2017

Game art course released!

Here’s Nathan with a piece of good news:

After months of work, I’m glad to announce that Make Professional Painterly Game Art with Krita is out! It is the first Game Art training for your favourite digital painting program.

In this course, you’ll learn:
1. The techniques professionals use to make beautiful sprites
2. How to create characters, background and even simple UI
3. How to build smart, reusable assets

With the pro and premium versions, you’ll also get the opportunity to improve your art fundamentals, become more efficient with Krita, and build a detailed game mockup for your portfolio.

The course page has free sample tutorials and the answers to all of your questions.

Learn more:
https://gumroad.com/l/krita-game-art-tutorial-1

 

March 29, 2017

GIMP is Going to LGM!



Tall and tan and young and lovely...

This year’s Libre Graphics Meeting (2017) is going to be held in the lovely city seen above, Rio de Janeiro, Brazil! This is an important meeting for so many people in the Free/Libre art community, as it’s one of the only times they have an opportunity to meet face to face.

We’ve had some folks attending past LGMs (Leipzig and London) and it’s a wonderful opportunity to spend some time with friends. (Also, @frd from the community will be there!)

GIMP and darktable at LGM GIMPers, some darktable folks, and even Nate Willis at the flat during LGM/London!

So in the spirit of camaraderie, I have a request…

The GIMP team will be in attendance this year. I happen to have a fondness for them so I’m asking anyone reading this to please head over and donate to the project.

GIMP Wilber

That link is for the GNOME PayPal account, but there are other ways to donate as well.

This is one of the few times that the GIMP team gets a chance to meet in person. They use the time to hack at GIMP and to manage internal business. The time they get to spend together is invaluable to the project and by extension everyone that uses GIMP.

Just look at these faces! Surely this (Brady) Bunch of folks is worth helping to get a better GIMP?

GIMPers at LGM/London Left to right, top to bottom:
Ville, Mitch, Øyvind,
Simon, Liam, João,
Aryeom, Jehan, Michael

Attending

Besides @frd I’m not sure who else from the community might be attending, so if I’ve missed you I apologize! Please feel free to use this topic to communicate and coordinate if you’d like.

It appears that personally I’m on a biennial schedule with attending LGM - so I’m looking forward to next year to be able to catch up with everyone!

March 27, 2017

Interview with Dolly

Could you tell us something about yourself?

My nickname is Dolly, I am 11 years old, I live in Cannock, Staffordshire, England. I am at Secondary school, and at the weekends I attend drama, dance and singing lessons, I like drawing and recently started using the Krita app.

How did you find out about Krita?

My dad and my friend told me about it.

Do you draw on paper too, and which is more fun, paper or computer?

I draw on paper, and I like Krita more than paper art as there’s a lot more colours instantly available than when I do paper art.

What kind of pictures do you draw?

I mostly draw my original character (called Phantom), I draw animals, trees and stars too.

What is easy to do with Krita? What is difficult to do?

I think choosing the colour is easy, it’s really good. I find getting the right brush size a little difficult due to the scrolling needed to select the brush size.

Which thing about Krita is most fun?

The thing most fun for me is colouring in my pictures as there is a great range of colour available, far more than in my pencil case.

Is there anything in Krita that you’d like to be different?

I think Krita is almost perfect the way it is at the moment; however, if the brush selection expanded automatically instead of having to scroll through, it would be better for me.

Can you show us a picture you made with Krita?

I can, I have attached some of my favourites that I have done for my friends.

How did you make it?

I usually start with a standard base line made up of a circle for the face and the ears, then I normally add the hair and the other features (eyes, nose and mouth) and finally colour and shade and include any accessories.

Is there anything else you’d like to tell us?

I really enjoy Krita, I think it’s one of the best drawing programs there is!

March 25, 2017

Reading keypresses in Python

As part of preparation for Everyone Does IT, I was working on a silly hack to my Python script that plays notes and chords: I wanted to use the computer keyboard like a music keyboard, and play different notes when I press different keys. Obviously, in a case like that I don't want line buffering -- I want the program to play notes as soon as I press a key, not wait until I hit Enter and then play the whole line at once. In Unix that's called "cbreak mode".

There are a few ways to do this in Python. The most straightforward way is to use the curses library, which is designed for console based user interfaces and games. But importing curses is overkill just to do key reading.

Years ago, I found a guide on the official Python Library and Extension FAQ: Python: How do I get a single keypress at a time?. I'd even used it once, for a one-off Raspberry Pi project that I didn't end up using much. I hadn't done much testing of it at the time, but trying it now, I found a big problem: it doesn't block.

Blocking is whether the read() waits for input or returns immediately. If I read a character with c = sys.stdin.read(1) but there's been no character typed yet, a non-blocking read will throw an IOError exception, while a blocking read will wait, not returning until the user types a character.
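To make that concrete, here's a minimal sketch of the two behaviors (assuming stdin has already been put into cbreak and non-blocking mode, as shown later in this post):

import sys

# Non-blocking stdin: read() raises IOError (EAGAIN) when no character
# has been typed yet. A blocking read would simply wait here instead.
try:
    c = sys.stdin.read(1)
except IOError:
    c = None    # nothing typed yet; we'd have to try again later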

In the code on that Python FAQ page, blocking looks like it should be optional. This line:

fcntl.fcntl(fd, fcntl.F_SETFL, oldflags | os.O_NONBLOCK)
is the part that requests non-blocking reads. Skipping that should let me read characters one at a time, blocking until each character is typed. But in practice, it doesn't work. If I omit the O_NONBLOCK flag, reads never return, not even if I hit Enter; if I set O_NONBLOCK, the read immediately raises an IOError. So I have to call read() over and over, spinning the CPU at 100% while I wait for the user to type something.

The way this is supposed to work is documented in the termios man page. Part of what tcgetattr returns is something called the cc structure, which includes two members called Vmin and Vtime. man termios is very clear on how they're supposed to work: for blocking, single character reads, you set Vmin to 1 (that's the number of characters you want it to batch up before returning), and Vtime to 0 (return immediately after getting that one character). But setting them in Python with tcsetattr doesn't make any difference.
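For reference, here's roughly what that attempt looks like in Python -- a minimal sketch of what the man page describes which, as noted above, made no difference for me in practice:

import sys, termios

fd = sys.stdin.fileno()
attrs = termios.tcgetattr(fd)   # [iflag, oflag, cflag, lflag, ispeed, ospeed, cc]
attrs[3] = attrs[3] & ~(termios.ICANON | termios.ECHO)
attrs[6][termios.VMIN] = 1      # batch up one character before returning
attrs[6][termios.VTIME] = 0     # and return immediately after getting it
termios.tcsetattr(fd, termios.TCSANOW, attrs)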

(Python also has a module called tty that's supposed to simplify this stuff, and you should be able to call tty.setcbreak(fd). But that didn't work any better than termios: I suspect it just calls termios under the hood.)

But after a few hours of fiddling and googling, I realized that even if Python's termios can't block, there are other ways of blocking on input. The select system call lets you wait on any file descriptor until it has input. So I should be able to set stdin to be non-blocking, then do my own blocking by waiting for it with select.

And that worked. Here's a minimal example:

import sys, os
import termios, fcntl
import select

fd = sys.stdin.fileno()

# Save the original terminal settings and flags so we can restore them:
oldterm = termios.tcgetattr(fd)
oldflags = fcntl.fcntl(fd, fcntl.F_GETFL)

newattr = termios.tcgetattr(fd)
newattr[3] = newattr[3] & ~termios.ICANON
newattr[3] = newattr[3] & ~termios.ECHO
termios.tcsetattr(fd, termios.TCSANOW, newattr)

fcntl.fcntl(fd, fcntl.F_SETFL, oldflags | os.O_NONBLOCK)

print "Type some stuff"
while True:
    inp, outp, err = select.select([sys.stdin], [], [])
    c = sys.stdin.read()
    if c == 'q':
        break
    print "-", c

# Reset the terminal:
termios.tcsetattr(fd, termios.TCSAFLUSH, oldterm)
fcntl.fcntl(fd, fcntl.F_SETFL, oldflags)

A less minimal example: keyreader.py, a class to read characters, with blocking and echo optional. It also cleans up after itself on exit, though most of the time that seems to happen automatically when I exit the Python script.

March 24, 2017

RawTherapee and Pentax Pixel Shift



Supporting multi-file raw formats

What is Pixel Shift?

Modern digital sensors (with a few exceptions) use an arrangement of RGB filters over a square grid of photosites. For a given 2x2 square of photosites, the filters are designed to allow two green values and one each of red and blue through to the photosites. These are arranged on a grid:

Bayer pattern on sensor

The pattern is known as a Bayer pattern (after the creator Bryce Bayer of Eastman Kodak). The resulting pattern shows how each RGB is offset into the grid.

Bayer pattern on sensor profile

Each of the pixel sites captures a single color. In order to produce a full color representation at each pixel, the other color values need to be interpolated from the surrounding grid. This interpolation and methods for calculating it are referred to as demosaicing. The methods for accomplishing this vary across different algorithms.

Bayer Interpolation Example The final RGB value for the initially Red pixel needs to be interpolated from the surrounding Blue and Green pixels.
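To make the idea concrete, here is a toy sketch of naive bilinear interpolation (purely illustrative -- this is not RawTherapee's demosaicing code, and real algorithms are far more sophisticated):

import numpy as np

# mosaic: 2D array of raw sensor values in a Bayer layout. At a red or
# blue photosite, the four orthogonal neighbours are green, so a naive
# estimate of the missing green value is simply their average.
def bilinear_green(mosaic, y, x):
    return (mosaic[y - 1, x] + mosaic[y + 1, x] +
            mosaic[y, x - 1] + mosaic[y, x + 1]) / 4.0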

Unfortunately, this can often result in problems. There can be chromatic aliasing problems resulting in odd color fringing and roughness on edges or a loss of detail and sharpness.

Pixel Shift

Pentax’s Pixel Shift (available on the K-1, K-3 II, KP, and K-70) attempts to alleviate some of these problems through a novel approach: capturing four images in quick succession and moving the entire camera sensor by a single pixel for each shot. This has the effect of capturing a full RGB value at each pixel location:

Pixel Shift Example Diagram Pixel Shift shifts the sensor by one pixel in each direction to be able to generate a full set of RGB values at each photosite.

This means a full RGB value for a pixel location can be created without having to interpolate from neighboring values.
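Conceptually, for a perfectly static scene the combination step needs no interpolation at all. Here is a simplified sketch (the per-frame colour assignment is hypothetical; the real ordering depends on the camera's shift sequence):

import numpy as np

# r, g1, g2, b: the four aligned mosaics from one Pixel Shift burst,
# arranged so that each array holds the red, first green, second green
# and blue samples for every photosite (hypothetical layout).
def combine_static(r, g1, g2, b):
    green = (g1 + g2) / 2.0                   # two green samples per site
    return np.stack([r, green, b], axis=-1)   # full RGB, no demosaicing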

Advantages

Less Noise

If you look carefully at the Bayer pattern, you’ll notice that when shifting to adjacent pixels there will always be two green values captured per pixel. The average of these green values helps to suppress noise that may have been interpolated and spread through a normal, single-shot raw file.

Pixel Shift Noise Reduction Example Top: single raw frame, Bottom: Pixel Shift

Less Moiré

Avoiding the interpolation of pixel colors from surrounding photosites helps to reduce the appearance of Moiré in the final result:

Pixel Shift Moiré Reduction Example Top: single raw frame, Bottom: Pixel Shift

Increased Resolution

This method is similar in concept to what was previously seen when Olympus announced their “High Resolution” mode for the OMD E-M5mkII camera (or manually as we previously described in this blog post). In that case they combine 8 frames moved by sub-pixel amounts to increase the overall resolution. The difference here is that Olympus generates a single, combined raw file from the results, while Pixel Shift gets you access to each of the four raw files before they’re combined.

In each case, a higher resolution image can be created from the results:

Pixel Shift Increased Resolution Example Top: single raw frame, Bottom: Pixel Shift

Disadvantages

Movement

As with most approaches for capturing multiple images and combining them, a particularly problematic area is when there are objects in motion between the frames being captured. This is a common problem when stitching panoramic photography, when creating image stacks for noise reduction, and when combining images using methods such as Pixel Shift.

Although…

The RawTherapee Approach

Simply combining four static frames together is really trivial, and is something that all the other Pixel Shift-capable software can do without issue. The real world is not often so accommodating as a studio setup, and that is where the recent work done by @Ingo and @Ilias on RawTherapee really begins to shine.

What they’ve been working on in RawTherapee is to improve the detection of movement in a scene. There are several types of movement possible:

  • Objects showing at different places in a scene such as fast moving cars.
  • Partly moving objects like foliage in the wind.
  • Moving objects reflecting light onto static objects in the scene.
  • Changing illumination conditions such as long exposures at sunset.

All of these types of movement need to be detected to avoid the artifacts they may cause in the final shot.

One of the key features of Pixel Shift movement detection in RawTherapee is that it allows you to show the movement mask, so you get feedback on which regions of the image are detected as movement and which are static. For the regions with movement RawTherapee will then use the demosaiced frame of your choice to fill it in, and for regions without movement it will use the Pixel Shift combined image with more detail and less noise.

Pixel Shift Movement Mask from RawTherapee Unique to RawTherapee is the option to export the resulting motion mask
(for those that may want to do further blending/processing manually).
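A rough sketch of that per-region selection (again illustrative, not RawTherapee's actual implementation; mask, fallback_rgb and pixelshift_rgb are hypothetical names):

import numpy as np

# mask: 2D boolean array, True where motion was detected.
# fallback_rgb: a single demosaiced frame of your choice (H x W x 3).
# pixelshift_rgb: the combined Pixel Shift image (H x W x 3).
def blend_by_motion(mask, fallback_rgb, pixelshift_rgb):
    return np.where(mask[..., None], fallback_rgb, pixelshift_rgb)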

The accuracy of movement detection in RawTherapee leads to much better handling of motion artifacts that works well in places where proprietary solutions fall short. For most cases the Automatic motion correction mode works well, but you can also fine tune the parameters in custom mode to correctly detect motion in high ISO shots.

Besides being the only option (barring dcrawps, possibly) to process Pixel Shift files on Linux, RawTherapee has some other neat options that aren’t found in other solutions. One of them is the ability to export the actual movement mask separately from the image. This lets users generate separate outputs from RT and combine them later using the movement mask. Another option is the ability to choose which of the other frames to use for filling in the movement areas of the image.

Pixel Shift Support in Other Software

Pentax’s own Digital Camera Utility (a rebranded version of SilkyPix) naturally supports Pixel Shift, but as with most vendor-bundled software it can be slow, unwieldy, and a little buggy sometimes. Having said that, the results do look good, and at least the “Motion Correction” can be used with this software.

Adobe Camera Raw (ACR) got support for Pixel Shift files in version 9.5.1 (but doesn’t utilize the “Motion Correction”). In fact, ACR didn’t have support at the time that DPReview.com looked at the feature last year, causing them to retract the article and re-post when they had a chance to use a version of ACR with support.

A recent look at Pixel Shift processing over at DPReview.com showed some interesting results.

Sample Image Raw The image used in the DPReview article. ©Chris M Williams

We’re going to look at some 100% crops from that article and compare them to the results available using RawTherapee (the latest development version, to be released as 5.1 in April). The RawTherapee versions were set to the most neutral settings with only an exposure adjustment to match other samples better.

Looking first at an area of foliage with motion, the places where there are issues become apparent.

For reference, here is the Adobe Camera Raw (ACR) version of a single frame from a Pixel Shift file:

Pixel Shift Comparison #1

The results with Pixel Shift on, and motion correction on, from straight-out-of-camera (SOOC), Adobe Camera Raw (ACR), SilkyPix, and RawTherapee (RT) are decidedly mixed. In all but the RT version, there’s a very clear problem with effective blending and masking of the frames in areas with motion:

Pixel Shift Comparison #2

Things look much worse for Adobe Camera Raw when looking at high-motion areas like the water spray at the foot of the waterfall, though SilkyPix does a much better job here.

The ACR version of a single frame for reference:

Pixel Shift Comparison #2

Both the SOOC and SilkyPix versions handle all of the movement well here. RawTherapee also does a great job blending the frames despite all of the movement. Adobe Camera Raw is not doing well at all…

Pixel Shift Comparison #2

Finally, let’s look at a frame full of movement, such as the surface of the water.

The ACR version of a single frame for reference:

Pixel Shift Comparison #3

In a frame full of movement the SOOC, ACR, and SilkyPix processing all struggle to combine a clean set of frames. They exhibit a pixel pattern from the processing, and the ACR version begins to introduce odd colors:

Pixel Shift Comparison #3

As mentioned earlier, a unique feature of RawTherapee is the ability to show the motion mask. Here is an example of the motion mask for this image:

Pixel Shift Motion Mask The motion mask generated by RawTherapee for the above image.

Also worth mentioning is the “Smooth Transitions” feature in RawTherapee. When there are regions with and without motion, the regions with motion are masked and filled in with data from a demosaiced frame of your choice. The other regions are taken from the Pixel Shift combined image. This can occasionally lead to harsh transitions between the two.

For instance, a transition as processed in SilkyPix:

Pixel Shift Transition SilkyPix

RawTherapee’s “Smooth Transitions” feature does a much better job handling the transition:

Pixel Shift Transition RawTherapee
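One generic way to get such a smooth handover (purely illustrative -- I don't know RawTherapee's actual method, and scipy is an assumed dependency here) is to feather the hard motion mask and blend with continuous weights instead of switching abruptly:

import numpy as np
from scipy.ndimage import gaussian_filter

# Feather the 0/1 motion mask, then cross-fade between the two sources
# so the seam between masked and unmasked regions is gradual.
def smooth_blend(mask, fallback_rgb, pixelshift_rgb, sigma=2.0):
    w = gaussian_filter(mask.astype(float), sigma)[..., None]
    return w * fallback_rgb + (1.0 - w) * pixelshift_rgb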

In Conclusion

In another example of the power and community of Free/Libre and Open Source Software we have a great enhancement to a project based on feedback and input from the users. In this case, it all started with a post on the RawTherapee forums.

Thanks to the hard work of @Ingo and @Ilias, Pentax shooters now have Pixel Shift-capable software that is not only FLOSS but also produces better results than the proprietary solutions!

Not so coincidentally, community member @nosle gave permission to use one of his PS files for everyone to try processing, on the Play pixelshift thread. If you’d like to practice, consider heading over to get his file and feedback from others!

Pixel Shift is currently in the development branch of RawTherapee and is slated for release with version 5.1.

March 22, 2017

Industry support for Blender

Blender is a true community effort. It’s an open public project where everyone’s welcome to contribute. In the past year, a growing number of corporations started to contribute to Blender as well.

We’d like to credit the companies who are helping to make Blender 2.8 happen.

Tangent Animation

This animation studio released Ozzy last year, a feature film entirely made with Blender. They currently have 2 new films in production. The facility has two departments (Toronto, Winnipeg) and is growing to 150 people in 2017. They exclusively use Blender for 3D.

Since October 2016, Tangent supports two Blender Institute devs full time to work on the 2.8 viewport. They also hired their own Cycles developer team, who will be contributing openly.

Nimble Collective

Nimble Collective was founded by former Dreamworks animators. Their goal is to give artists access to a complete studio pipeline, accessible online by just using your browser.

Since their launch in 2016, Nimble Collective has seriously invested in integrating Blender into their platform. They currently fund one full-time developer position at the Blender Institute to work on animation tools (dependency graph) and pipelines (Alembic).

AMD

AMD is developing a prominent open source strategy, leading the way for FOSS graphics card drivers and the new open graphics standard Vulkan.
Since the summer of 2016, AMD has supported one developer working on modernizing Blender's OpenGL code, and another working on Cycles OpenCL (GPU) rendering.

Aleph Objects

Aleph Objects is the manufacturer of the popular Libre Hardware Lulzbot 3D printer.

Starting this year, Aleph Objects will support Blender Institute in hiring two people to work full time on UI and workflow topics for Blender 2.8, with the goal of delivering a release-compatible “Blender 101” + training material for occasional 3D users.

Development Fund Sponsors

The Blender Development Fund is an essential instrument to keep Blender alive. Blender Foundation uses the Development Fund and donations to support 2-3 full-time developer positions. Big and loyal corporate sponsors of the fund are BlenderMarket, Cambridge Medical Robotics, Valve Steam Workshop, Blend4Web, CGCookie, Effetti Digitali, Insydium, Sketchfab, Wube Software, blendFX, Machinimatrix, Pepeland and RenderStreet.

Blender Institute

The studio of Blender Institute gets hardware donations – for example, we had servers from Intel and Dell, and GPUs from AMD and Nvidia. Blender Institute uses Blender Cloud income, sponsoring and subsidies to support developers and artists who work on free/open movies and 3D computer graphics production pipelines. BI currently employs 14 people, including BF chairman Ton Roosendaal.

Interview with Ito Ryou-ichi

Ryou is the amazing artist from Japan who made the Kiki plastic model. Thanks to Tyson Tan, we now have an interview with him!

Can you tell us something about yourself?

I’m Ito Ryou-ichi (Ryou), a Japanese professional modeler and figure sculptor. I work for the model hobby magazine 月刊モデルグラフィックス (Model Graphics Monthly), writing columns, building guides as well as making model samples.

When did you begin making models like this?

Building plastic models has been my hobby since I was a kid. Back then I liked building robot models from anime titles like the Gundam series. When I grew up, I once worked as a manga artist, but the job didn’t work out for me, so I became a modeler/sculptor around my 30s (in the 2000s). That said, I still love drawing pictures and manga!

How do you design them?

Being a former manga artist, I like to articulate my figure design from a manga character design perspective. First I determine the character’s general impression, then collect information like clothing style and other stuff to match that impression. Using those references, I draw several iterations until I feel comfortable with the whole result.

Although I like human and robot characters in general, my favorite has to be kemono (Japanese style furry characters). A niche genre indeed, especially in the modeling scene — you don’t see many of those figures around. But to me, it feels like a challenge in which I can make the best use of my taste and skills.

How do you make the prototypes? And how were they produced?

There are many ways of prototyping a figure. I have been using epoxy putty sculpting most of the time. First I make the figure’s skeleton using metallic wires, then put epoxy putty around the skeleton to make a crude shape for the body. I then use art knives and other tools to do the sculpting work, slowly making all the details according to the design arts. A trusty old “analogue approach” if you will. In contrast, I have been trying the digital approach with ZBrushCore as well. Although I’m still learning, I can now make something like a head out of it.

In the case of Kiki’s figure (and most of my figures), the final product is known as a “Garage Kit” — a box of unassembled, unpainted resin parts. The buyer builds and paints the figure by themselves. To turn the prototype into a garage kit, the finished prototype must first be broken into a few individual parts, making sure they have casting-friendly shapes. Silicone-based rubber is then used to make molds of those parts. Finally, liquid synthetic resin is injected into the molds and the parts are harvested after the resin has set. This method is called “resin casting”. Although I can cast them at home by myself, I often commission a professional workshop to do it for me. It costs more that way, but they can produce higher-quality parts in large quantities.

How did you learn about Krita?

Some time ago I came across Tyson Tan’s character designs on Pixiv.net and immediately became a big fan of his work. His Kiki pictures caught my attention and I did some research out of curiosity, leading me to Krita. I haven’t yet learned how to use Krita, but I’ll do that eventually.

Why did you decide to make a Kiki statuette?

Ryou: Before making Kiki, I had already collaborated with a few other artists, turning their characters into figures. Tyson has a unique way of mixing the beauty of living beings with futuristic robotic mechanisms that I really liked, so I contacted him on Twitter. I picked a few characters from his creations as candidates; one of them was Kiki. Although something more “glamorous” would have been great too, after some discussion we finally decided to make Kiki.

Tyson: During the discussions, we looked into many of my original characters, some cute, some sexy. We did realize the market prefers figures with glamorous bodies, but we really wanted to make something special. Kiki, being Krita’s mascot, the mascot of a free and open source art program, has one more layer of meaning than “just someone’s OC”. It was very courageous of Ryou to agree to a plan like that, since producing such a figure is very expensive and he would be the one to bear the monetary risk. I really admire his decision.

Where can people order them?

The Kiki figure kit can be ordered from my personal website. I send them worldwide:  http://bmwweb3.nobody.jp/mail2.html

Anything else you want to share with us?

I plan to collaborate with other artists in the future to make more furry figures like Kiki. I will contact the artist if I like their work, but you may also commission me to make a figure for a specific character.

I hope through making this Kiki figure I can connect with more people!

Ryou’s Personal Website: http://bmwweb3.nobody.jp/

 

March 21, 2017

Presentation at Charlottetown UI/UX/Design Meetup

It’s short notice, but if you’re in Charlottetown, I’ll be giving a talk tonight at 7:30pm (March 21, 2017) at the Charlottetown UI/UX/Design Meetup about the history of silverorange and our involvement with the Firefox logo.

Stellarium 0.15.2 has been released

After three months of development, the Stellarium team is proud to announce the second bugfix release in the 0.15.x series: version 0.15.2. This version contains a few bug fixes (backported from the 1.x series) and some new additions and improvements.

We have updated the configuration file and the Solar System file, so if you have an existing Stellarium installation, we highly recommend resetting the settings when you install the new version (you can choose the relevant options in the installer).

A huge thanks to our community whose contributions help to make Stellarium better!

Full list of changes:
- Added new algorithm for DeltaT from Stephenson, Morrison and Hohenkerk (2016)
- Added use of QOpenGLWidget
- Added new option to InfoString group
- Added orbit visualization data for asteroids
- Added calculation of extincted magnitudes of satellites
- Added new type of Solar system objects: sednoids
- Added object classification to the Solar System Editor plugin
- Added albedo for infostring (planets and moons)
- Added some improvements and clean up of code in Search Tool
- Added use of ISO 8601 date formatting in the Date and Time Dialog (LP: #1655630)
- Added "Restore direction to initial values" in Oculars plugin (LP: #1656085)
- Added the define for GL_DOUBLE again to restore compilation on ARM.
- Added calculation and display of horizontal and vertical scales of the visible field of view of a CCD (LP: #1656825)
- Added caching for landscapes, including preloading and some other manipulation via scripting.
- Added binning for CCD in Oculars plugin
- Added textures for DSO
- Added a mean solar day (equal to Earth's day) on the Sun for educational purposes
- Added transmitting map of object info via RemoteControl
- Added int property handling with sliders
- Added a scriptable function to retrieve landscape brightness.
- Added Spout licence.txt to Windows installer script
- Added display of solstice points (LP: #1670046)
- Added extension of objectInfoMaps per object, most useful for scripting (LP: #1670412)
- Added tentative fix for crash without network (LP: #1667703)
- Added separate storing of view direction/FoV and other settings.
- Added Meade MA12 Astrometric Eyepiece support in Oculars plugin
- Added option to change the prediction depth of Iridium flares (Satellites plugin)
- Added tooltips for AstroCalc features
- Fixed indirect dependency on QtOpenGL via QtMultimediaWidgets (LP: #1656525)
- Fixed text encoding in installer (LP: #1652515)
- Fixed changing value of n.dot in tooltip when ephemeris type is changed (LP: #1652762)
- Fixed mistakes in DeltaT stuff
- Fixed typos in AstroCalc tool
- Fixed visual style for spinup/spindown markers
- Fixed missing cross-id of Epsilon Lyrae (LP: #1653388)
- Fixed updating of the list of Solar system bodies in the AstroCalc tool when objects are added or removed
- Fixed calculation of the period for comets on elliptical orbits
- Fixed prediction of Iridium flares (LP: #1643311)
- Fixed saving visibility flag for Bookmarks button (LP: #1654164)
- Fixed refraction for Satellites (LP: #1654331)
- Fixed wrong parallax and distance for IC 59 (LP: #1655423)
- Fixed updating a text in Help window when shortcuts are changed (LP: #1656001)
- Fixed saving flags of visibility of Milky Way and Zodiacal Light (LP: #1656067)
- Fixed memory leaks
- Fixed a few reports from the Clang static analyzer
- Fixed double clicks causing crashes (LP: #1656525)
- Fixed packaging QtOpenGL in Windows/macOS packages
- Fixed handling of log lines with a missing newline char
- Fixed a bad-value crash in ArchaeoLines plugin
- Fixed an invalid escape sequence in RemoteControl plugin
- Fixed bug in Search Tool (LP: #1655055)
- Fixed taking screenshots (now done via FBO - solution for QOpenGLWidget)
- Fixed a button in the ArchaeoLines plugin
- Fixed calculation and rendering of the CCD frame in the Oculars plugin
- Fixed a memory leak in the spheric mirror distorter, and removed the stencil buffer from the effect FBO (we don't need it for our rendering) (LP: #1661375)
- Fixed tile-based render performance (always glClear all buffers at the start of the frame) (LP: #1661375)
- Fixed glClear alpha channel usage (glClear alpha to zero instead of one)
- Fixed Scenery3d cubemap rendering (restores rendering)
- Fixed crash, when location 'Sierra Nevada Observatory, Spain' is chosen (LP: #1662113)
- Fixed NetBSD and OpenBSD build by linking glues with Qt5::Gui.
- Fixed size for few DSO textures (NPOT textures for ancient GPUs) (LP: #1641773)
- Fixed crash when a star catalog is missing from the middle of the list (e.g. stars4 is missing and we try zooming) (LP: #1653315)
- Fixed crash when configuring the color of the generic DSO marker (LP: #1667787)
- Fixed date limit in AstroCalc tool (set a minimum possible date limit for the range of dates for QDateTimeEdit widgets) (LP: #1660208)
- Fixed escaping of symbols for Simbad Lookup (LP: #1669088)
- Fixed DE431 mismatch (LP: #1606583)
- Fixed overbright Sun when zooming in (LP: #1421173)
- Fixed absolute magnitude calculation of the planets, their moons, and the Pluto-Charon system (LP: #1664143)
- Fixed a long-standing bug concerning centering small-fov views in equatorial mount mode (LP: #1484976)
- Fixed bright objects covered by the landscape horizon influencing sky luminance/eye adaptation (LP: #1138533)
- Fixed atmospheric brightening by Earth's moon when the location is on another planet (LP: #1673283)
- Fixed application of DE43x DeltaT when the date is outside the range of the selected DE43x.
- Fixed Night mode issue for binocular mode of Oculars Plugin (LP: #1673187)
- Fixed altitude computation for landscapes
- Fixed a small error where the Zodiacal light was aligned with Ecliptic J2000, not the Ecliptic of date (LP: #1628765)
- Fixed Stellarium crashing in debug mode on OS X with Qt 5.7+, by clearing the GL error state after using QPainter (LP: #1628072)
- Fixed a problem with Qt timezone handling, when some IANA timezones have been renamed compared to entries in our location database (LP: #1662132)
- Allowed SkyImages in all reference frames and deprecated the old explicit core.loadSkyImageAltAz() type commands
- Updated rules for usage of custom time zones (a custom time zone may now be used at all times) (LP: #1652763)
- Updated shortcuts
- Updated rules for source package builder
- Updated URL of DSS collection
- Updated OS detection
- Updated deployment rules for Windows installer
- Updated script for building Stellarium User Guide
- Updated GUI for setting coefficients of the custom DeltaT equation
- Updated list of contributors
- Updated ArchaeoLines and Gridlines options to RemoteControl pages
- Updated Tongan sky culture
- Updated catalog of DSO
- Updated common names of DSO
- Updated star names (LP: #1664671)
- Updated Solar System Screen Saver.
- Updated Oculars plugin
- Updated Satellites plugin (start looking for Iridium flares 15 seconds before the flash)
- Updated plist data for macOS
- Updated textures of minor planets
- Updated default color scheme
- Removed code for automatic tuning star scales of the view through ocular/CCD (LP: #1656940)
- Code clean-up
- Prevent an unnecessary StelProperty change
- Changed the way the OpenGL format is set once again

Animate with Krita is out!

Timothée Giet has finished his latest training course for Krita. In three parts, Timothée introduces the all-new animation feature in Krita. Animation was introduced in Krita 3.0 last year and is already used by people all over the world, for fun and for real work.

Animation in Krita is meant to recreate the glory days of hand-drawn animation, with a modern twist. It’s not a Flash substitute, but it allows you to pair Krita’s awesome drawing capabilities with a frame-based animation approach.

In this training course, Timothée first gives us a tour of the new animation features and panels in Krita. The second part introduces the foundation of traditional animation. The final part takes you through the production of an entire short clip, from sketching to exporting. All necessary production files are included, too!

Animate with Krita is available as a digital download and costs just €14,95 (excluding VAT in the European Union). English and French subtitles are included, as well as all project files.


Get Animate with Krita

March 20, 2017

Blender Constraints

Last time I wrote about artistic constraints being useful for staying focused and pushing yourself to the max. In the near future I plan to dive into the new constraint-based layout of gtk4, Emeus. Today I’ll briefly touch on another type of constraint, the Blender object constraint!

So what are they and how are they useful in the context of a GNOME designer? We make quite a few prototypes, and one of the ways to decide whether a behavior is clear and comprehensible is motion design, particularly transitions. And while we do not use tools directly linked to our stack, it helps to build simple rigs to lower the manual labor required to make often-similar motion designs, and to limit the number of mistakes that can be made. Even simple animations usually consist of many keyframes (defined, non-computed states in time). Defining relationships between objects and creating setups, “rigs”, is a way to create a sort of working model of the object we are trying to mock up.

Blender Constraints

Constraints in Blender allow you to define certain behaviors of objects in relation to others. They let you limit the movement of an object to specific ranges (a scrollbar that cannot be dragged outside of its gutter), or convert a certain motion of one object into a different transformation of another (a slider adjusting the horizon of an image, i.e. rotating it).

The simplest method of defining a relation is through a hierarchy. An object can become a parent of another, and thus all children will inherit the movements/transforms of the parent. However, there are cases — like interactions of a cursor with other objects — where this relationship is only temporary. Again, constraints help here, in particular the copy location constraint. This is because you can define the influence strength of a constraint. Like everything in Blender, this influence can also be keyframed, so at one point an object can follow the cursor and later disengage from this tight relationship. By the way, if you ever thought you could keyframe two animations by hand so that they do not slide, think again.

Inverse transform in Blender

The GIF screencasts have been created using Peek, which is available to download as a flatpak.

Peek, a GIF screencasting app.

Everyone Does IT (and some Raspberry Pi gotchas)

I've been quiet for a while, partly because I've been busy preparing for a booth at the upcoming Everyone Does IT event at PEEC, organized by LANL.

In addition to booths from quite a few LANL and community groups, they'll show the movie "CODE: Debugging the Gender Gap" in the planetarium. I checked out the movie last week (our library has it) and it's a good overview of the problem of diversity, and especially of the problems women face in programming jobs.

I'll be at the Los Alamos Makers/Coder Dojo booth, where we'll be showing an assortment of Raspberry Pi and Arduino based projects. We've asked the Coder Dojo kids to come by and show off some of their projects. I'll have my RPi crittercam there (such as it is) as well as another Pi running motioneyeos, for comparison. (Motioneyeos turned out to be remarkably difficult to install and configure, and doesn't seem to do any better than my lightweight scripts at detecting motion without false positives. But it does offer streaming video, which might be nice for a booth.) I'll also be demonstrating cellular automata and the Game of Life (especially since the CODE movie uses Life as a background in quite a few scenes), music playing in Python, a couple of Arduino-driven NeoPixel LED light strings, and possibly an arm-waving penguin I built a few years ago for GetSET, if I can get it working again: the servos aren't behaving reliably, but I'm not sure yet whether it's a problem with the servos and their wiring or a power supply problem.

The music playing script turned up an interesting Raspberry Pi problem. The Pi has a headphone output, and initially when I plugged a powered speaker into it, the program worked fine. But then later, it didn't. After much debugging, it turned out that the difference was that I'd made myself a user so I could have my normal shell environment. I'd added my user to the audio group and all the other groups the default "pi" user is in, but the Pi's pulseaudio is set up to allow audio only from users root and pi, and it ignores groups. Nobody seems to have found a way around that, but sudo apt-get purge pulseaudio solved the problem nicely.

I also hit a minor snag attempting to upgrade some of my older Raspbian installs: lightdm can't upgrade itself (Errors were encountered while processing: lightdm). Lots of people on the web have hit this, and nobody has found a way around it; the only solution seems to be to abandon the old installation and download a new Raspbian image.

But I think I have all my Raspbian cards installed and working now; pulseaudio is gone, music plays, the Arduino light shows run. Now to play around with servo power supplies and see if I can get my penguin's arms waving again when someone steps in front of him. Should be fun, and I can't wait to see the demos the other booths will have.

If you're in northern New Mexico, come by Everyone Does IT this Tuesday night! It's 5:30-7:30 at PEEC, the Los Alamos Nature Center, and everybody's welcome.

WebKitGTK+ 2.16

The Igalia WebKit team is happy to announce WebKitGTK+ 2.16. This new release drastically improves the memory consumption, adds new API as required by applications, includes new debugging tools, and of course fixes a lot of bugs.

Memory consumption

After WebKitGTK+ 2.14 was released, several Epiphany users started to complain about high memory usage of WebKitGTK+ when Epiphany had a lot of tabs open. As we already explained in a previous post, this was because of the switch to the threaded compositor, which made hardware acceleration always enabled. To fix this, we decided to make hardware acceleration optional again, enabled only when websites require it, but still using the threaded compositor. This is by far the biggest improvement in memory consumption, but not the only one. Even when in accelerated compositing mode, we managed to reduce the memory required by GL contexts when using GLX, by using OpenGL version 3.2 (core profile) if available. In Mesa-based drivers that means the software rasterizer fallback is never required, so the context doesn’t need to create the software rasterization part. And finally, an important bug was fixed in the JavaScript garbage collector timers that prevented garbage collection from happening in some cases.

CSS Grid Layout

Yes, the future is here, and it is now available by default in all WebKitGTK+ based browsers and web applications. This is the result of several years of great work by the Igalia web platform team in collaboration with Bloomberg. If you are interested, you have all the details in Manuel’s blog.

New API

The WebKitGTK+ API is quite complete now, but there are always new things required by our users.

Hardware acceleration policy

Hardware acceleration is now enabled on demand again: when a website requires accelerated compositing, hardware acceleration is enabled automatically. WebKitGTK+ has environment variables to change this behavior: WEBKIT_DISABLE_COMPOSITING_MODE to never enable hardware acceleration, and WEBKIT_FORCE_COMPOSITING_MODE to always enable it. However, those variables were never meant to be used by applications, only by developers to test the different code paths. The main problem with those variables is that they apply to all web views of the application. Not all WebKitGTK+ applications are web browsers, so it can happen that an application knows it will never need hardware acceleration for a particular web view (like, for example, the evolution composer), while other applications, especially in the embedded world, always want hardware acceleration enabled and don’t want to waste time and resources on the switch between modes. For those cases a new WebKitSetting, hardware-acceleration-policy, has been added. We encourage everybody to use this setting instead of the environment variables when upgrading to WebKitGTK+ 2.16.

Network proxy settings

Since the switch to WebKit2, where the SoupSession is no longer available from the API, it hasn’t been possible to change the network proxy settings from the API. WebKitGTK+ has always used the default proxy resolver when creating the soup context, and that just works for most of our users. But there are some corner cases in which applications that don’t run under a GNOME environment want to provide their own proxy settings instead of using the proxy environment variables. For those cases WebKitGTK+ 2.16 includes a new UI process API to configure all proxy settings available in GProxyResolver API.

Private browsing

WebKitGTK+ has always had a WebKitSetting to enable or disable the private browsing mode, but it has never worked really well. For that reason, applications like Epiphany have always implemented their own private browsing mode, just by using a different profile directory in tmp to write all persistent data. This approach has several issues; for example, if the UI process crashes, the profile directory is leaked in tmp with all the personal data there. WebKitGTK+ 2.16 adds a new API that allows you to create ephemeral web views which never write any persistent data to disk. It’s possible to create ephemeral web views individually, or to create ephemeral web contexts where all associated web views are ephemeral automatically.

Website data

WebKitWebsiteDataManager was added in 2.10 to configure the default paths on which website data should be stored for a web context. In WebKitGTK+ 2.16 the API has been expanded to include methods to retrieve and remove the website data stored on the client side. Not only persistent data like HTTP disk cache, cookies or databases, but also non-persistent data like the memory cache and session cookies. This API is already used by Epiphany to implement the new personal data dialog.

Dynamically added forms

Web browsers normally implement the remember-passwords functionality by searching the DOM tree for authentication form fields when the document-loaded signal is emitted. However, some websites add the authentication form fields dynamically after the document has been loaded. In those cases web browsers couldn’t find any form fields to autocomplete. In WebKitGTK+ 2.16 the web extensions API includes a new signal to notify when new forms are added to the DOM. Applications can connect to it, instead of document-loaded, to start searching for authentication form fields.

Custom print settings

The GTK+ print dialog allows the user to add a new tab embedding a custom widget, so that applications can include their own print settings UI. Evolution used to do this, but the functionality was lost with the switch to WebKit2. In WebKitGTK+ 2.16 a similar API to the GTK+ one has been added to recover that functionality in evolution.

Notification improvements

Applications can now set the initial notification permissions on the web context to avoid having to ask the user every time. It’s also possible to get the tag identifier of a WebKitNotification.

Debugging tools

Two new debugging tools are now available in WebKitGTK+ 2.16: the memory sampler and the resource usage overlay.

Memory sampler

This tool allows you to monitor the memory consumption of the WebKit processes. It can be enabled by defining the environment variable WEBKIT_SAMPLE_MEMORY. When enabled, the UI process and all web processes automatically take samples of memory usage every second. For every sample, a detailed report of the memory used by the process is generated and written to a file in the temp directory.

$ WEBKIT_SAMPLE_MEMORY=1 MiniBrowser 
Started memory sampler for process MiniBrowser 32499; Sampler log file stored at: /tmp/MiniBrowser7ff2246e-406e-4798-bc83-6e525987aace
Started memory sampler for process WebKitWebProces 32512; Sampler log file stored at: /tmp/WebKitWebProces93a10a0f-84bb-4e3c-b257-44528eb8f036

The files contain a list of sample reports like this one:

Timestamp                          1490004807
Total Program Bytes                1960214528
Resident Set Bytes                 84127744
Resident Shared Bytes              68661248
Text Bytes                         4096
Library Bytes                      0
Data + Stack Bytes                 87068672
Dirty Bytes                        0
Fast Malloc In Use                 86466560
Fast Malloc Committed Memory       86466560
JavaScript Heap In Use             0
JavaScript Heap Committed Memory   49152
JavaScript Stack Bytes             2472
JavaScript JIT Bytes               8192
Total Memory In Use                86477224
Total Committed Memory             86526376
System Total Bytes                 16729788416
Available Bytes                    5788946432
Shared Bytes                       1037447168
Buffer Bytes                       844214272
Total Swap Bytes                   1996484608
Available Swap Bytes               1991532544

Resource usage overlay

The resource usage overlay is only available on Linux systems when WebKitGTK+ is built with ENABLE_DEVELOPER_MODE. It shows an overlay with information about the resources currently in use by the web process, such as CPU usage, total memory consumption, JavaScript memory, and JavaScript garbage collector timers information. The overlay can be shown/hidden by pressing CTRL+Shift+G.

We plan to add more information to the overlay in the future like memory cache status.

“Hobby harder; it’ll stunt!” RC-car T-shirt design

Backstory

In 2016, RC-car company Arrma released the Outcast, calling it a stunt truck. That label led to some joking around in the UltimateRC forum. One member had trouble getting his Outcast to stunt. Utrak said “The stunt car didn’t stunt do hobby to it, it’ll stunt “. frystomer went: “If it still doesn’t stunt, hobby harder.” and finally stewwdog was like: “I now want a shirt that reads ‘Hobby harder, it’ll stunt’.” He wasn’t alone, so I created a first, very rough sketch.

Process

After a positive response, I decided to make it look like more of a stunt in another sketch:

Meanwhile, talk went to onesies and related practical considerations. Pink was also mentioned, thus I suddenly found myself confronted with a mental image that I just had to get out:

To find the right alignment and perspective, I created a Blender scene with just the text, plus boxes and cylinders to represent the car. The result served as a template for drawing the actual image in Krita, using my trusty Wacom Intuos tablet.

Result

[Image: the finished “Hobby harder; it’ll stunt!” design on white]

This design is now available for print on T-shirts, other apparel, stickers and a few other things, via Redbubble.



Revit and FreeCAD

The last couple of weeks I've had the opportunity to do some work in Revit. This was highly welcome, as my knowledge of that application was becoming rather obsolete. Taking a fresh, new look at it brought me a lot of new ideas, reinforced some ideas I already had, and changed a couple of old perceptions...

March 16, 2017

FreeCAD Arch development news - March 2017

I'll start adding a date to these "Arch development news": it will make it easy to look back, and motivate me to write them more regularly. A monthly report seems good, I think, no? Hi all, it's been a long time since I wrote, but that doesn't mean things have been quiet down here; it's actually quite the contrary. To begin...

March 13, 2017

Interview with Sonia Bennett

Could you tell us something about yourself?

Hi I’m Sonia Bennett! Born and brought up in India and now living in Nashville, Tennessee with my husband and 3 year old daughter.

Do you paint professionally, as a hobby artist, or both?

I have loved traditional drawing and painting since childhood, but became a graphic designer after college. Right now I am trying to improve my painting skills again =) I’m always open to commissions!

What genre(s) do you work in?

There isn’t a specific genre that I see myself in because exploring and looking at different art styles keeps my mind open to seeing things from different perspectives.

Whose work inspires you most — who are your role models as an artist?

From Vermeer, Fragonard, and American and French Impressionism to the artists in the Krita group on Facebook… there are a lot of artists, both classical and contemporary, that inspire me each day. I come from a very creative family; my father was the first person to inspire me to draw. He used to draw horses for me and I would try to copy them. He is a very talented violinist and can sing a wide vocal range. My mother loved dancing, acted and directed theater, can sew almost anything and is a wonderful cook! They have always inspired and encouraged me to be creative. If it wasn’t for their encouragement, and God opening doors for me to pursue art as a way to glorify Him… I would be stuck doing something painfully uncreative.

How and when did you get to try digital painting for the first time?

Digital painting was a mystery to me until the end of 2015 (yes, I may have lived under a rock until then!). I had seen digital artwork online but didn’t realise HOW exactly it was created. I asked many dumb questions, I’m sure, to finally figure out which graphics tablet I needed to get. When I finally got one as a Christmas present in 2015 and tried it out with Krita for the very first time, I was blown away by how easy it was to paint with Krita. After that, I couldn’t stop painting!

What makes you choose digital over traditional painting?

Honestly, if I had the space and could leave my messy paint equipment undisturbed (impossible with a toddler toddling at high speed) I would keep painting the traditional way, but I enjoy digital painting because I can paint without the mess and drying time for oil paint. And I can save my work and come back the next day and not see suspicious little hand prints on the canvas. =D

How did you find out about Krita?

I used Adobe software for a long time, but just for photo and vector layout and design, and that’s about it. It wasn’t until I changed computers that I realized the older software didn’t work on my updated computer anymore. And I didn’t want to start ‘renting’ software that wasn’t available to own directly anymore. So I searched for free painting software and somehow landed on David Revoy’s YouTube channel, and it was the best introduction to Krita. I didn’t need to keep looking after that!

What was your first impression?

I think my family may have heard me express my excitement rather loudly several times throughout the first day! I still say “I love Krita!”

What do you love about Krita?

The stabilizer, the different assistants, the brush engine, so many blending modes, the color to alpha filter and the different color selectors are especially cool. Krita is much more advanced this way. I also love the fact that it is made available to everyone for free. Not everyone can afford hundreds of dollars to create art. Artists from all walks of life can build up their portfolios and have a great opportunity to showcase their talent thanks to the wonderful people behind Krita.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I struggle a bit with the lag when I paint large projects, and it would be nice to have a way to save for web and a better text tool. But with the tremendous advancements that Krita has already made in such a short time, I know that all these improvements and more will be made… one day Krita will be on every artist’s computer.

What sets Krita apart from the other tools that you use?

It has such a professional feel and look to it. It’s unlike any other free software that exists today. And it’s only getting better. It is extremely user friendly, the intelligent design of the interface makes it so easy to understand and get used to. Right away, when you download it and start painting, you know this has been designed by people who know what artists like.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Even though the original is not mine, the practice painting of Fragonard’s The Reader is my favorite, because it was the first real painting I made in Krita that showed me I could still paint, even after almost a decade of not picking up a brush. I had painted a couple of small paintings before this, but they didn’t really challenge me. Trying to replicate a master’s painting is really good training.

What techniques and brushes did you use in it?

I used David Revoy’s brushes and Ramon Miranda’s brushes and just tried to replicate the smoky, textured feel of the original work.

Where can people see more of your work?

I post my work on several social media sites. My main website is soniabennett.net and I am also on:
https://www.facebook.com/ArtfulButterfly
https://www.artstation.com/artist/soniabennett
https://www.instagram.com/theartfulbutterfly/
http://graphicker.me/application/works/index/5499
I’m also in a Krita group on Facebook that has more than 2000 members (and growing) and has wonderful artists that help and encourage each other (with the occasional joking around!)

Anything else you’d like to share?

I want to thank the people who worked so hard to create Krita and keep making it better and better. Thank you for this opportunity to show my work here and I appreciate all the encouragement and support I have received from my friends and family. I hope my art can encourage more people to paint with Krita and develop their talent and creativity. If there is any way I can contribute to making Krita better, I would be most happy to help!

 

March 10, 2017

At last! A roadrunner!

We live in what seems like wonderful roadrunner territory. For the three years we've lived here, we've hoped to see a roadrunner, and have seen them a few times at neighbors' places, but never in our own yard.

Until this morning. Dave happened to be looking out the window at just the right time, and spotted it in the garden. I grabbed the camera, and we watched it as it came out from behind a bush and went into stalk mode.

[Roadrunner stalking]

And it caught something!

[close-up, Roadrunner with fence lizard] We could see something large in its bill as it triumphantly perched on the edge of the garden wall, before hopping off and making a beeline for a nearby juniper thicket.

It wasn't until I uploaded the photo that I discovered what it had caught: a fence lizard. Our lizards only started to come out of hibernation about a week ago, so the roadrunner picked the perfect time to show up.

I hope our roadrunner decides this is a good place to hang around.

March 09, 2017

Time Claustrophobia

My friend and last-blogger-standing, Peter Rukavina, emailed me this week to ask about a concept he remembered from my blog, but couldn’t find.

He called the concept Time Claustrophobia. I immediately knew what he meant. We must have discussed it in person, because there doesn’t seem to be anything about it here on my blog. Let’s correct that.

Time Claustrophobia is the feeling of anxiety cast by an impending appointment over the free time that precedes it.

For example, when I was younger I sometimes had shift-work that started at 4pm. The entire day, though free and open until 4pm, felt constrained by the looming commitment near the end of the day.

March 07, 2017

You Can Hear The Difference Between Hot and Cold Water

I’ve always thought I could hear the difference between hot and cold water pouring, but wasn’t sure if I’d be able to prove it in a blind test. Steve Mould confirms that You Can Hear The Difference Between Hot and Cold Water on the great Tom Scott YouTube channel.

March 05, 2017

The Curious Incident of the Junco in the Night-Time

Dave called from an upstairs bedroom. "You'll probably want to see this."

He had gone up after dinner to get something, turned the light on, and been surprised by an agitated junco, chirping and fluttering on the sill outside the window. It evidently was trying to fly through the window and into the room. Occasionally it would flutter backward to the balcony rail, but no further.

There's a piñon tree whose branches extend to within a few feet of the balcony, but the junco ignored the tree and seemed bent on getting inside the room.

As we watched, hoping the bird would calm down, it instead became increasingly desperate and stressed. I remembered how, a few months earlier, I opened the door to a deck at night and surprised a large bird, maybe a dove, that had been roosting there under the eaves. The bird startled and flew off in a panic toward the nearest tree. I had wondered what happened to it -- whether it had managed to find a perch in the thick of a tree in the dark of night. (Unlike San Jose, White Rock gets very dark at night.)

And that thought solved the problem of our agitated junco. "Turn the porch light on", I suggested. Dave flipped a switch, and the porch light over the deck illuminated not only the deck where the junco was, but the nearest branches of the nearby piñon.

Sure enough, now that it could see the branches of the tree, the junco immediately turned around and flew to a safe perch. We turned the porch light back off, and we heard no more from our nocturnal junco.

Using Krita Without a Keyboard

Recently we added a custom hotkey file to Krita to work with a hotkey application called Tablet Pro. Tablet Pro allows you to use your tablet without a keyboard, replacing keyboard shortcuts with custom onscreen hotkeys. For our Krita users, our goal has been to give digital artists the power to create at a professional level without a huge expense, and Tablet Pro is working with us on that goal. We were happy to work together on this and are excited to share the results. The hotkeys they provide will give you a very similar experience to a Wacom Cintiq with ExpressKeys.

To let you try using your tablet with custom touch hotkeys, they’ve created a custom “Artist Pad” built around Krita’s keyboard shortcuts. We have also added a profile and hotkey preset to Krita, built to match those shortcut settings. You can download it here.

They give a 14-day free trial (to make sure it works on your tablet) and the Artist Pad costs $9.99. I’ve talked with one of the owners, Justice, and he is happy to help with setup and answer any questions you might have. His email is justice@tabletpro.net.

Their website is www.tabletpro.net.

March 01, 2017

Tue 2017/Feb/28

  • Porting librsvg's tree of nodes to Rust

    Earlier I wrote about how librsvg exports reference-counted objects from Rust to C. That was the preamble to this post, in which I'll write about how I ported librsvg's tree of nodes from C to Rust.

    There is a lot of talk online about writing recursive data structures in Rust, as there are various approaches to representing the links between your structure's nodes. You can use shared references and lifetimes. You can use an arena allocator and indices or other kinds of identifiers into that arena. You can use reference-counting. Rust really doesn't like C's approach of pointers-to-everything-everywhere; you must be very explicit about who owns what and for how long things live.
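
    For instance, here is a minimal, self-contained sketch of the reference-counting flavor (the names here are made up for illustration; they are not librsvg's). The parent link is weak so that a parent and its children do not keep each other alive in a reference cycle:

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};
    
    struct TreeNode {
        parent:   Option<Weak<TreeNode>>,     // weak reference: avoids parent/child cycles
        children: RefCell<Vec<Rc<TreeNode>>>  // strong references: children live as long as the parent
    }
    
    fn add_child (parent: &Rc<TreeNode>, child: &Rc<TreeNode>) {
        parent.children.borrow_mut ().push (child.clone ());
    }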

    Librsvg keeps an N-ary tree of Node structures, each of which corresponds to an XML element in the SVG file. Nodes can be of various kinds: "normal" shapes that know how to draw themselves like Bézier paths and ellipses; "reference-only" objects like markers, which get referenced from paths to draw arrowheads and the like; "paint server" objects like gradients and fill patterns which also don't draw themselves, but only get referenced for a fill or stroke operation on other shapes; filter objects like Gaussian blurs which get referenced after rendering a sub-group of objects.

    Even though these objects are of different kinds, they have some things in common. They can be nested — a marker contains other shapes or sub-groups of shapes so you can draw multi-part arrowheads, for example; or a filter can be a Gaussian blur of the alpha channel of the referencing shape, followed by a light-source operation, to create an embossed effect on top of a shape. Objects can also be referenced by name from different places: you can declare a gradient and give it a name like #mygradient, and then use it for the fill or stroke of other shapes without having to declare it again.

    Also, all the objects maintain CSS state. This state gets inherited from an object's ancestors in the N-ary tree, and finally gets overridden with whatever specific CSS properties the object has for itself. You could declare a Group (a <g> element) with a black 4 pixel-wide outline, and then put into it a bunch of other shapes, each with a fill of a different color. Those shapes will inherit the black outline, but use their own fill.

    Librsvg's representation of tree nodes, in C

    The old C code had a simple representation of nodes. There is an RsvgNodeType enum which just identifies the type of each node, and an RsvgNode structure for each node in the tree. Each RsvgNode also keeps a small vtable of methods.

    typedef enum {
        RSVG_NODE_TYPE_INVALID = 0,
    
        RSVG_NODE_TYPE_CHARS,
        RSVG_NODE_TYPE_CIRCLE,
        RSVG_NODE_TYPE_CLIP_PATH,
        RSVG_NODE_TYPE_COMPONENT_TRANFER_FUNCTION,
        ...
    } RsvgNodeType;
    
    
    typedef struct {
        void (*free) (RsvgNode *self);
        void (*draw) (RsvgNode *self, RsvgDrawingCtx *draw_ctx, int dominate);
        void (*set_atts) (RsvgNode *self, RsvgHandle *handle, RsvgPropertyBag *pbag);
    } RsvgNodeVtable;
    
    typedef struct _RsvgNode RsvgNode;
    
    typedef struct _RsvgNode {
        RsvgState      *state;
        RsvgNode       *parent;
        GPtrArray      *children;
        RsvgNodeType    type;
        RsvgNodeVtable *vtable;
    };
    	    

    What about memory management? A node keeps an array of pointers to its children, and also a pointer to its parent (which of course can be NULL for the root node). The master RsvgHandle object, which is what the caller gets from the public API, maintains a big array of pointers to all the nodes, in addition to a pointer to the root node of the tree. Nodes are created while reading an SVG file, and they don't get freed until the toplevel RsvgHandle gets freed. So, it is okay to keep shared references to nodes and not worry about memory management within the tree: the RsvgHandle will free all the nodes by itself when it is done.

    Librsvg's representation of tree nodes, in Rust

    In principle I could have done something similar in Rust: have the master handle object keep an array to all the nodes, and make it responsible for their memory management. However, when I started porting that code, I wasn't very familiar with how Rust handles lifetimes and shared references to objects in the heap. The syntax is logical once you understand things, but I didn't back then.

    So, I chose to use reference-counted structures instead. That gives me a little peace of mind, since for some time I'll need to keep references to Rust objects from the C code, and I am already comfortable with writing C code that uses reference counting. Once everything is ported to Rust and the C code no longer has references to Rust objects, I can probably move away from refcounts to something more efficient.

    I needed a way to hook the existing C implementations of nodes into Rust, so that I can port them gradually: nodes must be implementable both in C and in Rust while I port them one by one. We'll see how this is done.

    Here is the first approach to the C code above. We have an enum that matches RsvgNodeType from C, a trait that defines methods on nodes, and the Node struct itself.

    /* Keep this in sync with rsvg-private.h:RsvgNodeType */
    #[repr(C)]
    #[derive(Debug, Copy, Clone, PartialEq)]
    pub enum NodeType {
        Invalid = 0,
    
        Chars,
        Circle,
        ClipPath,
        ComponentTransferFunction,
        ...
    }
    
    /* A *const RsvgCNodeImpl is just an opaque pointer to the C code's
     * struct for a particular node type.
     */
    pub enum RsvgCNodeImpl {}
    
    pub type RsvgNode = Rc<Node>;
    
    pub trait NodeTrait: Downcast {
        fn set_atts   (&self, node: &RsvgNode, handle: *const RsvgHandle, pbag: *const RsvgPropertyBag);
        fn draw       (&self, node: &RsvgNode, draw_ctx: *const RsvgDrawingCtx, dominate: i32);
        fn get_c_impl (&self) -> *const RsvgCNodeImpl;
    }
    
    impl_downcast! (NodeTrait);
    
    pub struct Node {
        node_type: NodeType,
        parent:    Option<Weak<Node>>,       // optional; weak ref to parent
        children:  RefCell<Vec<Rc<Node>>>,   // strong references to children
        state:     *mut RsvgState,
        node_impl: Box<NodeTrait>
    }
    	    

    The Node struct is analogous to the old C structure above. The parent field holds an optional weak reference to another node: it's weak to avoid circular reference counts, and it's optional because not all nodes have a parent. The children field is a vector of strong references to nodes; it is wrapped in a RefCell so that I can add children (i.e. mutate the vector) while the rest of the node remains immutable.

    RsvgState is a C struct that holds the CSS state for nodes. I haven't ported that code to Rust yet, so the state field is just a raw pointer to that C struct.

    Finally, there is a node_impl: Box<NodeTrait> field. This holds a reference to a boxed object which implements NodeTrait. In effect, we are separating the "tree stuff" (the basic Node struct) from the "concrete node implementation stuff", and the Node struct has a reference inside it to the node type's concrete implementation.

    The vtable turns into a trait, more or less

    Let's look again at the old C code for a node's vtable:

    typedef struct {
        void (*free) (RsvgNode *self);
        void (*draw) (RsvgNode *self, RsvgDrawingCtx *draw_ctx, int dominate);
        void (*set_atts) (RsvgNode *self, RsvgHandle *handle, RsvgPropertyBag *pbag);
    } RsvgNodeVtable;
    	    

    The free method is responsible for freeing the node itself and all of its inner data; this is common practice in C vtables.

    The draw method gets called, well, to draw a node. It gets passed a drawing context, plus a magic dominate argument which we can ignore for now (it has to do with CSS cascading).

    Finally, the set_atts method is just a helper at construction time: after a node gets allocated, it is initialized from its XML attributes by the set_atts method. The pbag argument is just a dictionary of XML attributes for this node, represented as key-value pairs; the method pulls the key-value pairs out of the pbag and initializes its own fields from the values.

    The NodeTrait in Rust is similar, but has a few differences:

    pub trait NodeTrait: Downcast {
        fn set_atts (&self, node: &RsvgNode, handle: *const RsvgHandle, pbag: *const RsvgPropertyBag);
        fn draw (&self, node: &RsvgNode, draw_ctx: *const RsvgDrawingCtx, dominate: i32);
        fn get_c_impl (&self) -> *const RsvgCNodeImpl;
    }
    	    

    You'll note that there is no free method. Rust objects know how to free themselves and their fields automatically, so we don't need that method anymore. We will need a way to free the C data that corresponds to the C implementations of nodes — those are external resources not managed by Rust, so we need to tell it about them; see below.

    The set_atts and draw methods are similar to the C ones, but they also have an extra node argument. Read on.

    There is a new get_c_impl method. This is a temporary thing to accommodate C implementations of nodes; read on.

    Finally, what about the "NodeTrait: Downcast" in the first line? We'll get to it in the end.

    Separation of concerns?

    So, we have the basic Node struct, which forms the N-ary tree of SVG elements. We also have a NodeTrait with a few of methods that nodes must implement. The Node struct has a node_impl field, which holds a reference to an object in the heap which implements NodeTrait.

    I think I saw this pattern in the source code for Servo; I was looking at its representation of the DOM to see how to do an N-ary tree in Rust. I *think* it splits things in the same way; or maybe I'm misremembering and using the pattern from another tree-of-multiple-node-types implementation in Rust.

    How should things look from C?

    This is easy to answer: they should look exactly as they were before the conversion to Rust, or as close to that as possible, since I don't want to change all the C code at once!

    In the post about exposing reference-counted objects from Rust to C, we already saw the new rsvg_node_ref() and rsvg_node_unref() functions, which hand out pointers to boxed Rc<Node>.

    Previously I had made accessor functions for all of RsvgNode's fields, so the C code doesn't touch them directly. There are functions like rsvg_node_get_type(), rsvg_node_get_parent(), rsvg_node_foreach_child(), that the C code already uses. I want to keep them exactly the same, with only the necessary changes. For example, when the C code did not reference-count nodes, the implementation of rsvg_node_get_parent() simply returned the value of the node->parent field. The new implementation returns a strong reference to the parent (upgraded from the node's weak reference to its parent), and the caller is responsible for unref()ing it.
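
    To make that concrete, here is a sketch of how such an accessor can look. This is my illustration, not necessarily librsvg's exact code; it assumes a small get_parent() helper on Node that upgrades the weak reference, plus the box_node() helper shown later in this post and a "use std::ptr;" at the top of the module:

    impl Node {
        pub fn get_parent (&self) -> Option<RsvgNode> {
            match self.parent {
                None => None,
                Some (ref weak) => weak.upgrade ()   // None if the parent is already gone
            }
        }
    }
    
    /* The C caller gets a new strong reference and is
     * responsible for rsvg_node_unref()ing it.
     */
    #[no_mangle]
    pub extern fn rsvg_node_get_parent (raw_node: *const RsvgNode) -> *mut RsvgNode {
        assert! (!raw_node.is_null ());
        let node: &RsvgNode = unsafe { & *raw_node };
    
        match node.get_parent () {
            Some (parent) => box_node (parent),
            None          => ptr::null_mut ()
        }
    }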

    Rust implementation of Node

    Let's look at two "trivial" methods of Node:

    impl Node {
        ...
    
        pub fn get_type (&self) -> NodeType {
            self.node_type
        }
    
        pub fn get_state (&self) -> *mut RsvgState {
            self.state
        }
    
        ...
    }
    	    

    Nothing special there; just accessor functions for the node's fields. Given that Rust makes those fields immutable in the presence of shared references, I'm not 100% sure that I actually need those accessors. If it turns out that I don't, I'll remove them and let the code access the fields directly.

    Now, the method that adds a child to a Node:

        pub fn add_child (&self, child: &Rc<Node>) {
            self.children.borrow_mut ().push (child.clone ());
        }
    	    

    The children field is a RefCell<Vec<Rc<Node>>>. We ask to borrow it mutably with borrow_mut(), and then push a new item into the array. What we push is a new strong reference to the child, which we get with child.clone(). Think of this as "g_ptr_array_add (self->children, g_object_ref (child))".

    And now, two quirky methods that call into the node_impl:

    impl Node {
        ...
    
        pub fn set_atts (&self, node: &RsvgNode, handle: *const RsvgHandle, pbag: *const RsvgPropertyBag) {
            self.node_impl.set_atts (node, handle, pbag);
        }
    
        pub fn draw (&self, node: &RsvgNode, draw_ctx: *const RsvgDrawingCtx, dominate: i32) {
            self.node_impl.draw (node, draw_ctx, dominate);
        }
    
        ...
    }
    	    

    The &self argument is implicitly a &Node. But we also pass a node: &RsvgNode argument! Remember the type declaration for RsvgNode; it is just "pub type RsvgNode = Rc<Node>". What these prototypes mean is:

        pub fn set_atts (&self: reference to the Node, node: refcounter for the Node, ...) {
            ... call the actual implementation in self.node_impl ...
        }
    	    

    This is because of the following. In objects that implement NodeTrait, the actual implementations of set_atts() and draw() still need to call into C code for a few things. And the only view that the C code has into the Rust world is through pointers to RsvgNode, that is, pointers to the Rc<Node> — the refcounting wrapper for nodes. We need to be able to pass this refcounting wrapper to C from somewhere, but once we are down in the concrete implementations of the trait, we don't have the refcounts anymore. So, we pass them as arguments to the trait's methods.

    This may look strange; at first sight it may look as if you are passing self twice to a method call, but not really! The self argument is implicit in the method call, and the first node argument is something rather different: it is the refcounting wrapper around the node, not the node itself. I may be able to remove this strange argument once all the nodes are implemented in Rust and there is no interfacing with C code anymore.

    Accommodating C implementations of nodes

    Now we get to the part where Node and NodeTrait, both implemented in Rust, need to accommodate the existing C implementations of node types.

    Instead of implementing a node type in Rust (i.e. implementing NodeTrait for some struct), we will implement a Rust wrapper for node implementations in C; the wrapper itself implements NodeTrait. Here is the declaration of CNode:

    /* A *const RsvgCNodeImpl is just an opaque pointer to the C code's
     * struct for a particular node type.
     */
    pub enum RsvgCNodeImpl {}
    
    type CNodeSetAtts = unsafe extern "C" fn (node: *const RsvgNode, node_impl: *const RsvgCNodeImpl, handle: *const RsvgHandle, pbag: *const RsvgPropertyBag);
    type CNodeDraw = unsafe extern "C" fn (node: *const RsvgNode, node_impl: *const RsvgCNodeImpl, draw_ctx: *const RsvgDrawingCtx, dominate: i32);
    type CNodeFree = unsafe extern "C" fn (node_impl: *const RsvgCNodeImpl);
    
    struct CNode {
        c_node_impl: *const RsvgCNodeImpl,
    
        set_atts_fn: CNodeSetAtts,
        draw_fn:     CNodeDraw,
        free_fn:     CNodeFree,
    }
    	    

    This struct CNode has essentially a void* to the C struct that will hold a node's data, and three function pointers. These function pointers (set_atts_fn, draw_fn, free_fn) are very similar to the original vtable we had, and that we turned into a trait.

    We implement NodeTrait for this CNode wrapper as follows, by just calling the function pointers to the C functions:

    impl NodeTrait for CNode {
        fn set_atts (&self, node: &RsvgNode, handle: *const RsvgHandle, pbag: *const RsvgPropertyBag) {
            unsafe { (self.set_atts_fn) (node as *const RsvgNode, self.c_node_impl, handle, pbag); }
        }
    
        fn draw (&self, node: &RsvgNode, draw_ctx: *const RsvgDrawingCtx, dominate: i32) {
            unsafe { (self.draw_fn) (node as *const RsvgNode, self.c_node_impl, draw_ctx, dominate); }
        }
    
        ...
    }
    	    

    Maybe this will make it easier to understand why we need that "extra" node argument with the refcount: it is the actual first argument to the C functions, which don't get the luxury of a self parameter.

    And the free_fn()? Who frees the C implementation data? Rust's Drop trait, of course! When Rust decides to free CNode, it will see if it implements Drop. Our implementation thereof calls into the C code to free its own data:

    impl Drop for CNode {
        fn drop (&mut self) {
            unsafe { (self.free_fn) (self.c_node_impl); }
        }
    }
    	    

    What does it look like in memory?

    [Diagram: a Node, pointing to a CNode, which in turn points to an RsvgNodeEllipse]

    This is the basic layout. A Node gets created and put on the heap. Its node_impl points to a CNode in the heap (or to any other thing which implements NodeTrait, really). In turn, CNode is our wrapper for C implementations of SVG nodes; its c_node_impl field is a raw pointer to data on the C side — in this example, an RsvgNodeEllipse. We'll see shortly what that one looks like.

    So how does C create a node?

    I'm glad you asked! This is the rsvg_rust_cnode_new() function, which is implemented in Rust but exported to C. The C code uses it when it needs to create a new node.

    #[no_mangle]
    pub extern fn rsvg_rust_cnode_new (node_type:   NodeType,
                                       raw_parent:  *const RsvgNode,
                                       state:       *mut RsvgState,
                                       c_node_impl: *const RsvgCNodeImpl,
                                       set_atts_fn: CNodeSetAtts,
                                       draw_fn:     CNodeDraw,
                                       free_fn:     CNodeFree) -> *const RsvgNode {
        assert! (!state.is_null ());
        assert! (!c_node_impl.is_null ());
    
        let cnode = CNode {                                             // 1
            c_node_impl: c_node_impl,
            set_atts_fn: set_atts_fn,
            draw_fn:     draw_fn,
            free_fn:     free_fn
        };
    
        box_node (Rc::new (Node::new (node_type,                        // 2
                                      node_ptr_to_weak (raw_parent),    // 3
                                      state,
                                      Box::new (cnode))))               // put the CNode in the heap; pass it to the Node
    }
    	    

    1. We create a CNode structure and fill it in from the parameters that got passed to rsvg_rust_cnode_new().

    2. We create a new Node with Node::new(), wrap it with a reference counter with Rc::new(), box that ("put it in the heap"), and return a pointer to the box's contents (a sketch of the matching unref appears after these numbered points). The boxificator is just this; it's similar to what we used before:

    pub fn box_node (node: RsvgNode) -> *mut RsvgNode {
        Box::into_raw (Box::new (node))
    }
    	    

    3. We create a weak reference to the parent node. Here, raw_parent comes in as a pointer to a strong reference. To obtain a weak reference, we do this:

    pub fn node_ptr_to_weak (raw_parent: *const RsvgNode) -> Option<Weak<Node>> {
        if raw_parent.is_null () {
            None
        } else {
            let p: &RsvgNode = unsafe { & *raw_parent };
            Some (Rc::downgrade (&p.clone ()))               // 5
        }
    }
    	    

    5. Here, we take a strong reference to the parent with p.clone(). Then we turn it into a weak reference with Rc::downgrade(). We stuff that in an Option with the Some() — remember that not all nodes have a parent, and we represent this with an Option<Weak<Node>>.
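
    And here, as promised above, is a sketch of the matching unref. This is my reconstruction following the convention from the earlier post, not necessarily librsvg's exact code: taking the box back with Box::from_raw() and dropping it decrements the node's reference count.

    use std::ptr;
    
    #[no_mangle]
    pub extern fn rsvg_node_unref (raw_node: *mut RsvgNode) -> *mut RsvgNode {
        if !raw_node.is_null () {
            /* Take ownership of the boxed Rc<Node> back from C; dropping it
             * decrements the refcount, and frees the Node if it reaches zero.
             */
            let _ = unsafe { Box::from_raw (raw_node) };
        }
    
        ptr::null_mut ()   // so C callers can write "node = rsvg_node_unref (node);"
    }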

    Creating a C implementation of a node

    This is the C code for rsvg_new_group(), the function that creates nodes for SVG's <g> element.

    RsvgNode *
    rsvg_new_group (const char *element_name, RsvgNode *parent)
    {
        RsvgNodeGroup *group;
    
        group = g_new0 (RsvgNodeGroup, 1);
    
        /* ... fill in the group struct ... */
    
        return rsvg_rust_cnode_new (RSVG_NODE_TYPE_GROUP,
                                    parent,
                                    rsvg_state_new (),
                                    group,
                                    rsvg_group_set_atts,
                                    rsvg_group_draw,
                                    rsvg_group_free);
    }
    	    

    The resulting RsvgNode*, which from C's viewpoint is an opaque pointer to something on the Rust side — the boxed Rc<Node> — gets stored in a couple of places. It gets put in the toplevel RsvgHandle's array of all-the-nodes. It gets hooked, as a child, to its parent node. It may get referenced in other places as well, for example, in the dictionary of string-to-node for nodes that have an id="name" attribute. All those are strong references created with rsvg_node_ref().

    Getting the implementation structs from C

    Let's look again at the implementation of NodeTrait for CNode. This is one of the methods:

    impl NodeTrait for CNode {
        ...
    
        fn draw (&self, node: &RsvgNode, draw_ctx: *const RsvgDrawingCtx, dominate: i32) {
            unsafe { (self.draw_fn) (node as *const RsvgNode, self.c_node_impl, draw_ctx, dominate); }
        }
    
        ...
    }
    	    

    In self.draw_fn we have a function pointer to a C function. We call it, and we pass the self.c_node_impl. This gives the function access to its own implementation data.

    But what about cases where we need to access an object's data outside of the methods, and so we don't have that c_node_impl argument? If you are cringing because This Is Not Something That Is Done in OOP, well, you are right, but this is old code with impurities. Maybe once it is Rustified thoroughly, I'll have a chance to clean up those inconsistencies. Anyway, here is a helper function that the C code can call to get ahold of its c_node_impl:

    #[no_mangle]
    pub extern fn rsvg_rust_cnode_get_impl (raw_node: *const RsvgNode) -> *const RsvgCNodeImpl {
        assert! (!raw_node.is_null ());
        let node: &RsvgNode = unsafe { & *raw_node };
    
        node.get_c_impl ()
    }
    	    

    That get_c_impl() method is the temporary thing I mentioned above. It's one of the methods in NodeTrait, and of course CNode implements it like this:

    impl NodeTrait for CNode {
        ...
    
        fn get_c_impl (&self) -> *const RsvgCNodeImpl {
            self.c_node_impl
        }
    }
    	    

    You may be thinking that this is an epic hack: not only do we provide a method in the base trait to pull an obscure field from a particular implementation; we also return a raw pointer from it! And a) you are absolutely right, but b) it's a temporary hack, and it is about the easiest way I found to shove the goddamned C implementation around. It will be gone once all the node types are implemented in Rust.

    Now, from C's viewpoint, the return value of that rsvg_rust_cnode_get_impl() is just a void*, which needs to be cast to the actual struct we want. So, the proper way to use this function is to first assert that we have the right type:

    g_assert (rsvg_node_get_type (handle->priv->treebase) == RSVG_NODE_TYPE_SVG);
    RsvgNodeSvg *root = rsvg_rust_cnode_get_impl (handle->priv->treebase);
    	    

    This is no different from, nor any more perilous than, the usual downcasting one does when faking OOP with C. It's dangerous, sure, but we know how to deal with it.

    Creating a Rust implementation of a node

    Aaaah, the fun part! I am porting the SVG node types one by one to Rust. I'll show you the simple implementation of NodeLine, which is for SVG's <line> element.

    struct NodeLine {
        x1: Cell<RsvgLength>,
        y1: Cell<RsvgLength>,
        x2: Cell<RsvgLength>,
        y2: Cell<RsvgLength>
    }
    	    

    Just x1/y1/x2/y2 fields with our old friend RsvgLength, no problem. They are in Cells to make them mutable after the NodeLine is constructed and referenced all over the place.

    Now let's look at its three methods of NodeTrait.

    impl NodeTrait for NodeLine {
    
        fn set_atts (&self, _: &RsvgNode, _: *const RsvgHandle, pbag: *const RsvgPropertyBag) {
            self.x1.set (property_bag::lookup_length (pbag, "x1", LengthDir::Horizontal));
            self.y1.set (property_bag::lookup_length (pbag, "y1", LengthDir::Vertical));
            self.x2.set (property_bag::lookup_length (pbag, "x2", LengthDir::Horizontal));
            self.y2.set (property_bag::lookup_length (pbag, "y2", LengthDir::Vertical));
        }
    }
    	    

    The set_atts() method just sets the fields of the NodeLine to the corresponding values that it gets in the property bag. This pbag is a bag of key-value pairs taken from the XML attributes. That lookup_length() function looks for a specific key and parses its value into an RsvgLength. If the key is not available, or if parsing yields an error, the function returns RsvgLength::default() — the default value for lengths, which is zero pixels. This is where we "bump" into the C code that doesn't know how to propagate parsing errors yet. Internally, the length parser in Rust yields a proper Result value, with error information if parsing fails. Once all the code is on the Rust side of things, I'll start thinking about propagating errors to the toplevel RsvgHandle. For now, librsvg is a very permissive parser/renderer indeed.
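
    In rough Rust terms, lookup_length() behaves like the following sketch. The lookup_string() helper and the exact signatures are made up for illustration; only the collapse-to-default behavior is the point.

    // Sketch, not librsvg's actual code.  A missing attribute and a parse
    // error both collapse silently into RsvgLength::default(), i.e. zero pixels.
    fn lookup_length (pbag: *const RsvgPropertyBag, key: &str, dir: LengthDir) -> RsvgLength {
        match lookup_string (pbag, key) {             // hypothetical raw-value lookup
            Some (s) => RsvgLength::parse (&s, dir).unwrap_or (RsvgLength::default ()),
            None     => RsvgLength::default ()        // attribute not present
        }
    }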

    I'm starting to realize that set_atts() only gets called immediately after a new node is allocated. After that, I *think* that nodes don't mutate their internal fields. So, it may be possible to move the "pull stuff out of the pbag and set the fields" code from set_atts() to the actual constructors, remove the Cell wrappers around all the fields, and thus get immutable objects.

    Maybe it is that I'm actually going through all of librsvg's code, so I get to know it better; or maybe it is that porting it to Rust is actually clarifying my thinking on how the code works and how it ought to work. If all nodes are immutable after creation... and they are a recursive tree... which already does Porter-Duff compositing... which is associative... that sounds very parallelizable, doesn't it? (I would never dare to do it in C. Rust is making it feel quite feasible, without data races.)

    impl NodeTrait for NodeLine {
    
        fn draw (&self, node: &RsvgNode, draw_ctx: *const RsvgDrawingCtx, dominate: i32) {
            let x1 = self.x1.get ().normalize (draw_ctx);
            let y1 = self.y1.get ().normalize (draw_ctx);
            let x2 = self.x2.get ().normalize (draw_ctx);
            let y2 = self.y2.get ().normalize (draw_ctx);
    
            let mut builder = RsvgPathBuilder::new ();
            builder.move_to (x1, y1);
            builder.line_to (x2, y2);
    
            render_path_builder (&builder, draw_ctx, node.get_state (), dominate, true);
        }
    
    }
    	    

    The draw() method normalizes the x1/y1/x2/y2 values to the current viewport from the drawing context. Then it creates a path builder and feeds it commands to draw a line. It calls a helper function, render_path_builder(), which renders it using the appropriate CSS cascaded values, and which adds SVG markers like arrowheads if they were specified in the SVG file.

    Finally, our little hack for the benefit of the C code:

    impl NodeTrait for NodeLine {
    
        fn get_c_impl (&self) -> *const RsvgCNodeImpl {
            unreachable! ();
        }
    
    }
    	    

    In this purely-Rust implementation of NodeLine, there is no c_impl. So, this method asserts that nobody ever calls it. This is to guard myself against C code which may be trying to peek at an impl when there is no longer one.

    Downcasting to concrete types

    Remember rsvg_rust_cnode_get_impl(), which the C code can use to get its implementation data outside of the normal NodeTrait methods?

    Well, since I am doing a mostly straight port from C to Rust, i.e. I am not changing the code's structure yet, just changing languages — it turns out that sometimes the Rust code needs access to the Rust structs outside of the NodeTrait methods as well.

    I am using downcast-rs, a tiny crate that lets one do exactly this: go from a boxed trait object to a concrete type. Librsvg uses it like this:

    use downcast_rs::*;
    
    pub trait NodeTrait: Downcast {
        fn set_atts (...);
        ...
    }
    
    impl_downcast! (NodeTrait);
    
    impl Node {
        ...
    
        pub fn with_impl<T: NodeTrait, F: FnOnce (&T)> (&self, f: F) {
            if let Some (t) = (&self.node_impl).downcast_ref::<T> () {
                f (t);
            } else {
                panic! ("could not downcast");
            }
        }
    
        ...
    }
    	    

    The basic Node has a with_impl() method, which takes a lambda that accepts a concrete type that implements NodeTrait. The lambda gets called with your Rust-side impl, with the right type.

    Where the bloody hell is this used? Let's take markers as an example. Even though SVG markers are implemented as a node, they don't draw themselves: they can get referenced from paths/lines/etc. and there is special machinery to decide just where to draw them.

    So, in the implementation for markers, the draw() method is empty. The code that actually computes a marker's position uses this to render the marker:

    node.with_impl (|marker: &NodeMarker| marker.render (c_node, draw_ctx, xpos, ypos, computed_angle, line_width));
    	    

    That is, it calls node.with_impl() and passes it an anonymous function which calls marker.render() — a special function just for rendering markers.

    This is kind of double plus unclean, yes. I can think of a few solutions:

    • Instead of assuming that every node type is really a NodeTrait, make Node know about element implementations that can draw themselves normally and those that can't. Maybe Node can have an enum to discriminate that, and NodeTrait can lose the draw() method. Maybe there needs to be a Drawable trait which "normal" elements implement, and something else for markers and other elements which are only used by reference?

    • Instead of having a single dictionary of named nodes — those that have an id="name" attribute, which is later used to reference markers, filters, etc. — have separate dictionaries for every reference-able element type, as sketched below. One dict for markers, one dict for patterns, one dict for filters, and so on. The SVG spec already says, for example, that it is an error for a marker reference to point to an element that is not a marker — and so on for the other reference-able types. After doing a lookup by id in the dictionary-of-all-named-nodes, librsvg checks the type of the found Node. It could avoid that check if the dictionaries were separate, since each dictionary would only contain objects of a known type.
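
    Here is a rough sketch of that second idea, with hypothetical names; librsvg does not have this structure today:

    use std::collections::HashMap;
    use std::rc::Rc;

    // Hypothetical: one dictionary per reference-able element type.  A
    // successful lookup in `markers` already guarantees that the node is a
    // marker, so no type check is needed after the fact.
    struct Defs {
        markers:  HashMap<String, Rc<Node>>,
        patterns: HashMap<String, Rc<Node>>,
        filters:  HashMap<String, Rc<Node>>,
    }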

    Conclusion

    Apologies for the long post! Here's a summary:

    • Node objects are reference-counted. Librsvg uses them to build up the tree of nodes.

    • There is a NodeTrait for SVG element implementations. It has methods to set an element's attributes, and to draw the element.

    • SVG elements already implemented in Rust have no problems; they just implement NodeTrait and that's it.

    • For the rest of the SVG elements, which are still implemented in C, I have a CNode which implements NodeTrait. The CNode holds an opaque pointer to the implementation data on the C side, and function pointers to the C implementations of NodeTrait.

    • There is some temporary-but-clean hackery to let C code obtain its implementation pointers outside of the normal method implementations.

    • Downcasting to concrete types is a necessary evil, but it is the easiest way to transform this C code into Rust. It may be possible to eliminate it with some refactoring.

    The git history for the conversion of nodes into Rust is interesting, I hope; I've tried to write explanatory comments in the commits. If you want to read this history, start at this commit. Before it there is a lot of C-side refactoring to turn field accesses into accessor functions, for example. In that commit and a few that follow, there are some false starts as I learned how to represent recursive data structures in Rust. Then there is a lot of refactoring to make the existing C implementations all use rsvg_rust_cnode_new(), and to hook them to the Rust-side machinery. Finally, there are commits to actually port complete elements into Rust and eliminate the C implementations. Enjoy!

  • Addendum: how did I do the refactoring above?

    To move the librsvg code towards a state where I could do mostly mechanical transformations, from the old C-based RsvgNode structures into Rust-based Node and the CNode shim, I had to do a lot of refactoring.

    The first thing was to add accessor functions for all of a node's fields: create rsvg_node_get_type(), rsvg_node_get_parent(), etc., instead of accessing node->type and node->parent directly. This is so that, once the basic Node structure was moved to Rust, the C code would have a way to access its fields. I do not want to have parallel structs in C and in Rust with the same memory layouts, although Rust makes it possible to do things that way if you wish.

    I had to change the way in which new nodes are created, when they are parsed out of the SVG file. Instead of a big switch() statement, I parametrized the node creator functions based on the name of the SVG element being created.

    I moved the node destructors around, so that they really were only callable in a single way, instead of differently all over the place.

    Some of the changes caused ripples throughout the code, and I had to review things carefully. For this, I kept a to-do list in a text file. Here is part of that to-do list, with some explanations.

    * TODO Rust node:
    
    ** DONE RsvgNode - remove struct declaration from the C code; leave as
       an opaque type.
    
    ** DONE Audit calls to rsvg_drawing_ctx_acquire_node_of_type() -
       returns opaque RsvgNode*, not a castable thing.
    
    ** DONE Audit all casts; grep for "*)".  It turns out that once
       rsvg_rust_cnode_get_impl() was in place, all the casts disappeared
       from the code!
    
    ** DONE Look for casts to (RsvgNode *) - those are most likely wrong
       now.
    
    ** DONE rsvg_node_free() - remove; replace with unref().
    
    ** TODO pattern_new() - implemented in Rust, takes a RsvgNode argument
       from C - what to do with it?
    
    ** TODO rsvg_pattern_node_has_children() - move to Rust?
    
    ** DONE Audit calls to rsvg_node_get_parent() - unref when done.
    
    ** DONE rsvg_drawing_ctx_free() - unref the nodes in drawsub_stack.
    
    ** DONE rsvg_node_add_child() - use it when creating a new node.
    
    ** DONE Audit "node_a == node_b" as node references can't be compared
       anymore; use rsvg_node_is_same().
    	    

    (You may recognize this format as Emacs org-mode - it's awesome.)

    That to-do list was especially useful when the code was in the middle of being refactored: maybe it would compile, but it would definitely not be correct. I suggest that if you try to do a similar port/refactor, you keep a detailed to-do list of sweeping changes.

February 28, 2017

security things in Linux v4.10

Previously: v4.9.

Here’s a quick summary of some of the interesting security things in last week’s v4.10 release of the Linux kernel:

PAN emulation on arm64

Catalin Marinas introduced ARM64_SW_TTBR0_PAN, which is functionally the arm64 equivalent of arm’s CONFIG_CPU_SW_DOMAIN_PAN. While Privileged eXecute Never (PXN) has been available in ARM hardware for a while now, Privileged Access Never (PAN) will only be available in hardware once vendors start manufacturing ARMv8.1 or later CPUs. Right now, everything is still ARMv8.0, which left a bit of a gap in security flaw mitigations on ARM since CONFIG_CPU_SW_DOMAIN_PAN can only provide PAN coverage on ARMv7 systems, but nothing existed on ARMv8.0. This solves that problem and closes a common exploitation method for arm64 systems.

thread_info relocation on arm64

As done earlier for x86, Mark Rutland has moved thread_info off the kernel stack on arm64. With thread_info no longer on the stack, it’s more difficult for attackers to find it, which makes it harder to subvert the very sensitive addr_limit field.

linked list hardening

I added CONFIG_BUG_ON_DATA_CORRUPTION to restore the original CONFIG_DEBUG_LIST behavior that existed prior to v2.6.27 (9 years ago): if list metadata corruption is detected, the kernel refuses to perform the operation, rather than just WARNing and continuing with the corrupted operation anyway. Since linked list corruption (usually via heap overflows) is a common method for attackers to gain a write-what-where primitive, it’s important to stop the list add/del operation if the metadata is obviously corrupted.

seeding kernel RNG from UEFI

A problem for many architectures is finding a viable source of early boot entropy to initialize the kernel Random Number Generator. For x86, this is mainly solved with the RDRAND instruction. On ARM, however, the solutions continue to be very vendor-specific. As it turns out, UEFI is supposed to hide various vendor-specific things behind a common set of APIs. The EFI_RNG_PROTOCOL call is designed to provide entropy, but it can’t be called when the kernel is running. To get entropy into the kernel, Ard Biesheuvel created a UEFI config table (LINUX_EFI_RANDOM_SEED_TABLE_GUID) that is populated during the UEFI boot stub and fed into the kernel entropy pool during early boot.

arm64 W^X detection

As done earlier for x86, Laura Abbott implemented CONFIG_DEBUG_WX on arm64. Now any dangerous arm64 kernel memory protections will be loudly reported at boot time.

64-bit get_user() zeroing fix on arm

While the fix itself is pretty minor, I like that this bug was found through a combined improvement to the usercopy test code in lib/test_user_copy.c. Hoeun Ryu added zeroing-on-failure testing, and I expanded the get_user()/put_user() tests to include all sizes. Neither improvement alone would have found the ARM bug, but together they uncovered a typo in a corner case.

no-new-privs visible in /proc/$pid/status

This is a tiny change, but I like being able to introspect processes externally. Prior to this, I wasn’t able to trivially answer the question “is that process setting the no-new-privs flag?” To address this, I exposed the flag in /proc/$pid/status, as NoNewPrivs.

That’s all for now! Please let me know if you saw anything else you think needs to be called out. :) I’m already excited about the v4.11 merge window opening…

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

February 27, 2017

Interview with Guilherme Silveira Dias

Could you tell us something about yourself?

I flow between the frugal pal who can still find a creative way of making art even when the limitations get worse; And the bon vivant of me and my father at art supplies stores like they were literally our home sweet home. My current notebook is a shabby 32GB onboard flash-disk that barely allows me to draw in a screen sized canvas with Krita Gemini. Yes, I’m using a touch screen pen because my notebook ignores and treats my Intuos 2 as a mouse. So I’m having a much better experience with my tiny android phone with its drawing apps. One of those apps, Autodesk Sketchbook, under my circumstances now, is my only choice for 2D Digital Painting. Good stuff, bought it and am making comics with it!!! Hahahaha.

Do you paint professionally, as a hobby artist, or both?

I’m an artist drawing professionally since 1994, had my first watercolors exposition in 2001, india ink illustrations for an Irish literature book in 2007, and acrylics paintings expositions between 2008 and 2016.

What genre(s) do you work in?

Among other genres, as in visual storytelling, storyboarding and zines, I work in comedy (though making someone laugh is so hard they could totally mistake it for serious/not kidding.)

Whose work inspires you most — who are your role models as an artist?

The hard-work of Peleng, and so natural of Kim Jung Gi inspires me the most, though I wish Gi was ambidextrous — but the Italian designer Bruno Munari is my role model as an artist. Among many of Munari’s professional values, I’m talking about his separation between purely commercial artifacts or artifacts of complete accessibility.

How and when did you get to try digital painting for the first time?

Worst experience of my entire teenager all-nighters! Heheh. I didn’t know how to ask the right questions of the right people, and because my example of digital art came from a mag filled with some nice and detailed 2D Digital Illustrations, instead of understanding that all those cool DOOM game sprites were what I was supposed to draw, I tried to do the big illustrations, very similar to traditional, that I saw in the digital art magazine I had.

What makes you choose digital over traditional painting?

I don’t, it is binary, I can choose to paint either traditional or digital.

I’ve been working on my art in three very different ways: the first one aims for beauty, perfection, even if impossible to reach. The closest I’ve got was on two pieces, Find the Sacred in Humanity (the featured image) – traditional painting of singer Julie Wein, and a 2D Digital Portrait of a Civilian Baby. The second way is pure entertainment, that’s the one all done on Krita, where I give life to Character Design Model Sheets producing industry material, which then can be adapted and shared in games and movies as long as for non-commercial uses. And the third is Underground Zine Publishing (this one is under the WTFPL, a very permissive license for artistic works that offers a great degree of freedom).

How did you find out about Krita?

Fernando Michelotti told me I would work better on something totally developed artist-centered (instead of any open source image manipulation program or proprietary raster graphics editor).

What was your first impression?

“I know Kung-Fu”

What do you love about Krita?

My first impression only gets stronger.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Improvement: ideally what for more than a decade only IllustStudio (a software that you need to have an address in Japan to be eligible to pay monthly subscription) has, the surface coating feature: http://herionz.deviantart.com/art/Illuststudio-Surface-Coating-tool-tips-290615045, but at least the al.chemy.org Create > Pull Shapes. Annoys me: It’s ok that it isn’t the de facto standard for 2D digital painting on all Linux distros, but that fact annoys me. It is like a childhood without Metroid, which is my case.

I bought other softwares, one for comics and another for vector art. But for concept art I’ve chosen Krita, beyond any other competitor. And if Krita had all the features of http://al.chemy.org/ I wouldn’t need to work on my silhouettes with al.chemy. And if Krita Gemini was an app for android I’d pay any fair price, hell, I’d pay a monthly subscription. I love mobility and tiny A-6 paper books and sketchbooks, so an android phone with krita app would be one of my biggest dreams come true, especially when I get an iPad Pro I’ll probably receive mid 2017.

What sets Krita apart from the other tools that you use?

Let me stretch this a little bit. Imagine a world where almost more than half of the population is way too poor to pay not only for the de facto industry standard in raster graphics editing but anything, Then picture that somewhere nearby each group of people there is a very slow computer, and in one of the seldom hours the very slow Internet works they download Krita. Okay, we don’t need to go to that level under misery to understand Krita, see the possibility of way more artists, even paid professional. Simply imagine what happens when people whose only property is lifetime finally control the means of production. But that’s just text, very fantastic or a fantasy, because I don’t know if what I just wrote has any example in our societies. But anything close to what I wrote definitely sets Krita apart from other tools that I use.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?


“Catman” (face closeup). Because I did it all with only one brush, and only kept using two features: the X key to switch fore to background brush color and the hold Ctrl + click to sample colors, which makes the Block_brush preset kick ass.

Where can people see more of your work?

http://www.edli.work

Anything else you’d like to share?

I’d like to acknowledge my friend Fernando Michelotti, without whom I might never have heard of Krita, again, like my childhood without Metroid — thx NES emulators for fixing that childhood bug — and acknowledge Fernando for knowing so much about the Krita staff and always saying cool stuff about you guys. The digital world is very warm only when you’re not simply in front of a painting software screen but also feeling connected to a whole community that makes digital painting accessible almost to everyone.

 

Find us at SCaLE 15x



The Southern California Linux Expo (SCaLE) 15x is returning to the Pasadena Convention Center on March 2-5, 2017. SCaLE is one of the largest community-organized conferences in North America, with some 3,500 attendees last year.

SCaLE Logo

If you’re attending the conference this year, find me, @paperdigits, and let’s talk shop or grab a meal!

You can ping me on the forum, on twitter, or on Matrix/riot.im at @paperdigits:matrix.org. If meeting isn’t enough for you, I’ll have stickers!
Get yourself some stickers!

February 24, 2017

Coder Dojo: Kids Teaching Themselves Programming

We have a terrific new program going on at Los Alamos Makers: a weekly Coder Dojo for kids, 6-7 on Tuesday nights.

Coder Dojo is a worldwide movement, and our local dojo is based on their ideas. Kids work on programming projects to earn colored USB wristbelts, with the requirements for belts getting progressively harder. Volunteer mentors are on hand to help, but we're not lecturing or teaching, just coaching.

Despite not much advertising, word has gotten around and we typically have 5-7 kids on Dojo nights, enough that all the makerspace's Raspberry Pi workstations are filled and we sometimes have to scrounge for more machines for the kids who don't bring their own laptops.

A fun moment early on came when we had a mentor meeting, and Neil, our head organizer (who deserves most of the credit for making this program work so well), looked around and said "One thing that might be good at some point is to get more men involved." Sure enough -- he was the only man in the room! For whatever reason, most of the programmers who have gotten involved have been women. A refreshing change from the usual programming group. (Come to think of it, the PEEC web development team is three women. A girl could get a skewed idea of gender demographics, living here.) The kids who come to program are about 40% girls.

I wondered at the beginning how it would work, with no lectures or formal programs. Would the kids just sit passively, waiting to be spoon fed? How would they get concepts like loops and conditionals and functions without someone actively teaching them?

It wasn't a problem. A few kids have some prior programming practice, and they help the others. Kids as young as 9 with no previous programming experience walk in, sit down at a Raspberry Pi station, and after five minutes of being shown how to bring up a Python console and use Python's turtle graphics module to draw a line and turn a corner, they're happily typing away, experimenting and making Python draw great colorful shapes.

Python-turtle turns out to be a wonderful way for beginners to learn. It's easy to get started, it makes pretty pictures, and yet, since it's Python, it's not just training wheels: kids are using a real programming language from the start, and they can search the web and find lots of helpful examples when they're trying to figure out how to do something new (just like professional programmers do. :-)

Initially we set easy requirements for the first (white) belt: attend for three weeks, learn the names of other Dojo members. We didn't require any actual programming until the second (yellow) belt, which required writing a program with two of three elements: a conditional, a loop, a function.

That plan went out the window at the end of the first evening, when two kids had already fulfilled the yellow belt requirements ... even though they were still two weeks away from the attendance requirement for the white belt. One of them had never programmed before. We've since scrapped the attendance belt, and now the white belt has the conditional/loop/function requirement that used to be the yellow belt.

The program has been going for a bit over three months now. We've awarded lots of white belts and a handful of yellows (three new ones just this week). Although most of the kids are working in Python, there are also several playing music or running LED strips using Arduino/C++, writing games and web pages in Javascript, writing adventure games in Scratch, or just working through Khan Academy lectures.

When someone is ready for a belt, they present their program to everyone in the room and people ask questions about it: what does that line do? Which part of the program does that? How did you figure out that part? Then the mentors review the code over the next week, and they get the belt the following week.

For all but the first belt, helping newer members is a requirement, though I suspect even without that they'd be helping each other. Sit a first-timer next to someone who's typing away at a Python program and watch the magic happen. Sometimes it feels almost superfluous being a mentor. We chat with the kids and each other, work on our own projects, shoulder-surf, and wait for someone to ask for help with harder problems.

Overall, a terrific program, and our only problems now are getting funding for more belts and more workstations as the word spreads and our Dojo nights get more crowded. I've had several adults ask me if there was a comparable program for adults. Maybe some day (I hope).

Fri 2017/Feb/24

  • Griping about parsers and shitty specifications

    The last time, I wrote about converting librsvg's tree of SVG element nodes from C to Rust — basically, the N-ary tree that makes up librsvg's view of an SVG file's structure. Over the past week I've been porting the code that actually implements specific shapes. I ran into the problem of needing to port the little parser that librsvg uses for SVG's list-of-points data type; this is what SVG uses for the points in a polygon or a polyline. In this post, I want to tell you my story of writing Rust parsers, and I also want to vent a bit.

    My history of parsing in Rust

    I've been using hand-written parsers in librsvg, basically learning how to write parsers in Rust as I go. In a rough timeline, this is what I've done:

    1. First was the big-ass parser for Bézier path data, an awkward recursive-descent monster that looks very much like what I would have written in C, and which is in dire need of refactoring (the tests are awesome, though, if you (hint, hint) want to lend a hand).

    2. Then was the simple parser for RsvgLength values, in which the most expedient thing, if not the prettiest, was to reimplement strtod() in Rust. You know how C and/or glib have a wealth of functions to write half-assed parsers quickly? Yeah, that's what I went for here. However, with this I learned that strtod()'s behavior of returning a pointer to the thing-after-the-number can be readily implemented with Rust's string slices.

    3. Then was the little parser for preserveAspectRatio values, which actually uses Rust idioms like using a split_whitespace() iterator and returning a custom error inside a Result.

    4. Lastly, I've implemented a parser for SVG's list-of-points data type. While shapes like rect and circle simply use RsvgLength values which can have units, like percentages or points:

      <rect x="5pt" y="6pt" width="7.5 pt" height="80%"/>
      
      <circle cx="25.0" cy="30.0" r="10.0"/>
      		

      In contrast, polygon and polyline elements let you specify lists of points using a simple but quirky grammar that does not use units; the numbers are relative to the object's current coordinate system:

      <polygon points="100,60 120,140 140,60 160,140 180,60 180,100 100,100"/>
      		

    Quirky grammars and shitty specs

    The first inconsistency in the above is that some object types let you specify their positions and sizes with actual units (points, centimeters, ems), while others don't — for these last ones, the coordinates you specify are relative to the object's current transformation. This is not a big deal; I suspect it is either an oversight in the spec, or they didn't want to think of a grammar that would accommodate units like that.

    The second inconsistency is in SVG's quirky grammars. These are equivalent point lists:

    points="100,-60 120,-140 140,60 160,140"
    
    points="100 -60 120 -140 140 60 160 140"
    
    points="100-60,120-140,140,60 160,140"
    
    points="100-60,120, -140,140,60 160,140"
    	    

    Within an (x, y) coordinate pair, the space is optional if the y coordinate is negative. Or you can separate the x and y with a space, with an optional comma.

    But between coordinate pairs, the separator is not optional. You must have any of whitespace or a comma, or both. Even if an x coordinate is negative, it must be separated from its preceding y coordinate.
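
    Hand-parsing those rules is not hard. Here is a minimal hand-written sketch, not librsvg's actual code, and deliberately lenient: it does not reject a missing separator between pairs, and it omits exponents for brevity.

    // A "comma-wsp" separator is optional whitespace, an optional comma,
    // and more optional whitespace.
    fn skip_comma_wsp (s: &str) -> &str {
        let s = s.trim_left ();
        let s = if s.starts_with (',') { &s[1..] } else { s };
        s.trim_left ()
    }

    // Scan one number (sign, digits, dot) and return it along with the
    // unconsumed rest of the string.
    fn scan_number (s: &str) -> Option<(f64, &str)> {
        let bytes = s.as_bytes ();
        let mut end = 0;
        if end < bytes.len () && (bytes[end] == b'+' || bytes[end] == b'-') {
            end += 1;
        }
        while end < bytes.len () && (bytes[end].is_ascii_digit () || bytes[end] == b'.') {
            end += 1;
        }
        s[..end].parse::<f64> ().ok ().map (|num| (num, &s[end..]))
    }

    // "100,-60 120-140" parses to [(100.0, -60.0), (120.0, -140.0)].  The '-'
    // in "120-140" both separates and signs the y coordinate, which is why no
    // explicit separator is needed inside that pair.
    fn parse_points (mut s: &str) -> Result<Vec<(f64, f64)>, ()> {
        let mut points = Vec::new ();
        s = skip_comma_wsp (s);

        while !s.is_empty () {
            let (x, rest) = scan_number (s).ok_or (())?;
            let (y, rest) = scan_number (skip_comma_wsp (rest)).ok_or (())?;
            points.push ((x, y));
            s = skip_comma_wsp (rest);
        }

        Ok (points)
    }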

    But this is not true for path specifications! The following are paths that start with a Move_to command and follow with three Line_to commands, and they are equivalent:

    M 1 2 L -3 -4 5 -6 7 -8
    
    M1,2 L-3 -4, 5,-6,7 -8
    
    M1,2L-3-4, 5-6,7-8
    
    M1 2-3,-4,5-6 7,-8
    
    M1,2-3-4 5-6 7-8
    	    

    Inside (x, y) pairs, you can have whitespace with an optional comma, or nothing if the y is negative. Between coordinate pairs you can have whitespace with an optional comma, or nothing at all! As you wish! And also, since all subpaths start with a single move_to anyway, you can compress a "M point L point point" to just "M point point point" — the line_to is implied.

    I think this is a misguided attempt by the SVG spec to allow compressibility of the path data. But they might have gotten more interesting gains, oh, I don't know, by not using XML and just nesting things with parentheses.


    Also, while the grammar for the point lists in polygon and polyline clearly implies that you must have an even number of floats in your list (for the x/y coordinate pairs), the description of the polygon element says that if you have an odd number of coordinates, the element is "in error" and should be treated the same as an erroneous path description. But the implementation notes for paths say that you should "render a path element up to (but not including) the path command containing the first error in the path data specification", i.e. just ignore the orphan coordinate and somehow indicate an error.

    So which one is it? Should we a) count the floats and ignore the odd one out? Or b) should we follow the grammar and ditch the whole string if it isn't matched by the grammar? Or c) put special handling in the grammar for the last odd coordinate?

    I mean, it's no big deal to actually handle this in the code, but it's annoying to have a spec that can't make up its mind.

    The expediency of half-assed parsers

    How would you parse a string that has an array of floating-point numbers separated by whitespace? In Python you could split() the string and then apply float() to each item, and catch an exception if the value does not parse to float. Or something like that.
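
    For comparison, the equivalent in Rust is pleasantly short, and it propagates the first parse error instead of relying on an exception handler (SVG's optional commas are where the extra work starts):

    use std::num::ParseFloatError;

    // Whitespace-separated floats only; this does not handle SVG's commas.
    fn parse_number_list (s: &str) -> Result<Vec<f64>, ParseFloatError> {
        s.split_whitespace ()
         .map (|tok| tok.parse::<f64> ())
         .collect ()                       // stops at the first token that fails to parse
    }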

    The C code in librsvg has a rather horrible rsvg_css_parse_number_list() function, complete with a "/* TODO: some error checking */" comment, that is implemented in terms of an equally horrible rsvg_css_parse_list() function. This last one is the one that handles the "optional whitespace and comma" shenanigans... with strtok_r(). Between both functions there are like a million temporary copies of the whole string and the little substrings. Or something like that.

    Neither version is fully conformant to the quirks in the SVG spec. I'm replacing all that shit with "proper" parsers written in Rust. And what could be more proper than to actually use a parser-generating library instead of writing them from scratch?

    The score is Nom: 2 - Federico: 1

    This is the second time I have tried to use nom, a library for parser combinators. I like the idea of parser combinators instead of writing a grammar and then generating a parser out of that: supposedly your specification of a parser combinator closely resembles your grammar, but the whole thing is written and embedded in the same language as the rest of your code.

    There is a good number of parsers already written with nom, so clearly it works.

    But nom has been... hard. A few months ago I tried rewriting the path data parser for librsvg, from C to Rust with nom, and it was a total failure. I couldn't even parse a floating-point number out of a string. I ditched it after a week of frustration, and wrote my own parser by hand.

    Last week I tried nom again. Sebastian Dröge was very kind to jumpstart me with some nom-based code to parse floating-point numbers. I then discovered that the latest nom actually has that out of the box, but it is buggy, and after some head-scratching I was able to propose a fix. Apparently my fix won't work if nom is being used for a streaming parser, but I haven't gotten around to learning those yet.

    I've found nom code to be very hard to write. Nom works by composing macros which then call recognizer functions and other macros. To be honest, I find the compiler errors for that kind of macro quite intractable. And I'm left with not knowing if the problem is in my use of the macros, or in the macro definitions themselves.

    After much, much trial-and-error, and with the vital help of Sebastian Köln from the #nom IRC channel, I have a nom function that can match SVG's comma-whitespace construct and two others that can match coordinate pairs and lists of points for the polygon and polyline elements. This last one is not even complete: it still doesn't handle a list of points that has whitespace around the list itself (SVG allows that, as in "   1 2, 3 4   ").

    While I can start using those little nom-based parsers to actually build implementations of librsvg's objects, I don't want to battle nom anymore. The documentation is very sketchy. The error messages from macros don't make sense to me. I really like the way nom wants to handle things, with an IResult type that holds the parsing state, but it's just too hard for me to even get things to compile with it.

    So what are you gonna use?

    I don't know yet. There are other parser libraries for Rust; pom, lalrpop, and pest among them. I hope they are easier to try out than nom. If not, "when in doubt, use brute force" may apply well here — hand-written parsers in Rust are as cumbersome as in any other language, but at least they are memory-safe, and writing tests for them is a joy in Rust.

February 22, 2017

A logo for cri-o

Dan Walsh recently asked me if I could come up with a logo for a project he is involved with – cri-o.

The “cri” of cri-o stands for Container Runtime Interface. The CRI is a different project – the CRI is an API between Kubernetes (container orchestration) and various container runtimes. cri-o is a runtime – like rkt or Docker – that can run containers that are compliant with OCI (Open Containers Initiative) specification. (Some more info on this is here.)

Dan and Antonio suggested a couple of ideas at the outset:

  • Since the project means to connect to Kubernetes via the CRI, it might be neat to have some kind of nod to Kubernetes. Kubernetes’ logo is a nautical one (the wheel of a ship, with 7 spokes.)
  • If you say cri-o out loud, it kind of sounds like cryo, e.g., icy-cool like Mr. Freeze from Batman!
  • If we want to go for a mascot, a mammoth might be a neat one (from an icy time.)

So I had two initial ideas, riffing off of those:

  1. I tried to think of something nautical and frozen that might relate to Kubernetes in a reasonable way given what cri-o actually does. I kept coming back to icebergs, but they don’t relate to ships’ steering in the same way, and worse, I think they could have a bad connotation whether it’s around stranding polar bears, melting and drowning us all, or the Titanic.
  2. Better idea – not nautical, yet it related to the Kubernetes logo in a way. I was thinking a snowflake might be an interesting representation – it could have 7 spokes like the Kubernetes wheel. It relates a bit in that snowflakes are a composition of a lot of little ice crystals (containers), and kubernetes would place them on the runtime (cri-o) in a formation that made the most sense, forming something beautiful 🙂 (the snowflake.)

I abandoned the iceberg idea and went through a lot of iterations of different snowflake shapes – there are so many ways to make a snowflake! I used the cloning feature in Inkscape to set up the first spoke, then cloned it to the other 6 spokes. I was able to tweak the shapes in the first spoke and have it affect all spokes simultaneously. (It’s a neat trick I should video blog one of these days.)

This is what I came up with:

3 versions of the crio logo with different color treatments - one on a white background, one on a flat blue background, one on a blue gradient background. on the left is a 7-spoke snowflake constructed from thin lines and surrounded by a 7-sided polygon, on the right is the logotype 'cri-o'

I ended up on a pretty simple snowflake – I think it needs to be readable at small sizes, and while you can come up with some beautiful snowflake compositions in Inkscape, it’s easy to make snowflakes that are too elaborate and detailed to work well at a small size. The challenge was clarity at a small size as well as readability as a snowflake. The narrow-line drawing style seems to be pretty popular these days too.

The snowflake shape is encased in a 7-sided polygon (similar to the Kubernetes logo) – my thinking being the shape and narrowness of the line kind of make it look like the snowflake is encased in ice (along the cryo initial idea.)

The dark blue color is a nod to the nautical theme; the bright blue highlight color is a nod to the cryo idea.

Completely symbolic, and maybe not in a clear / rational way, but I colored a little piece of each snowflake spoke using a blue highlight color, trying to make it look like those are individual pieces of the snowflake structure (e.g. the crystals == containers idea) getting deployed to create the larger snowflake.

Anyway! That is an idea for the cri-o logo. What do you think? Does it work for you? Do you have other, better ideas?

February 21, 2017

Help Fedora Hubs by taking this survey

Here’s a quick and easy way to help Fedora Hubs!

Our Outreachy intern, Suzanne Hillman, has put together a survey about Fedora contributors’ usage of social media to help us prioritize potential future integration with various social media platforms with Fedora Hubs. If you’d like your social media hangouts of choice to be considered for integration, please take the survey!

Take the survey now!

February 19, 2017

Stellarium 0.12.8

Stellarium 0.12.8 has been released today!

The series 0.12 is an LTS series for owners of old computers (old, with weak graphics cards), and this is a bugfix release:
- Added textures for deep-sky objects (port from the series 1.x/0.15)
- Fixed sizes of a few DSO textures (LP: #1641773)
- Fixed problem with defaultStarsConfig data and loading new star catalogs (LP: #1641803)

February 18, 2017

Highlight and remove extraneous whitespace in emacs

I recently got annoyed with all the trailing whitespace I saw in files edited by Windows and Mac users, and in code snippets pasted from sites like StackOverflow. I already had my emacs set up to indent with only spaces:

(setq-default indent-tabs-mode nil)
(setq tabify nil)

and I knew about M-x delete-trailing-whitespace ... but after seeing someone else who had an editor set up to show trailing spaces, and tabs that ought to be spaces, I wanted that too.

To show trailing spaces is easy, but it took me some digging to find a way to control the color emacs used:

;; Highlight trailing whitespace.
(setq-default show-trailing-whitespace t)
(set-face-background 'trailing-whitespace "yellow")

I also wanted to show tabs, since code indented with a mixture of tabs and spaces, especially if it's Python, can cause problems. That was a little harder, but I eventually found it on the EmacsWiki: Show whitespace:

;; Also show tabs.
(defface extra-whitespace-face
  '((t (:background "pale green")))
  "Color for tabs and such.")

(defvar bad-whitespace
  '(("\t" . 'extra-whitespace-face)))

;; The extra keywords have to be registered with font-lock before they
;; take effect; for example, for all programming modes:
(add-hook 'prog-mode-hook
          (lambda () (font-lock-add-keywords nil bad-whitespace)))

While I was figuring this out, I got some useful advice related to emacs faces on the #emacs IRC channel: if you want to know why something is displayed in a particular color, put the cursor on it and type C-u C-x = (the command what-cursor-position with a prefix argument), which displays lots of information about whatever's under the cursor, including its current face.

Once I had my colors set up, I found that a surprising number of files I'd edited with vim had trailing whitespace. I would have expected vim to be better behaved than that! But it turns out that to eliminate trailing whitespace, you have to program it yourself. For instance, here are some recipes to Remove unwanted spaces automatically with vim.

February 17, 2017

Fri 2017/Feb/17

  • How librsvg exports reference-counted objects from Rust to C

    Librsvg maintains a tree of RsvgNode objects; each of these corresponds to an SVG element. An RsvgNode is a node in an n-ary tree; for example, a node for an SVG "group" can have any number of children that represent various shapes. The toplevel element is the root of the tree, and it is the "svg" XML element at the beginning of an SVG file.

    Last December I started to sketch out the Rust code to replace the C implementation of RsvgNode. Today I was able to push a working version to the librsvg repository. This is a major milestone for myself, and this post is a description of that journey.

    Nodes in librsvg in C

    Librsvg used to have a very simple scheme for memory management and its representation of SVG objects. There was a basic RsvgNode structure:

    typedef enum {
        RSVG_NODE_TYPE_INVALID,
        RSVG_NODE_TYPE_CHARS,
        RSVG_NODE_TYPE_CIRCLE,
        RSVG_NODE_TYPE_CLIP_PATH,
        /* ... a bunch of other node types */
    } RsvgNodeType;
    	      
    typedef struct {
        RsvgState    *state;
        RsvgNode     *parent;
        GPtrArray    *children;
        RsvgNodeType  type;
    
        void (*free)     (RsvgNode *self);
        void (*draw)     (RsvgNode *self, RsvgDrawingCtx *ctx, int dominate);
        void (*set_atts) (RsvgNode *self, RsvgHandle *handle, RsvgPropertyBag *pbag);
    } RsvgNode;
    	    

    This is a no-frills base struct for SVG objects; it just has the node's parent, its children, its type, the CSS state for the node, and a virtual function table with just three methods. In typical C fashion for derived objects, each concrete object type is declared similar to the following one:

    typedef struct {
        RsvgNode super;
        RsvgLength cx, cy, r;
    } RsvgNodeCircle;
    	    

    The user-facing object in librsvg is an RsvgHandle: that is what you get out of the API when you load an SVG file. Internally, the RsvgHandle has a tree of RsvgNode objects — actually, a tree of concrete implementations like the RsvgNodeCircle above or others like RsvgNodeGroup (for groups of objects) or RsvgNodePath (for Bézier paths).

    Also, the RsvgHandle has an all_nodes array, which is a big list of all the RsvgNode objects that it is handling, regardless of their position in the tree. It also has a hash table that maps string IDs to nodes, for when the XML elements in the SVG have an "id" attribute to name them. At various times, the RsvgHandle or the drawing-time machinery may have extra references to nodes within the tree.

    Memory management is simple. Nodes get allocated at loading time, and they never get freed or moved around until the RsvgHandle is destroyed. To free the nodes, the RsvgHandle code just goes through its all_nodes array and calls the node->free() method on each of them. Any references to the nodes that remain in other places will dangle, but since everything is being freed anyway, things are fine. Before the RsvgHandle is freed, the code can copy pointers around with impunity, as it knows that the all_nodes array basically stores the "master" pointers that will need to be freed in the end.

    But Rust doesn't work that way

    Not so, indeed! C lets you copy pointers around all you wish; it lets you modify all the data at any time; and forces you to do all the memory-management bookkeeping yourself. Rust has simple and strict rules for data access, with profound implications. You may want to read up on ownership (where variable bindings own the value they refer to, there is one and only one binding to a value at any time, and values are deleted when their variable bindings go out of scope), and on references and borrowing (references can't outlive their parent scope; and you can either have one or more immutable references to a resource, or exactly one mutable reference, but not both at the same time). Together, these rules avoid dangling pointers, data races, and other common problems from C.
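
    Concretely, this is the kind of code those rules reject; nothing librsvg-specific, just a tiny self-contained example that fails to compile exactly where the comment says:

    fn main () {
        let mut v = vec![1, 2, 3];

        let first = &v[0];       // immutable borrow of v starts here
        v.push (4);              // ERROR: cannot borrow `v` as mutable while `first` is alive
        println! ("{}", first);  // the immutable borrow is still in use here
    }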

    So while the C version of librsvg had a sea of carefully-managed shared pointers, the Rust version needs something different.

    And it all started with how to represent the tree of nodes.

    How to represent a tree

    Let's narrow down our view of RsvgNode from C above:

    typedef struct {
        ...
        RsvgNode  *parent;
        GPtrArray *children;
        ...
    } RsvgNode;
    	    

    A node has pointers to all its children, and each child has a pointer back to the parent node. This creates a bunch of circular references! We would need a real garbage collector to deal with this, or an ad-hoc manual memory management scheme like librsvg's and its all_nodes array.

    Rust is not garbage-collected and it doesn't let you have shared pointers easily or without unsafe code. Instead, we'll use reference counting. To avoid circular references, which a reference-counting scheme cannot handle, we use strong references from parents to children, and weak references from the children to point back to parent nodes.

    In Rust, you can add reference-counting to your type Foo by using Rc<Foo> (if you need atomic reference-counting for multi-threading, you can use an Arc<Foo>). An Rc represents a strong reference count; conversely, a Weak<Foo> is a weak reference. So, the Rust Node looks like this:

    pub struct Node {
        ...
        parent:    Option<Weak<Node>>,
        children:  RefCell<Vec<Rc<Node>>>,
        ...
    }
    	    

    Let's unpack that bit by bit.

    "parent: Option<Weak<Node>>". The Weak<Node> is a weak reference to the parent Node, since a strong reference would create a circular refcount, which is wrong. Also, not all nodes have a parent (i.e. the root node doesn't have a parent), so put the Weak reference inside an Option. In C you would put a NULL pointer in the parent field; Rust doesn't have null references, and instead represents lack-of-something via an Option set to None.

    "children: RefCell<Vec<Rc<Node>>>". The Vec<Rc<Node>>> is an array (vector) of strong references to child nodes. Since we want to be able to add children to that array while the rest of the Node structure remains immutable, we wrap the array in a RefCell. This is an object that can hand out a mutable reference to the vector, but only if there is no other mutable reference at the same time (so two places of the code don't have different views of what the vector contains). You may want to read up on interior mutability.

    Strong Rc references and Weak refs behave as expected. If you have an Rc<Foo>, you can ask it to downgrade() to a Weak reference. And if you have a Weak<Foo>, you can ask it to upgrade() to a strong Rc, but since this may fail if the Foo has already been freed, that upgrade() returns an Option<Rc<Foo>> — if it is None, then the Foo was freed and you don't get a strong Rc; if it is Some(x), then x is an Rc<Foo>, which is your new strong reference.
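
    Here is that upgrade/downgrade dance in miniature, with a plain Rc<i32> so the example stands alone:

    use std::rc::{Rc, Weak};

    fn main () {
        let strong: Rc<i32> = Rc::new (42);
        let weak: Weak<i32> = Rc::downgrade (&strong);  // does not increase the strong count

        assert! (weak.upgrade ().is_some ());   // the value is still alive

        drop (strong);                          // the last strong reference goes away

        assert! (weak.upgrade ().is_none ());   // now upgrade() fails; the value is gone
    }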

    Handing out Rust reference-counted objects to C

    In the post about Rust constructors exposed to C, we talked about how a Box is Rust's primitive to put objects in the heap. You can then ask the Box for the pointer to its heap object, and hand out that pointer to C.

    If we want to hand out an Rc to the C code, we therefore need to put our Rc in the heap by boxing it. And going back, we can unbox an Rc and let it fall out of scope in order to free the memory from that box and decrease the reference count on the underlying object.

    First we will define a type alias, so we can write RsvgNode instead of Rc<Node> and make function prototypes closer to the ones in the C code:

    pub type RsvgNode = Rc<Node>;
    	    

    Then, a convenience function to box a refcounted Node and extract a pointer to the Rc, which is now in the heap:

    pub fn box_node (node: RsvgNode) -> *mut RsvgNode {
        Box::into_raw (Box::new (node))
    }
    	    

    Now we can use that function to implement ref():

    #[no_mangle]
    pub extern fn rsvg_node_ref (raw_node: *mut RsvgNode) -> *mut RsvgNode {
        assert! (!raw_node.is_null ());
        let node: &RsvgNode = unsafe { & *raw_node };
    
        box_node (node.clone ())
    }
    	    

    Here, the node.clone () is what increases the reference count. Since that gives us a new Rc, we want to box it again and hand out a new pointer to the C code.

    You may want to read that twice: when we increment the refcount, the C code gets a new pointer! This is like creating a hardlink to a Unix file — it has two different names that point to the same inode. Similarly, our boxed, cloned Rc will have a different heap address than the original one, but both will refer to the same Node in the end.

    This is the implementation for unref():

    #[no_mangle]
    pub extern fn rsvg_node_unref (raw_node: *mut RsvgNode) -> *mut RsvgNode {
        if !raw_node.is_null () {
            let _ = unsafe { Box::from_raw (raw_node) };
        }
    
        ptr::null_mut () // so the caller can do "node = rsvg_node_unref (node);" and lose access to the node
    }
    	    

    This is very similar to the destructor from a few blog posts ago. Since the Box owns the Rc it contains, letting the Box go out of scope frees it, which in turn decreases the refcount in the Rc. However, note that this rsvg_node_unref(), intended to be called from C, always returns a NULL pointer. Together, both functions are to be used like this:

    RsvgNode *node = ...; /* acquire a node from Rust */
    
    RsvgNode *extra_ref = rsvg_node_ref (node);
    
    /* ... operate on the extra ref; now discard it ... */
    
    extra_ref = rsvg_node_unref (extra_ref);
    
    /* Here extra_ref == NULL and therefore you can't use it anymore! */
    	    

    This is a bit different from g_object_ref(), which returns the same pointer value as what you feed it. Also, the pointer that you would pass to g_object_unref() remains usable if you didn't take away the last reference... although of course, using it directly after unreffing it is perilous as hell and probably a bug.

    In these functions that you call from C but are implemented in Rust, ref() gives you a different pointer than what you feed it, and unref() gives you back NULL, so you can't use that pointer anymore.

    To ensure that I actually used the values as intended and didn't fuck up the remaining C code, I marked the function prototypes with the G_GNUC_WARN_UNUSED_RESULT attribute. This way gcc will complain if I just call rsvg_node_ref() or rsvg_node_unref() without actually using the return value:

    RsvgNode *rsvg_node_ref (RsvgNode *node) G_GNUC_WARN_UNUSED_RESULT;
    
    RsvgNode *rsvg_node_unref (RsvgNode *node) G_GNUC_WARN_UNUSED_RESULT;
    	    

    And this actually saved my butt in three places in the code when I was converting it to reference counting. Twice when I forgot to just use the return values as intended; once when the old code was such that trivially adding refcounting made it use a pointer after unreffing it. Make the compiler watch your back, kids!

    Testing

    One of the things that makes me giddy with joy is how easy it is to write unit tests in Rust. I can write a test for the refcounting machinery above directly in my node.rs file, without needing to use C.

    This is the test for ref and unref:

    #[test]
    fn node_refs_and_unrefs () {
        let node = Rc::new (Node::new (...));
    
        let mut ref1 = box_node (node);                            // "hand out a pointer to C"
    
        let new_node: &mut RsvgNode = unsafe { &mut *ref1 };       // "bring back a pointer from C"
        let weak = Rc::downgrade (new_node);                       // take a weak ref so we can know when the node is freed
    
        let mut ref2 = unsafe { rsvg_node_ref (new_node) };        // first extra reference
        assert! (weak.upgrade ().is_some ());                      // "you still there?"
    
        ref2 = unsafe { rsvg_node_unref (ref2) };                  // drop the extra reference
        assert! (weak.upgrade ().is_some ());                      // "you still have life left in you, right?"
    
        ref1 = unsafe { rsvg_node_unref (ref1) };                  // drop the last reference
    assert! (weak.upgrade ().is_none ());                      // "you are dead, aren't you?"
    }
    	    

    And this is the test for two refcounts indeed pointing to the same Node:

    #[test]
    fn reffed_node_is_same_as_original_node () {
        let node = Rc::new (Node::new (...));
    
        let mut ref1 = box_node (node);                         // "hand out a pointer to C"
    
        let mut ref2 = unsafe { rsvg_node_ref (ref1) };         // "C takes an extra reference and gets a new pointer"
    
        unsafe { assert! (rsvg_node_is_same (ref1, ref2)); }    // but they refer to the same thing, correct?
    
        ref1 = rsvg_node_unref (ref1);
        ref2 = rsvg_node_unref (ref2);
    }
    	    

    Hold on! Where did that rsvg_node_is_same() come from? Since calling rsvg_node_ref() now gives a different pointer to the original ref, we can no longer just do "some_node == tree_root" to check for equality and implement a special case. We need to do "rsvg_node_is_same (some_node, tree_root)" instead. I'll just point you to the source for this function.
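
    In case you are curious, here is a minimal sketch of what such a function could look like; this is illustrative, not the actual librsvg source. The trick is to compare the addresses of the Nodes inside the boxed Rcs, not the addresses of the boxes themselves, since those always differ:

    #[no_mangle]
    pub extern fn rsvg_node_is_same (raw_node1: *const RsvgNode,
                                     raw_node2: *const RsvgNode) -> bool {
        if raw_node1.is_null () || raw_node2.is_null () {
            return raw_node1 == raw_node2;  // only "the same" if both are NULL
        }

        let node1: &RsvgNode = unsafe { & *raw_node1 };
        let node2: &RsvgNode = unsafe { & *raw_node2 };

        // Compare the heap addresses of the pointed-to Nodes.
        let raw1: *const Node = &**node1;
        let raw2: *const Node = &**node2;

        raw1 == raw2
    }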

Today, I:

I’ve gazed enviously at many a productivity scheme. Getting Things Done™, do one thing at a time, use a swimming desk, only use hand-hewn pencils on organic hemp paper, and so on.

I assume most of these techniques and schemes are like diets or exercise routines. There are no silver bullets, but there may be an occasional nugget of truth among the gimmicks and marketing.

Inspired by a post about daily work journals, I have found one tiny little trick that has actually worked for me. It hasn’t transformed my life or quadrupled my productivity. It has made me a touch more aware of how I spend my time.

Every weekday at 4:45pm, I get a gentle reminder from Slack, the chat system we use at work. It looks like this:

The #retrospectives text is a link to a channel in Slack that is available to others to read, but where they won’t be bothered by my updates (unless they opt-in). I click the link and write a quick bullet-list summary of what I have done that day, starting with “Today, I:”. It usually looks something like this:

Screenshot of a daily work log

My first such post was on August 16, 2016. To my surprise, I have stuck with it. As of mid-February, about seven months later, I have posted 134 entries – one for every day I have worked.

What’s the point of writing about what you’ve already done each day? It serves several purposes for me. Most importantly, the ritual reminds me to pause and reflect (very briefly) on what I accomplished that day. This simple act makes me a bit more mindful of how I spend my time and energy. The log also proves useful for any kind of retroactive reporting (When did I start working on project X? How many days in October did I spend on client Y?).

It may also be helpful in 10,000 years, when aliens are trying to reconstruct what daily life was like for a 2000-era web designer.

February 13, 2017

Emacs: Initializing code files with a template

Part of being a programmer is having an urge to automate repetitive tasks.

Every new HTML file I create should include some boilerplate HTML, like <html><head></head><body></body></html>. Every new Python file I create should start with #!/usr/bin/env python, and most of them should end with an if __name__ == "__main__": clause. I get tired of typing all that, especially the dunderscores and slash-greater-thans.

Long ago, I wrote an emacs function called newhtml to insert the boilerplate code:

(defun newhtml ()
  "Insert a template for an empty HTML page"
  (interactive)
  (insert "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\">\n"
          "<html>\n"
          "<head>\n"
          "<title></title>\n"
          "</head>\n\n"
          "<body>\n\n"
          "<h1></h1>\n\n"
          "<p>\n\n"
          "</body>\n"
          "</html>\n")
  (forward-line -11)
  (forward-char 7)
  )

The motion commands at the end move the cursor back to point in between the <title> and </title>, so I'm ready to type the page title. (I should probably have it prompt me, so it can insert the same string in title and h1, which is almost always what I want.)

That has worked for quite a while. But when I decided it was time to write the same function for python:

(defun newpython ()
  "Insert a template for an empty Python script"
  (interactive)
  (insert "#!/usr/bin/env python\n"
          "\n"
          "\n"
          "\n"
          "if __name__ == '__main__':\n"
          "\n"
          )
  (forward-line -4)
  )
... I realized that I wanted to be even more lazy than that. Emacs knows what sort of file it's editing -- it switches to html-mode or python-mode as appropriate. Why not have it insert the template automatically?

My first thought was to have emacs run the function upon loading a file. There's a function with-eval-after-load which supposedly can act based on file suffix, so something like (with-eval-after-load ".py" (newpython)) is documented to work. But I found that it was never called, and couldn't find an example that actually worked.

But then I realized that I have mode hooks for all the programming modes anyway, to set up things like indentation preferences. Inserting some text at the end of the mode hook seems perfectly simple:

(add-hook 'python-mode-hook
          (lambda ()
            (electric-indent-local-mode -1)
            (font-lock-add-keywords nil bad-whitespace)
            (if (= (buffer-size) 0)
                (newpython))
            (message "python hook")
            ))

The (= (buffer-size) 0) test ensures this only happens if I open a new file. Obviously I don't want to be auto-inserting code inside existing programs!

HTML mode was a little more complicated. I edit some files, like blog posts, that use HTML formatting, and hence need html-mode, but they aren't standalone HTML files that need the usual HTML template inserted. For blog posts, I use a different file extension, so I can use the elisp string-suffix-p to test for that:

  ;; string-suffix-p is like Python endswith
  (if (and (= (buffer-size) 0)
           (string-suffix-p ".html" (buffer-file-name)))
      (newhtml) )

I may eventually find other files that don't need the template; if I need to, it's easy to add other tests, like the directory where the new file will live.

A nice timesaver: open a new file and have a template automatically inserted.

February 10, 2017

Accelerated compositing in WebKitGTK+ 2.14.4

The WebKitGTK+ 2.14 release was very exciting for us: it finally introduced the threaded compositor to drastically improve accelerated compositing performance. However, the threaded compositor required accelerated compositing to be always enabled, even for non-accelerated content. Unfortunately, this caused different kinds of problems for several people, and proved that we are not ready to render everything with OpenGL yet. The most relevant problems reported were:

  • Memory usage increase: OpenGL contexts use a lot of memory, and since the compositor runs in the web process, we have at least one OpenGL context in every web process. The threaded compositor uses the coordinated graphics model, which also requires more memory than the simple mode we previously used. People who use a lot of tabs in epiphany quickly noticed that the amount of memory required was a lot higher.
  • Startup and resize slowness: The threaded compositor makes everything smooth and performs quite well, except at startup or when the view is resized. At startup we need to create the OpenGL context, which is quite slow by itself, but we also need to create the compositing thread, so things are expected to be slower. Resizing the viewport is the only threaded compositor task that needs to be done synchronously, to ensure that everything is in sync: the web view in the UI process, the OpenGL viewport and the backing store surface. This means we need to wait until the threaded compositor has updated to the new size.
  • Rendering issues: some people reported rendering artifacts or even nothing rendered at all. In most cases these were not issues in WebKit itself, but in the graphics driver or library. It’s quite difficult for a general-purpose web engine to support and deal with all possible GPUs, drivers and libraries. Chromium has a huge list of hardware exceptions to disable some OpenGL extensions or even hardware acceleration entirely.

Because of these issues people started to use different workarounds. Some people, and even applications like evolution, started to use the WEBKIT_DISABLE_COMPOSITING_MODE environment variable, which was never meant for users, but for developers. Other people just started to build their own WebKitGTK+ with the threaded compositor disabled. We didn’t remove the build option because we anticipated that some people using old hardware might have problems. However, it’s a code path that is not tested at all and will certainly be removed in 2.18.

All these issues are not really specific to the threaded compositor, but to the fact that it forced accelerated compositing mode to be always enabled, using OpenGL unconditionally. It looked like a good idea: entering/leaving accelerated compositing mode was a source of bugs in the past, and all other WebKit ports force accelerated compositing mode too. Other ports use UI-side compositing, though, or target very specific hardware, so the memory problems and the driver issues are not a problem for them. The requirement to force accelerated compositing mode came from the switch to coordinated graphics: as I said, other ports using coordinated graphics have accelerated compositing mode always enabled, so they didn’t care about the case of it being disabled.

There are a lot of long-term things we can do to improve all these issues, like moving the compositor to the UI (or a dedicated GPU) process to have a single GL context, implementing tab suspension, etc., but we really wanted to fix, or at least improve, the situation for 2.14 users. Switching back to using accelerated compositing mode on demand was something we could do in the stable branch, and it would improve things to at least be comparable to what we had before 2.14, but with the threaded compositor. Making it happen was a matter of fixing a lot of bugs, and the result is this 2.14.4 release. Of course, this will be the default in 2.16 too, where we have also added API to set a hardware acceleration policy.

We recommend that all 2.14 users upgrade to 2.14.4 and stop using the WEBKIT_DISABLE_COMPOSITING_MODE environment variable or building with the threaded compositor disabled. The new API in 2.16 will allow you to set a policy for every web view, so if you still need to disable or force hardware acceleration, please use the API instead of WEBKIT_DISABLE_COMPOSITING_MODE and WEBKIT_FORCE_COMPOSITING_MODE.

We really hope this new release and the upcoming 2.16 will work much better for everybody.

From the Community Vol. 2


Welcome to the second installment of From the Community, a (hopefully) quarterly-ish blog post to highlight a few of the things our community members have been doing!

Improving grain simulation

@arctic has posted some research about how to better simulate grain in our digital images and the ensuing conversation is both fascinating and way above my head! This discussion is thus far raw processor independent and more input and code is welcome!

Examples of grain from raw processing programs

A tutorial on RGB color mixing

We’ve somewhat recently welcomed the painters into the fold on the pixls’ forum, and @Elle rewarded us all with a tutorial on RGB color mixing. She delves into subjects such as mixing color pigments like a traditional painter and how to handle that in the digital darkroom. You can read the whole article here.

Working to support Pentax Pixel Shift files in RawTherapee

There has been a lot of ongoing work to bring support for Pentax Pixel Shift files to RawTherapee; the thread has now reached 234 posts, and it is inspiring to see the community and developers coming together to bring support for an interesting technology. The feature set has been evolving pretty rapidly, and it will be exciting when it makes it into a stable release.

An example pixel shift file

Midi controller support for Darktable

Some preliminary work has begun to bring generic midi controller support to darktable. The funding for the midi controller to spur the development of this feature is a direct result of forum members giving directly to further community causes. Once the darktable developers are finished with the midi controller, it’ll be offered to other developers so they can use it to help implement support!

A Korg midi controller

Methods for dealing with clipped highlights

@Morgan_Hardwood has written a very nice post detailing several methods for dealing with clipped highlights in RawTherapee. These include tone-mapping, highlights and shadows, and using the CIECAM02 mode.

Working with clipped highlights

February 08, 2017

Helping new users get on IRC, Part 2

Fedora Hubs

Where our story began…

You may first want to check out the first part of this blog post, Helping new users get on IRC. We’ll wait for you here. 🙂

A simpler way to choose a nick

(Relevant ticket: https://pagure.io/fedora-hubs/issue/283)

So Sayan kindly reviewed the ticket with the irc registration mockups in it and had some points of feedback about the nick selection process (original mockup shown below:)

Critical Feedback on Original Mockup

  • The layout of a grid of nicks to choose from invited the user to click on all of them, even if that wasn’t in their best interest. It drove their attention to a multiplicity of choices rather than focused them towards one they could use to move forward.
  • If the user clicked even on just one nick, they would have to wait for us to check if it was available. If they clicked on multiple, it could take a long time to get through the dialog. They might give up and not register. (We want them to join and chat, though!)
  • To make it clear which nick they wanted to register, we had the user click on a “Register” button next to every available nick. This meant, necessarily, that the button wasn’t in the lower right corner, the obvious place to look to continue. Users might be confused as to the correct next step.
  • Overall, the screen is more cluttered than it could be.

mockup showing 9 up nick suggestion display

We thought through a couple of alternatives that would meet the goals I had with the initial design, yet still address Sayan’s concerns listed above. Those goals are:

Mo’s goals for the mockup

  • Providing the user clues as to the standard format of the nicknames (an acceptable example can do this.)
  • Giving the user ideas in case they just can’t think of any nickname (generating suggestions based on heuristics can help.)
  • Making it very clear which nickname the user is going to continue with and register (in the first mockup, this was achieved through having the register button right next to each nick.)
  • Making it clear to the user that we needed to check if the nick was available after they came up with one. This is important because many websites do this as you type; we can’t, because our availability check is much more expensive (parsing /info in IRC!)

New solution

We decided to instead make the screen a simple form field for nick with a button to check availability, as well as a button the user could optionally click on to generate suggested nicks that would populate the field for them. Based on whether or not an available nick was populated in the field, the “Register” button in the lower right corner would be greyed out or active.

Initial view

Nickname field is blank.

mockup of screen with a single form field for nickname input

Nickname suggestion

If you click on the suggest button, it’ll fill the form field with a suggestion for you:

mockup of screen showing the suggest nickname button populating the nickname field

Checking nick availability

Click on the “Check availability” button, and it’ll fade out and a spinner will appear letting you know that we’re checking whether or not the nick is available (in the backend, querying Freenode nickserv or doing a /info on the nick.)

mockup showing nickname availability checking spinner

Nickname not available

If the nickname’s not available, we let you know. Edit the form field or click on the suggest button to try again and have the “Check availability” button appear.

mockup showing a not available message if a nickname fails the availability check

Nickname available

Hey, your nick’s not taken! The “Register” button in the lower right lights up and you’re ready to move forward if you want.

mockup showing the register button activating when the user's input nickname is in fact available

Taking a verification code instead

I learned a lesson I already knew – I should have known better but didn’t! 🙂 I assumed that when you provide your email address to freenode, the email they send back has a link to click on to verify your email. I knew I should go through the process myself to be sure what the email said, what it looked like, etc., but I didn’t, and I designed the original screen based on a faulty assumption. Here is the faulty assumption screen:

Original version of the email confirmation mockup which tells users to click a link in the email that doesn't exist.

I knew I should go through the process to verify some things about my understanding of it, though (hey, it’s been years and years since I registered my own nick and email with freenode NickServ.) I got around to it, and here’s what I got (with some details redacted for privacy, irc nick is $IRCNICK below:)

From "freenode" <noreply.support@freenode.net>
To "$IRCNICK" <email@example.com>
Date Mon, 06 Feb 2017 19:35:35 +0000
Subject freenode Account Registration

$IRCNICK,

In order to complete your account registration, you must type the following
command on IRC:

/msg NickServ VERIFY REGISTER $IRCNICK qbgcldnjztbn

Thank you for registering your account on the freenode IRC network!

Whoopsie! You’ll note the email has no link to click. See! Assumptions that have not been verified == bad! Burned Mo, burned!

So here’s what it looks like now. I added a field for the user to provide the verification code, as well as some hints to help them identify the code in the email. In the process, I also cut down the original text significantly, since there is a lot more that has to go on the screen now. I should have cut the text down even without this excuse (the more text, the less gets read):

new mockp for email verification showing a field for entering the verification code

 

I still need to write up the error cases here – what happens if the verification code gets rejected by NickServ or if it’s too long, has invalid characters, etc.

Handling edge cases

(Relevant ticket: https://pagure.io/fedora-hubs/issue/318)

Both in Twitter comments and IRC messages you sent me after I blogged the first design, I realized I needed to gather some data about the nickname registration process on Freenode. (Thank you!) I was super concerned about the fragility of the approach of slapping a UI on top of a process we don’t own or control. For example, the email verification email freenode sends that a user must act on within 24 hours to keep their nick – we can’t re-send that mail, so how do we handle this in a way that doesn’t break?

Even though I was a little intimidated (I forget that freenode isn’t like EFnet,) I popped into the #freenode admin channel and asked a bunch of questions to clear things up. The admins are super-helpful and nice, and they cleared everything up. I learned a few things:

  • A user is considered active or not based on the amount of time that has passed since they have been authenticated / identified with freenode NickServ.
  • After the user initially registers with NickServ, they are sent an email from “freenode Account Registration” <noreply.support@freenode.net> with a message that contains a verification code they need to send to NickServ to verify their email address.
  • If you don’t receive the email from freenode, you can drop the nick, take it again and try again with another email address.
  • While there is no formal freenode privacy policy for the email address collection, they confirmed they are only used for password recovery purposes and are not transmitted to third parties for any purpose.
  • If a nickname hasn’t been actively identified with NickServ for 10 weeks, it is considered “expired.” Identifying to NickServ with that nick and password will still work indefinitely until either the DB is purged (a regular maintenance task) or another user requests the nick and takes it over:
    • The DB purges don’t happen right away, so an expired nick won’t be removed on day 1 of week 11, but it’s vulnerable to purge from that point forward. Purges happen a couple times a year or so.
    • If another user wants to take over an expired nick that has not been purged, they can message an admin to have the nick freed for them to claim.

This was mostly good news, because being identified to NickServ means you’re active. Since we have an IRC bouncer (ircb) under the covers keeping users identified, the likelihood of their sitting around inactive for 10 weeks is far less. The possibility that they actually lose their nick is limited to an opportunist requesting it and taking it over or bad timing with a DB purge. This is a mercifully narrow case.

So here’s the edge cases we have to worry about from this:

Lost nickname

These cases result in the user needing to register a new nickname.

  • User hasn’t logged into Hubs for > 10 weeks, and circumstances (netsplit?) kept them from being identified. Their nick was taken by another user.
  • User didn’t verify email address, and their nick was taken by another user.

Need to re-register nickname

These cases result in the user needing to re-register their nickname.

  • User hasn’t logged into Hubs for > 10 weeks, circumstances kept them from being identified. Their nick was purged from the DB but is still available.
  • User didn’t verify email address, and their nick was purged from DB but is still available.

Handling lost nicks

If we can’t authenticate the user and their nickname, we’ll disable chat hubs-wide. IRC widgets on hubs will appear like so, to prompt the user to re-register:

IRC widget with re-register nag

If the user happens to visit the user settings panel, they’ll also see a prompt to re-register with the IRC feature disabled:

mockup of the user settings panel showing irc disabled with a nag to re-register nickname

Re-registration process

The registration process should appear the same as in the initial user registration flow, with a message at the top indicating which state the user is in (if their nick was db purged and is now available, let them know so they can re-register the same nick; if someone else grabbed it, let them know so they know to make a new one.) Here’s what this will look like:

nickname registration screen with a message showing the expired nickname is still available

 

The cases I forgot

What if the user already had a registered nickname before Hubs?  (This will be the vast majority of Hubs users when we launch!) I kind of thought about this, and assumed we’d slurp the user’s nickname in from FAS and prompt them for their password at some point, and forgot about it until Sayan mentioned it in our meeting this morning. There’s two cases here, actually:

  • User has a nickname registered with Nickserv already that they’ve provided to FAS. We need to provide them a way to enter in their password so we can authenticate them using Hubs.
  • User has a nickname registered with Nickserv already that is not in FAS. We need to let them provide their nickname/password so we can authenticate them.

I haven’t mocked this up yet. Next time! 🙂

Initial set of mockups for this.

Feedback Welcome!

So there you have it in today’s installment of Hubs’ IRC feature – a pretty major iteration on a design based on feedback, some research into how freenode handles registration, some mistakes (not verifying the email registration process first-hand, forgetting some cases), and additional mockups.

Keep the feedback coming – as you can see here, it’s super-helpful and gets applied directly to the design. 🙂

Made with Krita 2016: The Artbooks Have Arrived!

Made With Krita 2016 is now available! This morning the printer delivered 250 copies of the first book filled with art created in Krita by great artists from all around the world. We immediately set to work to send out all pre-orders, including the ones that were a kickstarter reward.

The books themselves are gorgeous. The artwork is great and varied, of course, but the printer did a good job on the colors, too — helped by the excellent way the open source desktop publishing application Scribus prepares PDFs for printing. The picture doesn’t do it justice, since it was made with an old phone…

Forty artists from all over the world, working in all kinds of styles and on all kinds of subjects show how Krita is used in the real world to create amazing and engaging art. The book also contains a biographical section with information about each individual artist. Get your rare first edition now, an essential addition to every self-respecting bookshelf! The book is professionally printed on 130 grams paper and softcover bound in signatures.

The cover illustration is by Odysseas Stamoglou. The inner artwork features Arrianne Criseyde Pascual, Baukje Jagersma, Beelzy, Chewsome, David Revoy, Enrico Guarnieri, Eric Lee, Filipe Ferreira, Justin Nichol, Kesbet Tree, Livio Fania, Liz de Souza, Matt Preece, Melissa Lipan, Michael Bowling, Mozart Couto, Naghree Greenskin, Neotheta, Nivailis, Paolo Puggioni, R.J. Quiralta, Radian 1, Raghukamath, Ramón Miranda, Reine, Sylvain Boussiron, William Thorup, Elésiane Huve, Amelia Hamrick, Danilo Junior, Ivan Aros, Jennifer Reuter, Karen Kaye Llamas, Lucas Ribeiro, Motion Arc Foundry, Odysseas Stamoglou, Sylvia Ritter, Timothée Giet, Tony Jennison, Tyson Tan, and Wayne Parker.

Made with Krita 2016

Made with Krita 2016

Made with Krita 2016 is 19,95€ excluding VAT in the European Union, excluding shipping. Shipping is 11.25€ outside the Netherlands and 3.65€ inside the Netherlands.


New fwupd release, and why you should buy a Dell

This morning I released the first new release of fwupd on the 0.8.x branch. This has a number of interesting fixes, but more importantly adds the following new features:

  • Adds support for Intel Thunderbolt devices
  • Adds support for some Logitech Unifying devices
  • Adds support for Synaptics MST cascaded hubs
  • Adds support for the Altus-Metrum ChaosKey device
  • Adds Dell-specific functionality to allow other plugins to turn on TBT/GPIO

Mario Limonciello from Dell has worked really hard on this release, and I can say with conviction: If you want to support a hardware company that cares about Linux — buy a Dell. They seem to be driving the importance of Linux support into their partners and suppliers. I wish other vendors would do the same.

February 07, 2017

Refugee Hope Box

I’m not sure that I’ve ever posted anything non-Fedora / Linux / tech in this blog since I started it over 10 years ago.

My daughter’s school is running a refugee hope box drive. The boxes are packed full of toiletries and other supplies for refugee children. We will drop our box off at school and it will get shipped to the Operation Refugee Child program, where its contents will be packed into a backpack and delivered to a child in the refugee camps. We decided to pack a box for a teenage girl. It includes everything from personal toiletries, to non-perishable snacks, to some fun things like gel pens, markers, a journal, and Lip Smackers.

We explained to our daughter that there are kids like her who got kicked out of their home by bad guys (“Like the Joker?” she asked. Yep, bad guys like him.) There’s one girl we are going to try to help – since she is far away from home and has almost nothing, we are going to help by sending some supplies for her. My daughter loved the idea and was really into it. We spent most of our Saturday this past weekend getting supplies out and about with the kids, until the kids kind of melted down (nap, shopping cart fatigue, etc.) and we had to head back home.

We were so close to being finished but needed a few more items to finish it up, so I set up an Amazon wishlist for the items remaining and posted it to Twitter. I figured other folks might want to go in on it with us and help, and I could always pick up anything else remaining later this week.

It seriously took 22 minutes to get all of the items left to order purchased. I’m totally floored by everyone’s generosity. Our community is awesome. Thank you.

If you would like to help, you can start your own box or buy an item off of the central organization’s Amazon wishlist.

February 06, 2017

Open Desktop Review System : One Year Review

This weekend we had the 2,000th review submitted to the ODRS review system. Every month we’re getting an additional ~300 reviews and about 500,000 requests for reviews from the system. The reviews that have been contributed are in 94 languages, and from 1387 different users.

Most reviews have come from Fedora (which installs GNOME Software as part of the default workstation) but other distros like Debian and Arch are catching up all the time. I’d still welcome KDE software center clients like Discover and Apper using the ODRS although we do have quite a lot of KDE software reviews submitted using GNOME Software.

Out of ~2000 reviews just 23 have been marked as inappropriate, of which I agreed with 7 (inappropriate is supposed to mean swearing or abuse, not just being unhelpful) and those 7 were deleted. The mean time between an actually abusive review being posted and it being marked as such (or me noticing it in the admin panel) is just over 8 hours, which is certainly good enough. In the last few months 5523 people have clicked the “upvote” button on a review, and 1474 people clicked the “downvote” button on a review. Although that’s less voting than I hoped for, it’s certainly enough to give good quality sorting of reviews to end users in most locales. If you have a couple of hours on your hands, gnome-software --mode=moderate is a great way to upvote/downvote a lot of reviews in your locale.

So, onward to 3,000 reviews. Many thanks to those who submitted reviews already — you’re helping new users who don’t know what software they should install.

Rosy Finches

Los Alamos is having an influx of rare rosy-finches (which apparently are supposed to be hyphenated: they're rosy-finches, not finches that are rosy).

[Rosy-finches] They're normally birds of the snowy high altitudes, like the top of Sandia Crest, and quite unusual in Los Alamos. They're even rarer in White Rock, and although I've been keeping my eyes open I haven't seen any here at home; but a few days ago I was lucky enough to be invited to the home of a birder in town who's been seeing great flocks of rosy-finches at his feeders.

There are four types, of which three have ever been seen locally, and we saw all three. Most of the flock was brown-capped rosy-finches, with two each of black rosy-finches and gray-capped rosy-finches. The upper bird at right, I believe, is one of the blacks, but it might be a grey-capped. They're a bit hard to tell apart. In any case, pretty birds, sparrow sized with nice head markings and a hint of pink under the wing, and it was fun to get to see them.

[Roadrunner] The local roadrunner also made a brief appearance, and we marveled at the combination of high-altitude snowbirds and a desert bird here at the same place and time. White Rock seems like much better roadrunner territory, and indeed they're sometimes seen here (though not, so far, at my house), but they're just as common up in the forests of Los Alamos. Our host said he only sees them in winter; in spring, just as they start singing, they leave and go somewhere else. How odd!

Speaking of birds and spring, we have a juniper titmouse determinedly singing his ray-gun song, a few house sparrows are singing sporadically, and we're starting to see cranes flying north. They started a few days ago, and I counted several hundred of them today, enjoying the sunny and relatively warm weather as they made their way north. Ironically, just two weeks ago I saw a group of about sixty cranes flying south -- very late migrants, who must have arrived at the Bosque del Apache just in time to see the first northbound migrants leave. "Hey, what's up, we just got here, where ya all going?"

A few more photos: Rosy-finches (and a few other nice birds).

We also have a mule deer buck frequenting our yard, sometimes hanging out in the garden just outside the house to drink from the heated birdbath while everything else is frozen. (We haven't seen him in a few days, with the warmer weather and most of the ice melted.) We know it's the same buck coming back: he's easy to recognize because he's missing a couple of tines on one antler.

The buck is a welcome guest now, but in a month or so when the trees start leafing out I may regret that as I try to find ways of keeping him from stripping all the foliage off my baby apple tree, like some deer did last spring. I'm told it helps to put smelly soap shavings, like Irish Spring, in a bag and hang it from the branches, and deer will avoid the smell. I will try the soap trick but will probably combine it with other measures, like a temporary fence.

February 03, 2017

Fri 2017/Feb/03

  • Algebraic data types in Rust, and basic parsing

    Some SVG objects have a preserveAspectRatio attribute, which they use to let you specify how to scale the object when it is inserted into another one. You know when you configure the desktop's wallpaper and you can set whether to Stretch or Fit the image? It's kind of the same thing here.

    Examples of        preserveAspectRatio from the SVG spec

    The SVG spec specifies a simple syntax for the preserveAspectRatio attribute; a valid one looks like "[defer] <align> [meet | slice]". An optional defer string, an alignment specifier, and an optional string which can be meet or slice. The alignment specifier can be any one of these strings:

    none
    xMinYMin
    xMidYMin
    xMaxYMin
    xMinYMid
    xMidYMid
    xMaxYMid
    xMinYMax
    xMidYMax
    xMaxYMax

    (Boy oh boy, I just hate camelCase.)

    The C code in librsvg would parse the attribute and encode it as a bitfield inside an int:

    #define RSVG_ASPECT_RATIO_NONE (0)
    #define RSVG_ASPECT_RATIO_XMIN_YMIN (1 << 0)
    #define RSVG_ASPECT_RATIO_XMID_YMIN (1 << 1)
    #define RSVG_ASPECT_RATIO_XMAX_YMIN (1 << 2)
    #define RSVG_ASPECT_RATIO_XMIN_YMID (1 << 3)
    #define RSVG_ASPECT_RATIO_XMID_YMID (1 << 4)
    #define RSVG_ASPECT_RATIO_XMAX_YMID (1 << 5)
    #define RSVG_ASPECT_RATIO_XMIN_YMAX (1 << 6)
    #define RSVG_ASPECT_RATIO_XMID_YMAX (1 << 7)
    #define RSVG_ASPECT_RATIO_XMAX_YMAX (1 << 8)
    #define RSVG_ASPECT_RATIO_SLICE (1 << 30)
    #define RSVG_ASPECT_RATIO_DEFER (1 << 31)

    That's probably not the best way to do it, but it works.

    The SVG spec says that the meet and slice values (represented by the absence or presence of the RSVG_ASPECT_RATIO_SLICE bit, respectively) are only valid if the value of the align field is not none. The code has to be careful to ensure that condition. Those values specify whether the object should be scaled to fit inside the given area, or stretched so that the area slices the object.

    When translating this C code to Rust, I had two choices: keep the C-like encoding as a bitfield, while adding tests to ensure that indeed none excludes meet|slice; or take advantage of the rich type system to encode this condition in the types themselves.

    Algebraic data types

    If one were to not use a bitfield in C, we could represent a preserveAspectRatio value like this:

    typedef struct {
        gboolean defer;

        enum {
            None,
            XminYmin,
            XminYmid,
            XminYmax,
            XmidYmin,
            XmidYmid,
            XmidYmax,
            XmaxYmin,
            XmaxYmid,
            XmaxYmax
        } align;

        enum {
            Meet,
            Slice
        } meet_or_slice;
    } PreserveAspectRatio;
    	    

    One would still have to be careful that meet_or_slice is only taken into account if align != None.

    Rust has algebraic data types; in particular, enum variants or sum types.

    First we will use two normal enums; nothing special here:

    pub enum FitMode {
        Meet,
        Slice
    }
    
    pub enum AlignMode {
        XminYmin,
        XmidYmin,
        XmaxYmin,
        XminYmid,
        XmidYmid,
        XmaxYmid,
        XminYmax,
        XmidYmax,
        XmaxYmax
    }

    And the None value for AlignMode? We'll encode it like this in another type:

    pub enum Align {
        None,
        Aligned {
            align: AlignMode,
            fit: FitMode
        }
    }

    This means that a value of type Align has two variants: None, which has no extra parameters, and Aligned, which has two extra values align and fit. These two extra values are of the "simple enum" types we saw above.

    If you "let myval: Align", you can only access the align and fit subfields if myval is in the Aligned variant. The compiler won't let you access them if myval is None. Your code doesn't need to be "careful"; this is enforced by the compiler.
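
    For example, here is a little sketch (invented for illustration; librsvg's actual code differs) of how calling code must branch before it can look at the fit mode:

    fn wants_slice (align: &Align) -> bool {
        match *align {
            // No alignment at all: there is no fit mode to consult.
            Align::None => false,

            // Only inside this arm do the align/fit values exist.
            Align::Aligned { ref fit, .. } => {
                match *fit {
                    FitMode::Slice => true,
                    FitMode::Meet  => false
                }
            }
        }
    }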

    With this in mind, the final type becomes this:

    pub struct AspectRatio {
        pub defer: bool,
        pub align: Align
    }

    That is, a struct with a boolean field for defer, and an Align variant type for align.

    Default values

    Rust does not let you have uninitialized variables or fields. For a compound type like our AspectRatio above, it would be nice to have a way to create a "default value" for it.

    In fact, the SVG spec says exactly what the default value should be if a preserveAspectRatio attribute is not specified for an SVG object; it's just "xMidYMid", which translates to an enum like this:

    let aspect = AspectRatio {
        defer: false,
        align: Align::Aligned {
            align: AlignMode::XmidYmid,
            fit: FitMode::Meet
        }
    };

    One nice thing about Rust is that it lets us define default values for our custom types. You implement the Default trait for your type, which has a single default() method, and make it return a value of your type initialized to whatever you want. Here is what librsvg uses for the AspectRatio type:

    impl Default for Align {
        fn default () -> Align {
            Align::Aligned {
                align: AlignMode::XmidYmid,
                fit: FitMode::Meet
            }
        }
    }
    
    impl Default for AspectRatio {
        fn default () -> AspectRatio {
            AspectRatio {
                defer: false,
                align: Default::default ()    // this uses the value from the trait implementation above!
            }
        }
    }

    Librsvg implements the Default trait for both the Align variant type and the AspectRatio struct, as it needs to generate default values for both types at different times. Within the implementation of Default for AspectRatio, we invoke the default value for the Align variant type in the align field.
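
    With both implementations in place, getting a fully-initialized default value anywhere in the code is a one-liner:

    let aspect = AspectRatio::default ();

    // Equivalently, letting type inference pick the right impl:
    let aspect: AspectRatio = Default::default ();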

    Simple parsing, the Rust way

    Now we have to implement a parser for the preserveAspectRatio strings that come in an SVG file.

    The Result type

    Rust has a FromStr trait that lets you take in a string and return a Result. Now that we know about variant types, it will be easier to see what Result is about:

    #[must_use]
    enum Result<T, E> {
       Ok(T),
       Err(E),
    }

    This means the following. Result is an enum with two variants, Ok and Err. The first variant contains a value of whatever type you want to mean, "this is a valid parsed value". The second variant contains a value that means, "these are the details of an error that happened during parsing".

    Note the #[must_use] tag in Result's definition. This tells the Rust compiler that return values of this type must not be ignored: you can't ignore a Result returned from a function, as you would be able to do in C. And then, the fact that you must see if the value is an Ok(my_value) or an Err(my_error) means that the only way to ignore an error value is to actually write an empty stub to catch it... at which point you may as well write the error handler properly.
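
    For example, here is a tiny sketch using the standard library's parse() method, which also returns a Result:

    "not a number".parse::<i32> ();          // warning: unused result that must be used

    let _ = "not a number".parse::<i32> ();  // explicit opt-out; compiles quietly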

    The FromStr trait

    But we were talking about the FromStr trait as a way to parse strings into values! This is what it looks like for our AspectRatio:

    pub struct ParseAspectRatioError { ... };
    
    impl FromStr for AspectRatio {
        type Err = ParseAspectRatioError;
    
        fn from_str(s: &str) -> Result<AspectRatio, ParseAspectRatioError> {
            ... parse the string in s ...
    
            if parsing succeeded {
                return Ok (AspectRatio { ... fields set to the right values ... });
            } else {
                return Err (ParseAspectRatioError { ... fields set to error description ... });
            }
        }
    }

    To implement FromStr for a type, you implement a single from_str() method that returns a Result<MyType, MyErrorType>. If parsing is successful you return the Ok variant of Result with your parsed value as Ok's contents. If parsing fails, you return the Err variant with your error type.

    Once you have that implementation, you can simply call "let my_result = AspectRatio::from_str ("xMidYMid");" and piece apart the Result as with any other Rust code. The language provides facilities to chain successful results or errors so that you don't have nested if()s and such.
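
    For example, a quick sketch of both styles, using the types above (the println!s are just for illustration):

    use std::str::FromStr;

    // Piece the Result apart explicitly:
    match AspectRatio::from_str ("defer xMinYMax slice") {
        Ok (aspect) => println! ("parsed fine; defer = {}", aspect.defer),
        Err (_)     => println! ("invalid preserveAspectRatio value")
    }

    // Or chain things: fall back to the default value if parsing fails.
    let aspect = AspectRatio::from_str ("xMidYMid")
        .unwrap_or_else (|_| Default::default ());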

    Testing the parser

    Rust makes it very easy to write tests. Here are some for our little parser above.

    #[test]
    fn parsing_invalid_strings_yields_error () {
        assert_eq! (AspectRatio::from_str (""), Err(ParseAspectRatioError));
    
        assert_eq! (AspectRatio::from_str ("defer foo"), Err(ParseAspectRatioError));
    }
    
    #[test]
    fn parses_valid_strings () {
        assert_eq! (AspectRatio::from_str ("defer none"),
                    Ok (AspectRatio { defer: true,
                                      align: Align::None }));
    
        assert_eq! (AspectRatio::from_str ("xMidYMid"),
                    Ok (AspectRatio { defer: false,
                                      align: Align::Aligned { align: AlignMode::XmidYmid,
                                                              fit: FitMode::Meet } }));
    }

    Using C-friendly wrappers for those fancy Rust enums and structs, the remaining C code in librsvg now parses and uses AspectRatio values that are fully implemented in Rust. As a side benefit, the parser doesn't use temporary allocations; the old C code built up a temporary list from split()ting the string. Rust's iterators and string slices essentially let you split() a string with no temporary values in the heap, which is pretty awesome.
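
    As a sketch of why that works (illustrative, not the actual librsvg parser): split_whitespace() yields &str slices that borrow directly from the input string, so walking the tokens allocates nothing on the heap.

    fn parse_tokens (s: &str) {
        for token in s.split_whitespace () {
            // `token` is a &str pointing into `s`; no heap allocation happens.
            match token {
                "defer" => { /* set the defer flag */ }
                _       => { /* try to match an align or meet|slice value */ }
            }
        }
    }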

February 02, 2017

Introducing Neon – a way to Quickly review stuff and share with your friends

Over at silverorange, we’ve been working on a new product called Neon. The goal is to see if we can create compelling reviews with limited input (often from a phone). Our current take boils a review down to a few basic elements:

  1. Title (what are you reviewing)
  2. Photo
  3. Pros & Cons
  4. A rating from 0 to 10
  5. An emoji to represent how you feel about it

You can also optionally add a longer description, a link to where you can buy it, and the price you paid.

For example, here’s a cutting and insightful review I wrote about a mouse pad.

Neon is in a closed alpha right now, which means that anyone can read the reviews, but to create reviews, you need to be invited to try it out. If you’re interested in trying out the alpha, or being notified when it is opened up to a larger audience, you can leave your email at neon.io.

Why I’m a Social Media Curmudgeon (oh, and follow my blog on Twitter)

I wanted to clarify for myself why it is that I don’t use Facebook or (for the most part) Twitter. Brace yourself for self-justification and equivocation.

First caveat: I actually do have a Twitter account (@sgarrity), but I don’t post anything (sort of, more on this later). I use it to follow people.

I don’t dislike Twitter or Facebook.  They are both amazing systems. They both took blogging and messaging and made them way easier on a massive scale. As a professional web designer and developer, I respect the craft with which both Facebook and Twitter have built their platforms. I regularly rely on open-source projects that both companies produce and finance (thanks!).

Messaging and communication are too important to be controlled by a private corporation. For all of their faults, our phone or text messaging services allow portability. If I have a problem with my phone company, I can take my phone number with me to another company. I can talk to someone regardless of what phone company they have chosen. The same is true of the web and of email (as long as you use your own domain name).

I’m not an extremist. I don’t think you’re doing something wrong if you use these services. I would like to see people use more open alternatives, but I understand that for many, the ease and convenience of platforms like Facebook and Twitter are worth the trade-offs.

All of this is to say that you can now follow @aov_blog on Twitter for updates on my Acts of Volition blog posts.

While I’m contradicting myself, I also have a third Twitter account, @steven_reviews, which I created to share reviews for a new site I’m helping to develop and test at work (more on that soon). While I may opt out of these services personally, if there’s a compelling reason for me to use them at work, or my reluctance proves a significant hindrance for those around me, the scales of the trade-offs may tip in a different direction.

Oh, and I also help manage the @silverorangeinc Twitter account as part of my job.

Now, get off my #lawn.

February 01, 2017

darktable 2.2.3 released

we're proud to announce the third bugfix release for the 2.2 series of darktable, 2.2.3!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.3.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksum is:

$ sha256sum darktable-2.2.3.tar.xz
1b33859585bf283577680c61e3c0ea4e48214371453b9c17a86664d2fbda48a0  darktable-2.2.3.tar.xz
$ sha256sum darktable-2.2.3.dmg
1ebe9a9905b895556ce15d556e49e3504957106fe28f652ce5efcb274dadd41c  darktable-2.2.3.dmg

Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please help us by visiting https://raw.pixls.us/ and making sure that we have the full raw sample set for your camera under CC0 license!

and the changelog as compared to 2.2.2 can be found below.

Bugfixes:

  • Fix fatal crash when generating preview for medium megapixel count (~16MP) Bayer images
  • Properly subtract black levels: respect the even/odd -ness of the raw crop origin point
  • Collection module: fix a few UI quirks

Krita 3.1.2 released!

Krita 3.1.2, released on February 1st 2017, is the first bugfix release in the 3.1 release series. But there are a few extra new features thrown in for good measure!

Audio Support for Animations

Import audio files to help with syncing voices and music. In the demo on the left, Timothée Giet shows how scrubbing and playback work when working with audio.

  • Available audio formats are WAV, MP3, OGG, and FLAC
  • A checkbox was added in the Render animation dialog to include the audio while exporting
  • See the documentation for more information on how to set it up and use the audio import feature.

Audio is not yet available in the Linux appimages. It is an experimental feature, with no guarantee that it works correctly yet — we need your feedback!

Other New Features

  • Ctrl key continue mode for Outline Selection tool: if you press ctrl while drawing an outline selection, the selection isn’t completed when you lift the stylus from the tablet. You can continue drawing the selection from an arbitrary point.
  • Allow deselection by clicking with a selection tool: you can now deselect with a single click with any selection tool.
  • Added a checkbox for enabling HiDPI to the settings dialog.
  • Removed the export to PDF functionality, as it has too many issues right now. (BUG:372439)

There are also a lot of bug fixes. Check the full release notes!

Get the Book!

If you want to see what others can do with Krita, get Made with Krita 2016, the first Krita artbook, now available for pre-order!

Made with Krita 2016

Made with Krita 2016

Give us your feedback!

Almost 1000 people have already filled in the 2017 Krita Survey! Tell us how you use Krita, on what hardware and what you like and don’t like!

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

A snap image for the Ubuntu App Store will be available soon. You can also use the Krita Lime PPA to install Krita 3.1.2 on Ubuntu and derivatives.

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here.

January 31, 2017

Helping new users get on IRC

Fedora Hubs

Hubs and Chat Integration Basics

Hubs uses Freenode IRC for its chat feature. I talked quite a bit about the basics of how we think this could work (see “Fedora Hubs and Meetbot: A Recursive Tale” for all of the details.)

One case that we have to account for is users who are new Fedora contributors and don’t already have an IRC nick, or even any experience with IRC. A tricky thing is that we have to get them identified with NickServ, and keep them identified seamlessly and automatically after netsplits and other events that would cause them to lose their authentication, without their necessarily being aware that the identification process is going on. NickServ auth is kind of an implementation detail of IRC that I don’t think users, particularly those new to and unfamiliar with IRC, need to be concerned with.

Nickserv?

“Nickserv? What’s Nickserv?” you ask. Well. Different IRC networks have a nickserv or something similar to it.

On IRC, people chat using the same nickname and come to be known by their nickname. For example, I’ve been mizmo on freenode IRC for well over a decade and am known by that name, similarly to how people know me by my email address or phone number. IRC is from the old and trusting days of the internet, however, so there’s nothing in IRC to guarantee that I could keep the nick mizmo if I logged out and someone else logged in using ‘mizmo’ as their nickname! In fact, this is/was a common way to attack or annoy people in IRC – steal their nick.

In comes Nickserv to save the day – it’s a bot of sorts that Freenode runs on its IRC network that registers nicknames and provides an authentication system to password protect those names. Someone can still take your nick if you’re offline, but if you’ve registered it, you can use your password and Nickserv to knock them off so you can reclaim your nick.

Yes, IRC definitely has a kind of a weird and rather quaint authentication system. Our challenge is getting people through it to be able to participate without having to worry about it!

Configuration Questions

“Well, wait,” you ask. “If they aren’t even aware of it, how do they set their NickServ password? What if they want to ‘graduate’ to a non-Hubs IRC client and need their NickServ password? What if they want to change their password?”

I considered having Hubs silently auto-generate a NickServ password and manage it on its own, potentially with a way of viewing / changing this password in user settings. I ultimately opted to let users create their own password, deciding that if we silently generated one, they wouldn’t be aware it existed, might end up confused if they tried a different client, and might even post their FAS password over non-SSL plaintext IRC…

(Some other config we should allow eventually that we’ve discussed in the weekly hubs meetings – allowing users to use a different IRC bouncer than Hubs’ for IRC, and offering the connection details for our bouncer so they could use Hubs as a client as well as a third party client without issue.)

The Mockups

So here is an attempt to mock up the workflow of setting up IRC for a Hubs user who has never used IRC before, either on Hubs or anywhere else. Note these do not address the case of someone new to Hubs who hasn’t enabled the IRC feature but who does have a registered nick already that they have not entered into FAS – these will need modification to address that case. (Namely, a link on the first screen to fill out their NickServ auth details in settings.)

Widget in Context

This is what the as-of-yet unactivated IRC widget would look like for a user in context. The user is visiting a team hub, and the admin of that hub configured an IRC widget to be present by default for that hub. To join the chat, the user needs to enable IRC as a feature for their account on hubs so the widget offers to do that for them.

mockup of a fedora hubs screen showing a widget on the right hand side for enabling IRC chat

Chatter thumbnails

The top of the widget has a section that has small thumbnails of the avatars of people currently in the room (my thought is in order of who spoke most recently) with a headcount for the total number of people in the room. The main idea behind this is to try to encourage people to join the conversation – maybe they will see a friend’s avatar and feel like the room could be more approachable, maybe we tap into some primal FOMO (Fear Of Missing Out) by alluding to the activity that is happening without revealing it.

Call to action

Next we have a direct call to action, “Enable chat in Hubs to chat with other people in this hub” with an action-oriented button label, “Enable Hubs Chat.” This, I hope, clearly lets the user know what would happen if they clicked the button.

Hiding control

At the bottom, a small link: “Hide this notification.” Sometimes ‘upsell’ nags can be irritating if you have no intention of participating. If someone is sure they do not want to enable IRC in Hubs (perhaps they just want to use their own client and not bother with Hubs for this,) this will let them hide it and will ‘roll up’ the IRC widget to take up less space.

Here’s a close-up of the widget:

closeup of the IRC activation widget

Registration Wizard

So once you’ve decided to enable IRC in Hubs, then what?

Since selecting and registering a nick I think needs to be a multi-step process (in part because Freenode makes it one), I thought a wizard might be the best approach so this is how I mocked it up. A rough outline of the wizard steps is as follows:

  • Figure out a nickname you want to use and make sure it’s available
  • Provide email address and password for registration
  • Register nickname
  • Verify email

Choosing a nickname

This is a little weird, because of how IRC works. Let me explain how I would like it to work, and how I think it probably will end up working because of how IRC & nickserv work.

I would like to present the user with a bunch of options to pick from for their nickname. I want to do this because I think coming up with a clever nickname involves a high cognitive load they may not be up for, and at the very least offering suggestions could be good brain food for them making up their own (we offer a way to do that on the bottom of this screen, as well.)

Ideally, you wouldn’t offer something to someone and then tell them it’s not available. Unfortunately, I don’t think there’s an easy way to check whether or not a suggested nick is available without actually trying to use it on Freenode and seeing if you’re able to use it or if Nickserv scolds you for trying to use someone else’s registered nick. I don’t know if it’s possible to check nick availability in a less expensive way? These designs are based on the assumption that this is the only way to check nick availability:

mockup: irc nick selection

The model here is that we’d use some heuristics based on your name and FAS username to suggest potential nicks, with a freeform option if you have one in mind. If you click on a name, we then check it for availability, displaying a spinner and a short message while we check. This way, we only check the nicks that you’re actually interested in, rather than wasting cycles on nicks you have no interest in.

If a nick is available, it turns green and we display a “Register” button. If not, a message to let them know it’s not available:

IRC nick availability mockup

Once it’s finished checking on all of the nicks, it might look like this:

IRC nick availability mockup - all lookups complete

Provide email and password for registration

Freenode Nickserv registration requires providing an email address and a password. This screen is where we collect that.

We offer to submit their FAS account email address, but also allow them to freeform the email address they’d prefer to be associated with Freenode. We provide the rationale for why the email address is needed (account recovery) and should probably link to Freenode’s privacy policy, if it has one, covering how the address is used. There are also fields for setting the password.

Verify Email Address

This is the most fragile component of this workflow. Freenode will allow you to keep a registration for 24 hours; if you do not confirm your email address in that time, you will lose your registration. Rather than explain all of this, we just ask that users check their email (supplying the address they gave us so they know which account to check) and verify the email address using the link provided.

Problem: We don’t control these emails, Freenode does. What if the user doesn’t receive the verification email? We can’t resend it because we didn’t send it in the first place. No easy answer here. We might need some language about checking your spam folder, and about how long delivery might take (it seems to be pretty quick in testing.) We could let them go ahead and start chatting while they wait for the email, but what would we do if it never gets verified and they lose the registration? Messy. But here’s the mockup:

Screen for user to confirm email address for freenode registration

Finish

This is just a screen to let them know they’re all set. After clicking through this screen, they should be logged into the IRC channel for the hub they initiated the registration flow from.

IRC registration finish screen

Thoughts?

This is a first cut at mocking up this particular flow. I’m actively working on other ones (including post-setup configuration, and turning IRC on/off on individual hubs, which is the same as joining or leaving a channel.) If you have any ideas for solving some of the issues I brought up, or any feedback at all, I’d love to hear it!

I find Jeff Atwood’s pragmatic reaction to the Trump presidency hopeful in its specific plans and actions. Well said.

January 30, 2017

Do not show up late to a meeting with a coffee.

As a general rule: Do not show up late to a meeting with a coffee.

I did this today, but you never should. The message is clear.

darktable 2.2.2 released

we're proud to announce the second bugfix release for the 2.2 series of darktable, 2.2.2!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.2.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

766d7d734e7bd5a33f6a6932a43b15cc88435c64ad9a0b20410ba5b4706941c2 darktable-2.2.2.tar.xz
52fd0e9a8bb74c82abdc9a88d4c369ef181ef7fe2b946723c5706d7278ff2dfb darktable-2.2.2.dmg

Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please help us by visiting https://raw.pixls.us/ and making sure that we have the full raw sample set for your camera under CC0 license!

and the changelog as compared to 2.2.1 can be found below.

New features:

  • color look up table module: include preset for helmholtz/kohlrausch monochrome
  • Lens module: re-enable tiling
  • Darkroom: fix some artefacts in the preview image (not the main view!)
  • DNG decoder: support reading one more white balance encoding method
  • Mac: display an error when too old OS version is detected
  • Some documentation and tooltips updates

Bugfixes:

  • Main view no longer grabs focus when mouse enters it. Prevents accidental catastrophic image rating loss.
  • OSX: fix bauhaus slider popup keyboard input
  • Don't write all XMP when detaching tag
  • OSX: don't do PPD autodetection, gtk did their thing again.
  • Don't show database lock popup when DBUS is used to start darktable
  • Actually delete duplicate's XMP when deleting duplicated image
  • Ignore UTF-8 BOM in GPX files
  • Fix import of LR custom tone-curve
  • Overwrite Xmp rating from raw when exporting
  • Some memory leak fixes
  • Lua: sync XMPs after some tag manipulations
  • Explicitly link against math library

Base Support:

  • Canon PowerShot SX40 HS (dng)
  • Fujifilm X-E2S
  • Leica D-LUX (Typ 109) (4:3, 3:2, 16:9, 1:1)
  • Leica X2 (dng)
  • Nikon LS-5000 (dng)
  • Nokia Lumia 1020 (dng)
  • Panasonic DMC-GF6 (16:9, 3:2, 1:1)
  • Pentax K-5 (dng)
  • Pentax K-r (dng)
  • Pentax K10D (dng)
  • Sony ILCE-6500

Noise Profiles:

  • Fujifilm X-M1
  • Leica X2
  • Nikon Coolpix A
  • Panasonic DMC-G8
  • Panasonic DMC-G80
  • Panasonic DMC-G81
  • Panasonic DMC-G85

January 27, 2017

Making aliases for broken fonts

A web page I maintain (originally designed by someone else) specifies the Times font. On all my Linux systems, Times displays impossibly tiny, at least two sizes smaller than any other font that's ostensibly the same size. So the page is hard to read. I'm forever tempted to get rid of that font specifier, but I have to assume that other people in the organization like the professional look of Times, and that this pathological smallness of Times and Times New Roman is just a Linux font quirk.

In that case, a better solution is to alias it, so that pages that use Times will choose some larger, more readable font on my system. How to do that was explained in this excellent, clear post: How To Set Default Fonts and Font Aliases on Linux.

It turned out Times came from the gsfonts package, while Times New Roman came from msttcorefonts:

$ fc-match Times
n021003l.pfb: "Nimbus Roman No9 L" "Regular"
$ dpkg -S n021003l.pfb
gsfonts: /usr/share/fonts/type1/gsfonts/n021003l.pfb
$ fc-match "Times New Roman"
Times_New_Roman.ttf: "Times New Roman" "Normal"
$ dpkg -S Times_New_Roman.ttf
dpkg-query: no path found matching pattern *Times_New_Roman.ttf*
$ locate Times_New_Roman.ttf
/usr/share/fonts/truetype/msttcorefonts/Times_New_Roman.ttf
(dpkg -S doesn't find the file because msttcorefonts is a package that downloads a bunch of common fonts from Microsoft. Debian can't distribute the font files directly due to licensing restrictions.)

Removing gsfonts fonts isn't an option; aside from some documents and web pages possibly not working right (if they specify Times or Times New Roman and don't provide a fallback), removing gsfonts takes gnumeric and abiword with it, and I do occasionally use gnumeric. And I like having the msttcorefonts installed (hey, gotta have Comic Sans! :-) ). So aliasing the font is a better bet.

Following Chuan Ji's page, linked above, I edited ~/.config/fontconfig/fonts.conf (I already had one, specifying fonts for the fantasy and cursive web families), and added these stanzas:

    <match>
        <test name="family"><string>Times New Roman</string></test>
        <edit name="family" mode="assign" binding="strong">
            <string>DejaVu Serif</string>
        </edit>
    </match>
    <match>
        <test name="family"><string>Times</string></test>
        <edit name="family" mode="assign" binding="strong">
            <string>DejaVu Serif</string>
        </edit>
    </match>

The page says to log out and back in, but I found that restarting firefox was enough. Now I could load up a page that specified Times or Times New Roman and the text is easily readable.

Install openSUSE Tumbleweed + KDE on MacBook 2015

It is pretty easy to install openSUSE Linux on a MacBook as its operating system. However, there are some pitfalls which can cause trouble. This article gives some hints about a dual boot setup with OS X 10.10 and the (at the time of writing) current openSUSE Tumbleweed 20170104 (oS TW) on a MacBookPro from early 2015. A recent Linux kernel, like the one in TW, is advisable as it provides better hardware support.

The LiveCD can be downloaded from www.opensuse.org and written with the ImageWriter GUI to a ~1GB USB stick. I chose the Live KDE one, and it ran well on a first test. During boot, after the first sound and the display lighting up, hold the Option/alt key and wait for the disk selection icon. Put the USB key with Linux in a USB port, wait until the removable media icon appears, and select it for boot. For me all went fine: the internal display, sound, touchpad and keyboard were detected and worked well.

After that test, it was a good time to back up all data from the internal flash drive. I wrote a compressed disk image to a stick using the unix dd command. With that image and the live media I was able to recover in case anything went wrong.

It is not easy to satisfy OS X with its journaled HFS and the newly introduced logical volume layout, which comes with a separate repair partition directly after the main OS partition. That combination is pretty fragile, but should not be touched. The rescue partition can be booted with the command key + r pressed. External tools failed for me, so I booted into rescue mode and used OS X's diskutil, or its Disk Utility GUI counterpart. The tool allows splitting the disk into several partitions; the EFI and rescue ones are hidden in the GUI. The newly created additional partitions can be formatted to exfat and later be modified for the Linux installation.

One additional HFS partition was created for sharing data between OS X and Linux with the comfortable Unix attributes. The well-known exfat, used by many bigger USB sticks, is a possible option as well, but needs the exfat-kmp kernel module, which is not installed by default due to Microsoft's patent license policy for the file system. In order to write to HFS from Linux, any HFS partition must have the journal feature switched off. This can be done inside the OS X Disk Utility GUI by selecting the data partition, holding the alt key, and searching the menu for the disable journaling entry.

After rebooting into the Live media, I clicked on the Install icon on the desktop background and started openSUSE's YaST tool. Depending on the available space, it might be a good idea to disable the btrfs filesystem snapshot feature, as it can eat up lots of disk space during each update. Another pitfall is the boot stage: select the secure GrubEFI mode there, as Grub needs special handling for the required EFI boot process. That's it. Finish the install and you should be able to reboot into Linux with the alt key.

My MacBook unfortunately has a defect: its Boot Manager is very slow. Erasing and reinstalling OS X did not fix that issue. To circumvent it, I need to reset the NVRAM by pressing alt+cmd+r+p at boot start for around 14 seconds, until the display gets dark, then hold alt at the next boot sound and select the EFI TW disk in the Apple Boot Manager; after that the boot process continues fluently. Without that extra step, the keyboard and mouse might not respond in Linux at all, except for the power button. A hot reboot from Linux works fine; OS X does a cold reboot and needs the extra sequence.

KDE’s Plasma needs some configuration to run properly on a high resolution display. Beyond that, additional monitors can be connected and easily configured with the kscreen SystemSettings module. Hibernate works fine. Currently the notebook's SD slot is ignored, and the facetime camera has no ready openSUSE packages. Battery run time can be extended by spartan power consumption (less brightness, fewer USB devices, and pulseaudio -k; check with powertop), but it is not too far from OS X anyway.

January 26, 2017

FreeCAD Arch development news

It's been a long time since I posted here. Writing regularly on a blog proves more difficult than I thought. I've had this blog for a long time, but never really tried to constrain myself to write regularly. You look elsewhere a little bit, and when you get back to it, two months have gone by... Since this post is aimed...

Bullet 2.86 with pybullet for robotics, deep learning, VR and haptics

Bullet 2.86 has improved Python bindings, pybullet, for robotics, machine learning and VR; see the pybullet quickstart guide.

Furthermore, the PGS LCP constraint solver has a new option to terminate as soon as the residual (error) is below a specified tolerance (instead of terminating after a fixed number of iterations). There is preliminary support to load some MuJoCo MJCF xml files (see data/mjcf), and there are haptic experiments with a VR glove. Get the latest release from github here.


January 25, 2017

One worry less

This year, we've got elections in the Netherlands. Which means, I have to choose where my vote goes. And that can be a trifle difficult.

After fifteen years in the free software world, I'm a certified leftie. I'm certainly not going to vote for the conservative party (CDA, formally Christian, which has been moving into Tea Party territory for a couple of years now), and I'm not going to vote for the Liberal Party (VVD) -- that's only the right party for someone who has got more than half a million in the bank. Let's not even begin to talk about the Dutch Fascist Movement (PVV). The left-liberals (D66) are a bit too anti-religion, and, shockingly, being a sub-deacon in the local Orthodox Church, I don't feel at home there. That leaves, more or less, the Socialist Party, the Labour Party and the United Christian party. The Socialist Party has never impressed me with their policies. That leaves two...

Yeah, you know, I'm a Christian. If someone's got a problem with that, that's their problem. I'm also a socialist. If someone's got a problem with that, that's their problem. If someone thinks I'm an ignorant idiot because of either, their problem.

But today, the Labour Party minister for international cooperation, Lilianne Ploumen, has announced an effort to create a fund to counter Trump's so-called "global gag rule". That means that any United States-funded organization which so much as cooperates with any organization involved in so-called "family planning" will lose its funding. She is working to restore the funding.

News headlines make this all about abortion... Which is in any case not something anyone with testicles should concern themselves with. But it isn't just that, and making it only about abortion makes it narrow and easy to attack. As did our local United Christian party, which will never again receive my vote. It's also about education, it's also about contraceptives, it's about helping those Nepali teenage girls who are locked in a cow shed because they're menstruating. It's about helping girls who get raped by their family get back to school.

It's about making the world a better and safer and healthier place for the girls and women who cannot defend themselves.

And I don't have to worry about my vote anymore. That's settled.

Artistic Constraints

I have moved most of my sharing with the world to the walled gardens of Facebook, Google+ and others because of their convenience, but for an old fart like me it's way more appropriate to do it the old way. So the thing to share today is quite topical. Mark Ferrari (of Lucasarts fame) shares his experience with 8-bit art and the creative constraint. The gold isn't so much in what he says as in the art he shares, made over the years, that flourished within those constraints.

8 Bit Constraints

Mark is clearly a master of lighting, and none of this trickery would have any appeal if he weren't so great at mixing the secondary lights so well; check out these amazing color cycling demos.

Actual image I found explaining how I anti-aliased in GIMP. Circa 2002.

As far as I ever got with 8bit animation.

January 24, 2017

Changing a website using the developer console

If you need to quickly change a website, you can use a combination of CSS/XPath selectors and a function to hide/remove DOM nodes. I had to find my way through a long list of similar items, which was really hard to scan just by looking at it.

For example, you can simply delete all links you’re not interested in by a simple combination of selector and function:

$x('//li/a[contains(., "not-interesting")]').map(function(n) { n.parentNode.removeChild(n) })

If you’ve made a mistake, reload the website.


We’re doing a User Survey!

While we’re still working on Vector, Text and Python Scripting, we’ve already decided: this year, we want to spend time on stabilizing and polishing Krita!

Now, one of the important elements in making Krita stable is bug reports. And we’ve got a lot of those! But with some bug reports, we’re kind of stuck: we cannot figure out what type of hardware or drivers is causing them. So we’re asking for your help.

We’ve made a Krita user survey.

In it, we ask things like what type of hardware you have, and whether you have trouble with certain hardware. That way we can figure out which drivers and hardware are problematic and maybe find workarounds. There are also some other questions, like what you make with Krita and how you get your Krita news.

January 23, 2017

Testing a GitHub Pull Request

Several times recently I've come across someone with a useful fix to a program on GitHub, for which they'd filed a GitHub pull request.

The problem is that GitHub doesn't give you any link on the pull request to let you download the code in that pull request. You can get a list of the checkins inside it, or a list of the changed files so you can view the differences graphically. But if you want the code on your own computer, so you can test it, or use your own editors and diff tools to inspect it, it's not obvious how. That this is a problem is easily seen with a web search for something like download github pull request -- there are huge numbers of people asking how, and most of the answers are vague and unclear.

That's a shame, because it turns out it's easy to pull a pull request. You can fetch it directly with git into a new branch as long as you have the pull request ID. That's the ID shown on the GitHub pull request page:

[GitHub pull request screenshot]

Once you have the pull request ID, choose a new name for your branch, then fetch it:

git fetch origin pull/PULL-REQUEST_ID/head:NEW-BRANCH-NAME
git checkout NEW-BRANCH-NAME

Then you can view diffs with something like git difftool NEW-BRANCH-NAME..master

Easy! GitHub should give a hint of that on its pull request pages.

Fetching a Pull Request diff to apply it to another tree

But shortly after I learned how to apply a pull request, I had a related but different problem in another project. There was a pull request for an older repository, but the part it applied to had since been split off into a separate project. (It was an old pull request that had fallen through the cracks, and as a new developer on the project, I wanted to see if I could help test it in the new repository.)

You can't pull a pull request that's for a whole different repository. But what you can do is go to the pull request's page on GitHub. There are 3 tabs: Conversation, Commits, and Files changed. Click on Files changed to see the diffs visually.

That works if the changes are small and only affect a few files (which fortunately was the case this time). It's not so great if there are a lot of changes or a lot of files affected. I couldn't find any "Raw" or "download" button that would give me a diff I could actually apply. You can select all and then paste the diffs into a local file, but you have to do that separately for each file affected. It might be, if you have a lot of files, that the best solution is to check out the original repo, apply the pull request, generate a diff locally with git diff, then apply that diff to the new repo. Rather circuitous. But with any luck that situation won't arise very often.

Update: thanks very much to Houz for the solution! (In the comments, below.) Just append .diff or .patch to the pull request URL, e.g. https://github.com/OWNER/REPO/pull/REQUEST-ID.diff which you can view in a browser or fetch with wget or curl.
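If you want to script that, here's a minimal Python sketch (OWNER, REPO and REQUEST-ID are placeholders, as in the URL above):

    # Save a pull request as a unified diff, then apply it with
    # `git apply pr.diff` (or `patch -p1 < pr.diff`).
    try:
        from urllib.request import urlopen   # Python 3
    except ImportError:
        from urllib2 import urlopen          # Python 2

    url = "https://github.com/OWNER/REPO/pull/REQUEST-ID.diff"
    with open("pr.diff", "wb") as f:
        f.write(urlopen(url).read())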

Interview with Adam

Could you tell us something about yourself?

Good day. My name is Adam and I am a 26-year-old person who is trying to learn how to draw…

Do you paint professionally, as a hobby artist, or both?

Hobby 🙂

What genre(s) do you work in?

I try to draw everything, I don’t want to get stuck in drawing only one thing over and over again and leave behind everything else.

Whose work inspires you most — who are your role models as an artist?

People who inspired me when I was younger … much younger … were Satoshi Urushihara, Masamune Shirow and DragonBall artists.

How and when did you get to try digital painting for the first time?

My first adventure with digital painting was about 4-5 years ago, when I bought my first small Wacom Bamboo tablet that I am still using.

How did you find out about Krita?

A friend of mine mentioned it.

What was your first impression?

I uninstalled it and then came back after a while 😉

What do you love about Krita?

Everything!

What do you think needs improvement in Krita? Is there anything that really annoys you?

Maybe make it less laggy, but that can be the fault of my laptop.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

The featured image. Not really my favourite, but I don’t have anything else worth showing!

What techniques and brushes did you use in it?

It was all random without any technique! I used Pencil 2B and pencil texture, nothing more or less.

Anything else you’d like to share?

Have a nice day everyone and let the Krita grow 😀

January 19, 2017

Plotting Shapes with Python Basemap without Shapefiles

In my article on Plotting election (and other county-level) data with Python Basemap, I used ESRI shapefiles for both states and counties.

But one of the election data files I found, OpenDataSoft's USA 2016 Presidential Election by county, had embedded county shapes, available either as CSV or as GeoJSON. (I used the CSV version, but inside the CSV the geo data are encoded as JSON, so you'll need JSON decoding either way. But that's no problem.)

Just about all the documentation I found on coloring shapes in Basemap assumed that the shapes were defined as ESRI shapefiles. How do you draw shapes if you have latitude/longitude data in a more open format?

As it turns out, it's quite easy, but it took a fair amount of poking around inside Basemap to figure out how it worked.

In the loop over counties in the US in the previous article, the end goal was to create a matplotlib Polygon and use that to add a Basemap patch. But matplotlib's Polygon wants map coordinates, not latitude/longitude.

If m is your basemap (i.e. you created the map with m = Basemap( ... )), you can translate coordinates like this:

    (mapx, mapy) = m(longitude, latitude)

So once you have a region as a list of (longitude, latitude) coordinate pairs, you can create a colored, shaped patch like this:

    for coord_pair in region:
        coord_pair[0], coord_pair[1] = m(coord_pair[0], coord_pair[1])
    poly = Polygon(region, facecolor=color, edgecolor=color)
    ax.add_patch(poly)

Working with the OpenDataSoft data file was actually a little harder than that, because the list of coordinates was JSON-encoded inside the CSV file, so I had to decode it with json.loads(county["Geo Shape"]). Once decoded, it had some counties as a Polygon, a list of lists (allowing for discontiguous outlines), and others as a MultiPolygon, a list of lists of lists (I'm not sure why, since the Polygon format already allows for discontiguous boundaries).
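To cope with both encodings, it helps to normalize them into a single flat list of outlines before making patches. Here's a minimal sketch with a hypothetical helper name, assuming the decoded "Geo Shape" values use the GeoJSON-style "type" and "coordinates" keys:

    import json

    def region_outlines(geo_shape_str):
        '''Decode a "Geo Shape" cell into a flat list of outlines,
           each outline being a list of [longitude, latitude] pairs.'''
        shape = json.loads(geo_shape_str)
        if shape["type"] == "Polygon":
            return shape["coordinates"]      # already a list of outlines
        if shape["type"] == "MultiPolygon":
            # One extra nesting level: flatten the list of polygons.
            return [outline for polygon in shape["coordinates"]
                            for outline in polygon]
        raise ValueError("unexpected shape type %r" % shape["type"])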

[Blue-red-purple 2016 election map]

And a few counties were missing, so there were blanks on the map, which show up as white patches in this screenshot. The counties missing data either have inconsistent formatting in their coordinate lists, or have only one coordinate pair. They include Washington, Virginia; Roane, Tennessee; Schley, Georgia; Terrell, Georgia; Marshall, Alabama; Williamsburg, Virginia; and Pike, Georgia; plus Oglala Lakota (which is clearly meant to be Oglala, South Dakota), and all of Alaska.

One thing about crunching data files from the internet is that there are always a few special cases you have to code around. And I could have gotten those coordinates from the census shapefiles; but as long as I needed the census shapefile anyway, why use the CSV shapes at all? In this particular case, it makes more sense to use the shapefiles from the Census.

Still, I'm glad to have learned how to use arbitrary coordinates as shapes, freeing me from the proprietary and annoying ESRI shapefile format.

The code: Blue-red map using CSV with embedded county shapes

January 18, 2017

Comics page…

A little post to tell you that I finally added a page on my website with all my comics. Better late than never.
They were all released previously on my blog, and some of them were missing the license info which are now on this page. Also I re-licensed some pages from CC BY-NC-ND to CC BY-SA some time ago in a blog post, this page makes it more obvious.

Link to my comics, enjoy 🙂

(… yes, I know, I really should update my website…)

January 14, 2017

Plotting election (and other county-level) data with Python Basemap

After my arduous search for open 2016 election data by county, as a first test I wanted one of those red-blue-purple charts of how Democratic or Republican each county's vote was.

I used the Basemap package for plotting. It used to be part of matplotlib, but it's been split off into its own toolkit, grouped under mpl_toolkits: on Debian, it's available as python-mpltoolkits.basemap, or you can find Basemap on GitHub.

It's easiest to start with the fillstates.py example that shows how to draw a US map with different states colored differently. You'll need the three shapefiles (because of ESRI's silly shapefile format): st99_d00.dbf, st99_d00.shp and st99_d00.shx, available in the same examples directory.

Of course, to plot counties, you need county shapefiles as well. The US Census has county shapefiles at several different resolutions (I used the 500k version). Then you can plot state and counties outlines like this:

from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

def draw_us_map():
    # Set the lower left and upper right limits of the bounding box:
    lllon = -119
    urlon = -64
    lllat = 22.0
    urlat = 50.5
    # and calculate a centerpoint, needed for the projection:
    centerlon = float(lllon + urlon) / 2.0
    centerlat = float(lllat + urlat) / 2.0

    m = Basemap(resolution='i',  # crude, low, intermediate, high, full
                llcrnrlon = lllon, urcrnrlon = urlon,
                lon_0 = centerlon,
                llcrnrlat = lllat, urcrnrlat = urlat,
                lat_0 = centerlat,
                projection='tmerc')

    # Read state boundaries.
    shp_info = m.readshapefile('st99_d00', 'states',
                               drawbounds=True, color='lightgrey')

    # Read county boundaries
    shp_info = m.readshapefile('cb_2015_us_county_500k',
                               'counties',
                               drawbounds=True)

if __name__ == "__main__":
    draw_us_map()
    plt.title('US Counties')
    # Get rid of some of the extraneous whitespace matplotlib loves to use.
    plt.tight_layout(pad=0, w_pad=0, h_pad=0)
    plt.show()
[Simple map of US county borders]

Accessing the state and county data after reading shapefiles

Great. Now that we've plotted all the states and counties, how do we get a list of them, so that when I read out "Santa Clara, CA" from the data I'm trying to plot, I know which map object to color?

After calling readshapefile('st99_d00', 'states'), m has two new members, both lists: m.states and m.states_info.

m.states_info[] is a list of dicts mirroring what was in the shapefile. For the Census state list, the useful keys are NAME, AREA, and PERIMETER. There's also STATE, which is an integer (not restricted to 1 through 50) but I'll get to that.

If you want the shape for, say, California, iterate through m.states_info[] looking for the one where m.states_info[i]["NAME"] == "California". Note the index i; the shape coordinates will be in m.states[i] (in basemap map coordinates, not latitude/longitude).
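For instance, here's a tiny helper along those lines (a sketch, with a hypothetical name; remember that a state can own several shapes, one per island or disjoint region):

    def shapes_for_state(m, name):
        '''Return all map-coordinate shapes for one state, e.g. "California".'''
        return [m.states[i]
                for i, info in enumerate(m.states_info)
                if info["NAME"] == name]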

Correlating states and counties in Census shapefiles

County data is similar, with county names in m.counties_info[i]["NAME"]. Remember that STATE integer? Each county has a STATEFP, m.counties_info[i]["STATEFP"] that matches some state's m.states_info[i]["STATE"].

But doing that search every time would be slow. So right after calling readshapefile for the states, I make a table of states. Empirically, STATE in the state list goes up to 72. Why 72? Shrug.

    MAXSTATEFP = 73
    states = [None] * MAXSTATEFP
    for state in m.states_info:
        statefp = int(state["STATE"])
        # Many states have multiple entries in m.states (because of islands).
        # Only add it once.
        if not states[statefp]:
            states[statefp] = state["NAME"]

That'll make it easy to look up a county's state name quickly when we're looping through all the counties.

Calculating colors for each county

Time to figure out the colors from the Deleetdk election results CSV file. Reading lines from the CSV file into a dictionary is superficially easy enough:

    import csv  # stdlib; DictReader maps each row to a dict keyed by header

    fp = open("tidy_data.csv")
    reader = csv.DictReader(fp)

    # Make a dictionary of all "county, state" and their colors.
    county_colors = {}
    for county in reader:
        # What color is this county?
        pop = float(county["votes"])  # total votes cast in this county
        blue = float(county["results.clintonh"])/pop
        # Red fraction: assuming the Republican counterpart field is
        # named "results.trumpd", mirroring "results.clintonh" above.
        red = float(county["results.trumpd"])/pop
        county_colors["%s, %s" % (county["name"], county["State"])] \
            = (red, 0, blue)

But in practice, that wasn't good enough, because the county names in the Deleetdk data didn't always match the official Census county names.

Fuzzy matches

For instance, the CSV file had no results for Alaska or Puerto Rico, so I had to skip those. Non-ASCII characters were a problem: "Doña Ana" county in the census data was "Dona Ana" in the CSV. I had to strip off " County", " Borough" and similar terms: "St Louis" in the census data was "St. Louis County" in the CSV. Some names were capitalized differently, like PLYMOUTH vs. Plymouth, or Lac Qui Parle vs. Lac qui Parle. And some names were just different, like "Jeff Davis" vs. "Jefferson Davis".

To get around that I used SequenceMatcher to look for fuzzy matches when I couldn't find an exact match:

from difflib import SequenceMatcher

def fuzzy_find(s, slist):
    '''Try to find a fuzzy match for s in slist.
    '''
    best_ratio = -1
    best_match = None

    ls = s.lower()
    for ss in slist:
        r = SequenceMatcher(None, ls, ss.lower()).ratio()
        if r > best_ratio:
            best_ratio = r
            best_match = ss
    if best_ratio > .75:
        return best_match
    return None

Correlate the county names from the two datasets

It's finally time to loop through the counties in the map to color and plot them.

Remember STATE vs. STATEFP? It turns out there are a few counties in the census county shapefile with a STATEFP that doesn't match any STATE in the state shapefile. Mostly they're in the Virgin Islands and I don't have election data for them anyway, so I skipped them for now. I also skipped Puerto Rico and Alaska (no results in the election data) and counties that had no corresponding state: I'll omit that code here, but you can see it in the final script, linked at the end.

    for i, county in enumerate(m.counties_info):
        countyname = county["NAME"]
        try:
            statename = states[int(county["STATEFP"])]
        except IndexError:
            print countyname, "has out-of-index statefp of", county["STATEFP"]
            continue

        countystate = "%s, %s" % (countyname, statename)
        try:
            ccolor = county_colors[countystate]
        except KeyError:
            # No exact match; try for a fuzzy match
            fuzzyname = fuzzy_find(countystate, county_colors.keys())
            if fuzzyname:
                ccolor = county_colors[fuzzyname]
                county_colors[countystate] = ccolor
            else:
                print "No match for", countystate
                continue

        countyseg = m.counties[i]
        poly = Polygon(countyseg, facecolor=ccolor)  # edgecolor="white"
        ax.add_patch(poly)

Moving Hawaii

Finally, although the CSV didn't have results for Alaska, it did have Hawaii. To display it, you can move it when creating the patches:

    countyseg = m.counties[i]
    if statename == 'Hawaii':
        # Shift Hawaii's shapes east and south so they fit in the frame.
        countyseg = [(x + 5750000, y - 1400000) for x, y in countyseg]
    poly = Polygon(countyseg, facecolor=ccolor)
    ax.add_patch(poly)
The offsets are in map coordinates and are empirical; I fiddled with them until Hawaii showed up at a reasonable place.

[Blue-red-purple 2016 election map]

Well, that was a much longer article than I intended. Turns out it takes a fair amount of code to correlate several datasets and turn them into a map. But a lot of the work will be applicable to other datasets.

Full script on GitHub: Blue-red map using Census county shapefile

January 12, 2017

Getting Election Data, and Why Open Data is Important

Back in 2012, I got interested in fiddling around with election data as a way to learn about data analysis in Python. So I went searching for results data on the presidential election. And got a surprise: it wasn't available anywhere in the US. After many hours of searching, the only source I ever found was at the UK newspaper, The Guardian.

Surely in 2016, we're better off, right? But when I went looking, I found otherwise. There's still no official source for US election results data; there isn't even a source as reliable as The Guardian this time.

You might think Data.gov would be the place to go for official election results, but no: searching for 2016 election on Data.gov yields nothing remotely useful.

The Federal Election Commission has an election results page, but it only goes up to 2014 and only includes the Senate and House, not presidential elections. Archives.gov has popular vote totals for the 2012 election but not the current one. Maybe in four years, they'll have some data.

After striking out on official US government sites, I searched the web. I found a few sources, none of them even remotely official.

Early on I found Simon Rogers, How to Download County-Level Results Data, which leads to GitHub user tonmcg's County Level Election Results 12-16. It's a comparison of Democratic vs. Republican votes in the 2012 and 2016 elections (I assume that means votes for that party's presidential candidate, though the field names don't make that entirely clear), with no information on third-party candidates.

KidPixo's Presidential Election USA 2016 on GitHub is a little better: the fields make it clear that it's recording votes for Trump and Clinton, but there's still no third-party information. It's also scraped from the New York Times, and it includes the scraping code so you can check it and have some confidence in the source of the data.

Kaggle claims to have election data, but you can't download their datasets or even see what they have without signing up for an account. Ben Hamner has some publicly available Kaggle data on GitHub, but only for the primary. I also found several companies selling election data, and several universities that had datasets available for researchers with accounts at that university.

The most complete dataset I found, and the only open one that included third party candidates, was through OpenDataSoft. Like the other two, this data is scraped from the NYT. It has data for all the minor party candidates as well as the majors, plus lots of demographic data for each county in the lower 48, plus Hawaii, but not the territories, and the election data for all the Alaska counties is missing.

You can get it either from a GitHub repo, Deleetdk's USA.county.data (look in inst/ext/tidy_data.csv), or, if you want a larger version with geographic shape data included, by clicking through several other opendatasoft pages, which eventually gets you to an export page, USA 2016 Presidential Election by county, where you can download CSV, JSON, GeoJSON and other formats.

The OpenDataSoft data file was pretty useful, though it had gaps (for instance, there's no data for Alaska). I was able to make my own red-blue-purple plot of county voting results (I'll write separately about how to do that with python-basemap), and to play around with statistics.

Implications of the lack of open data

But the point my search really brought home: By the time I finally found a workable dataset, I was so sick of the search, and so relieved to find anything at all, that I'd stopped being picky about where the data came from. I had long since given up on finding anything from a known source, like a government site or even a newspaper, and was just looking for data, any data.

And that's not good. It means that a lot of the people doing statistics on elections are using data from unverified sources, probably copied from someone else who claimed to have scraped it, using unknown code, from some post-election web page that likely no longer exists. Is it accurate? There's no way of knowing.

What if someone wanted to spread misinformation? There's a hunger for data, particularly on something as important as a US Presidential election. Looking at Google's suggested results and "Searches related to" made it clear that it wasn't just me: there are a lot of people searching for this information and not finding it through official sources.

If I were a foreign power wanting to spread disinformation, providing easily available data files -- to fill the gap left by the US Government's refusal to do so -- would be a great way to mislead people. I could put anything I wanted in those files: there's no way of checking them against official results since there are no official results. Just make sure the totals add up to what people expect to see. You could easily set up an official-looking site and put made-up data there, and it would look a lot more real than all the people scraping from the NYT.

If our government -- or newspapers, or anyone else -- really wanted to combat "fake news", they should take open data seriously. They should make datasets for important issues like the presidential election publicly available, as soon as possible after the election -- not four years later when nobody but historians cares anymore. Without that, we're leaving ourselves open to fake news and fake data.

New Year, New Raw Samples Website

A replacement for rawsamples.ch

Happy New Year, and I hope everyone has had a wonderful holiday!

We’ve been busy working on various things ourselves, including migrating RawPedia to a new server as well as building a replacement raw sample database/website to alleviate the problems that rawsamples.ch was having…

rawsamples.ch Replacement

Rawsamples.ch is a website with the goal to:

…provide RAW-Files of nearly all available Digitalcameras mainly to software-developers. [sic]

It was created by Jakob Rohrbach and had been running since March 2007, having amassed over 360 raw files in that time from various manufacturers and cameras. Unfortunately, back in 2016 the site was hit with a SQL-injection that ended up corrupting the database for the Joomla install that hosted the site. To compound the pain, there were no database backups… :(

On the good side, the PIXLS.US community has some dangerous folks with idle hands. Our friendly, neighborhood @andabata (Kees Guequierre) had some time off at the end of the year and a desire to build something. You may know @andabata as the fellow responsible for the super-useful dtstyle website, which is chock full of darktable styles to peruse and download (if you haven’t heard of it before – you’re welcome!). He’s also my go-to for macro photography and is responsible for this awesome image used on a slide for the Libre Graphics Meeting:

PIXLS.US LGM Slide

Luckily, he decided to build a site where contributors could upload sample raw files from their cameras for everyone to use – particularly developers. We downloaded the archive of the raw files kept at rawsamples.ch to include with files that we already had. The biggest difference between the files from rawsamples.ch and raw.pixls.us is the licensing. The existing files, and the preference for any new contributions, are licensed as Creative Commons Zero - Public Domain (as opposed to CC-BY-NC-SA).

After some hacking, with input and guidance from darktable developer Roman Lebedev, the site was finally ready. The repository for it can be found on GitHub: raw.pixls.us repo.

raw.pixls.us

The site is now live at https://raw.pixls.us.

You can look at the submitted files and search/sort through all of them (and download the ones you want).

In addition to browsing the archive, it would be fantastic if you were able to supplement the database by uploading sample images. Many of the files from the rawsamples.ch archive are licensed CC-BY-NC-SA, but we’d rather have the files licensed Creative Commons Zero - Public Domain. CC0 is preferable because if the sample raw files are separated from the database, they can safely be redistributed without attribution. So if you have a camera that is already in the list with the more restrictive license, then please consider uploading a replacement for us!

We are looking for shots that are:

  • Lens mounted on the camera
  • Lens cap off
  • In focus
  • Properly exposed (not over/under)
  • Landscape orientation
  • Licensed under the Creative Commons Zero

We are not looking for:

  • Series of images with different ISO, aperture, shutter, wb, or lighting
    (Even if it’s a shot of a color target)
  • DNG files created with Adobe DNG Converter

Please take a moment and see if you can provide samples to help the developers!

Wed 2017/Jan/11

  • Reproducible font rendering for librsvg's tests

    The official test suite for SVG 1.1 consists of a bunch of SVG test files that use many of the features in the SVG specification. The test suite comes with reference PNGs: your SVG renderer is supposed to produce images that look like those PNGs.

    I've been adding test files from that test suite to librsvg as I convert things to Rust, and also when I refactor code that touches code for a particular kind of SVG element or filter.

    The SVG test suite is not a drop-in solution, however. The spec does not specify pixel-exact rendering. It doesn't mandate any specific kind of font rendering, either. The test suite is for eyeballing that tests render correctly, and each test has instructions on what to look for; it is not meant for automatic testing.

    The test files include text elements, and the font for those texts is specified in an interesting way. SVG supports referencing "SVG fonts": your image_with_text_in_it.svg can specify that it will reference my_svg_font.svg, and that file will have individual glyphs defined as normal SVG objects. "You draw an a with this path definition", etc.

    Librsvg doesn't support SVG fonts yet. (Patches appreciated!) As a provision for renderers which don't support SVG fonts, the test suite specifies fallbacks with well-known names like "sans-serif" and such.

    In the GNOME world, "sans-serif" resolves to whatever Fontconfig decides. Various things contribute to the way fonts are resolved:

    • The fonts that are installed on a particular machine.

    • The Fontconfig configuration that is on a particular machine: each distro may decide to resolve fonts in slightly different ways.

    • The user's personal ~/.fonts, and whether they are running gnome-settings-daemon and whether it monitors that directory for Fontconfig's perusal.

    • Phase of the moon, checksum of the clouds, polarity of the yak fields, etc.

    For silly reasons, librsvg's "make distcheck" doesn't work when run as a user; I need to run it as root. And as root, my personal ~/.fonts doesn't get picked up and also my particular font rendering configuration is different from the system's default (why? I have no idea — maybe I selected specific hinting/antialiasing at some point?).

    It has taken a few tries to get reproducible font rendering for librsvg's tests. Without reproducible rendering, the images that get rendered from the test suite may not match the reference images, depending on the font renderer's configuration and the available fonts.

    Currently librsvg does two things to get reproducible font rendering for the test suite:

    • We use a specific cairo_font_options_t on our PangoContext. These options specify what antialiasing, hinting, and hint metrics to use, so that the environment's or user's configuration does not affect rendering (see the sketch after this list).

    • We create a specific FcConfig and a PangoFontMap for testing, with a single font file that we ship. This will cause any font description, no matter if it is "sans-serif" or whatever, to resolve to that single font file. Special thanks to Christian Hergert for providing the relevant code from Gnome-builder.

    • We ship a font file as mentioned above, and just use it for the test suite.
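    As a rough illustration of the first point, here is how pinning down the font options could look from Python with PyGObject and pycairo. This is only a sketch of the idea, not librsvg's actual C code, and the specific antialias/hinting choices below are assumptions:

        import gi
        gi.require_version("PangoCairo", "1.0")
        import cairo
        from gi.repository import PangoCairo

        def set_fixed_font_options(pango_context):
            # Pin one specific rendering configuration so the user's or
            # distro's Fontconfig defaults cannot change the output.
            opts = cairo.FontOptions()
            opts.set_antialias(cairo.ANTIALIAS_GRAY)      # assumed choice
            opts.set_hint_style(cairo.HINT_STYLE_FULL)    # assumed choice
            opts.set_hint_metrics(cairo.HINT_METRICS_ON)  # assumed choice
            PangoCairo.context_set_font_options(pango_context, opts)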

    This seems to work fine. I can run "make check" both as my regular user with my private ~/.fonts stash, or as root with the system's configuration, and the test suite passes: the rendered SVGs match the reference PNGs that get shipped with librsvg. That means reproducible font rendering, at least on my machine. I'd love to know if this works on other people's boxes as well.

January 09, 2017

Snowy Winter Days, and an Elk Visit

[Snowy view of the Rio Grande from Overlook]

The snowy days here have been so pretty, the snow contrasting with the darkness of the piñons and junipers and the black basalt. The light fluffy crystals sparkle in a rainbow of colors when they catch the sunlight at the right angle, but I've been unable to catch that effect in a photo.

We've had some unusual holiday visitors, too, culminating in this morning's visit from a huge bull elk.

[bull elk in the yard] Dave came down to make coffee and saw the elk in the garden right next to the window. But by the time I saw him, he was farther out in the yard. And my DSLR batteries were dead, so I grabbed the point-and-shoot and got what I could through the window.

Fortunately for my photography the elk wasn't going anywhere in any hurry. He has an injured leg, and was limping badly. He slowly made his way down the hill and into the neighbors' yard. I hope he returns. Even with a limp that bad, an elk that size has no predators in White Rock, so as long as he stays off the nearby San Ildefonso reservation (where hunting is allowed) and manages to find enough food, he should be all right. I'm tempted to buy some hay to leave out for him.

[Sunset light on the Sangre de Cristos] Some of the sunsets have been pretty nice, too.

A few more photos.

January 08, 2017

Using virtualenv to replace the broken pip install --user

Python's installation tool, pip, has some problems on Debian.

The obvious way to use pip is as root: sudo pip install packagename. If you hang out in Python groups at all, you'll quickly find that this is strongly frowned upon. It can lead to your pip-installed packages intermingling with the ones installed by Debian's apt-get, possibly causing problems during apt system updates.

The second most obvious way, as you'll see if you read pip's man page, is pip install --user packagename. This installs the package with only user permissions, not root, under a directory called ~/.local. Python automatically checks .local as part of its PYTHONPATH, and you can add ~/.local/bin to your PATH, so this makes everything transparent.

Or so I thought until recently, when I discovered that pip install --user ignores system-installed packages when it's calculating its dependencies, so you could end up with a bunch of incompatible versions of packages installed. Plus it takes forever to re-download and re-install dependencies you already had.

Pip has a clear page describing how pip --user is supposed to work, and that isn't what it's doing. So I filed pip bug 4222; but since pip has 687 open bugs filed against it, I'm not terrifically hopeful of that getting fixed any time soon. So I needed a workaround.

Use virtualenv instead of --user

Fortunately, it turned out that pip install works correctly in a virtualenv if you include the --system-site-packages option. I had thought virtualenvs were for testing, but quite a few people on #python said they used virtualenvs all the time, as part of their normal runtime environments. (Maybe due to pip's deficiencies?) I had heard people speak deprecatingly of --user in favor of virtualenvs but was never clear why; maybe this is why.

So, what I needed was to set up a virtualenv that I can keep around all the time and use by default every time I log in. I called it ~/.pythonenv when I created it:

virtualenv --system-site-packages $HOME/.pythonenv

Normally, the next thing you do after creating a virtualenv is to source a script called bin/activate inside the venv. That sets up your PATH, PYTHONPATH and a bunch of other variables so the venv will be used in all the right ways. But activate also changes your prompt, which I didn't want in my normal runtime environment. So I stuck this in my .zlogin file:

VIRTUAL_ENV_DISABLE_PROMPT=1 source $HOME/.pythonenv/bin/activate

Now I'll activate the venv once, when I log in (and once in every xterm window, since I set XTerm*loginShell: true in my .Xdefaults). I see my normal prompt, I can use the normal Debian-installed Python packages, and I can install additional PyPI packages with pip install packagename (no --user, no sudo).
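To double-check that a new shell really is using the venv, here's a quick sketch you can paste into an interpreter (virtualenv records the original interpreter in sys.real_prefix, while the stdlib venv module uses sys.base_prefix instead):

    import sys

    # Should point at ~/.pythonenv when the venv is active.
    print(sys.prefix)
    # The interpreter the venv was created from (falls back if no venv):
    print(getattr(sys, "real_prefix", getattr(sys, "base_prefix", "no venv")))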

January 04, 2017

Firefox "Reader Mode" and NoScript

A couple of days ago I blogged about using Firefox's "Delete Node" to make web pages more readable. In a subsequent Twitter discussion someone pointed out that if the goal is to make a web page's content clearer, Firefox's relatively new "Reader Mode" might be a better way.

I knew about Reader Mode but hadn't used it. It only shows up on some pages, as a little "open book" icon to the right of the URLbar, just left of the Refresh/Stop button. It did show up on the Pogue Yahoo article; but when I clicked it, I just got a big blank page with an icon of a circle with a horizontal dash: no text.

It turns out that to see Reader Mode content while running NoScript, you must explicitly enable JavaScript for about:reader.

There are some reasons it's not automatically whitelisted: see discussions in bug 1158071 and bug 1166455 -- so enable it at your own risk. But it's nice to be able to use Reader Mode, and I'm glad the Twitter discussion spurred me to figure out why it wasn't working.

January 03, 2017

Interview with Ismail Tarchoun

Could you tell us something about yourself?

My name is Ismael. I’m a self-taught artist from Tunisia, but I now live and study in Germany.

Do you paint professionally, as a hobby artist, or both?

I’m now painting only as a hobby, it’s a really fun and stress relieving activity. But I might do some freelancing work in the future.

What genre(s) do you work in?

I usually paint portraits and manga-styled characters, but I paint other stuff as well. I always try to expand my horizon and learn new things.

Whose work inspires you most — who are your role models as an artist?

Well, there is a long list of artists who inspired me. For example: Kuvshinov-Ilya and Laovaan Kite, I really like their style and their work always looks great. David Revoy is also one of my favorite artists, I really like his art and his web comic.

How and when did you get to try digital painting for the first time?

I actually only started last summer (2016). Before that, I mainly drew pencil portraits, which was limiting in nature. After seeing some amazing digital paintings on the internet, I wanted to be able to draw like that, and so it was decided. I bought a Wacom intuos art and tried it. It needed some getting used to, but I eventually fell in love with the infinite range of possibilities digital painting offers.

What makes you choose digital over traditional painting?

Well, I still paint traditionally from time to time. But I like digital painting more now, since it offers more tools which help me achieve good results with minimal effort. I also love the Ctrl+z shortcut (I wish real life had that!) so I’m not worried about ruining my work, and I can make more daring decisions which allow me to express myself more freely.

How did you find out about Krita?

I actually learned about it from the Blender forums; some users there recommended it over Gimp as a painting program, so I tried it and fell in love with it.

What was your first impression?

I was amazed by the sheer amount of features it offered, and the user interface looked good (I like dark-themed programs). For free software it was great, it even has features Photoshop doesn’t have. So in general, I had a positive first impression.

What do you love about Krita?

I really love the various brushes and the way they’re rendered; they feel so organic, like real brushes. I also like the non-destructive filters and transformations, which are pretty rare in free software; they really encourage you to try new and different stuff, and if you don’t like it, you can change it later (more freedom with minimal consequences).

What do you think needs improvement in Krita? Is there anything that really annoys you?

There are some features I want to see in Krita. For example, a small preview window: it’s essential for getting a feel for the painting as a whole; otherwise it might turn out weird. I also wish Krita could import more brushes from other programs. But nothing about Krita is really that bothersome; there are some bugs, but they are constantly being fixed by the awesome devs.

What sets Krita apart from the other tools that you use?

Canvas tilting, rulers, transformation and filter layers, and also the Multibrush. Quite neat features.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I think I’d choose the stylized portrait at the top of this interview, which doesn’t have a name (I really suck at naming things). It started as a simple painting exercise, but it ended up looking pretty good, or at least better than my previous works, which is a good sign of improvement. But I hope it doesn’t stay my favorite painting for long. In other words, I hope I’ll be able to put it to shame in the near future.

What techniques and brushes did you use in it?

First, I made a rough sketch, then I started laying in some general colors using a large soft brush (deevad 4a airbrush by David Revoy), without caring about the details, only basic colors and a basic idea of how the painting is lit. Then I started going into details using a smaller brush (deevad 1f draw brush). I usually paint new details in a separate layer, then merge it down if I’m happy with the results; if not, I delete the layer and paint a new one. I use the liquify tool a lot to fix the proportions or any anomaly. For the hair I used the deevad 2d flat old brush and the hair brush vb3BE (by Vasco Alexander Basque), which I also used for the hat. When the painting is done I use filters to adjust the colors and contrast, and then I make a new layer for final, minor tweaks here and there.

Where can people see more of your work?

You can find me on DeviantArt (not everything is made using Krita): http://tarchoun.deviantart.com/

Anything else you’d like to share?

I just hope that Krita will get even better in the future and more people start using it and appreciating it.

January 02, 2017

Firefox's "Delete Node" eliminates pesky content-hiding banners

It's trendy among web designers today -- the kind who care more about showing ads than about the people reading their pages -- to use fixed banner elements that hide part of the page. In other words, you have a header, some content, and maybe a footer; and when you scroll the content to get to the next page, the header and footer stay in place, meaning that you can only read the few lines sandwiched in between them. But at least you can see the name of the site no matter how far you scroll down in the article! Wouldn't want to forget the site name!

Worse, many of these sites don't scroll properly. If you Page Down, the content moves a full page up, which means that the top of the new page is now hidden under that fixed banner and you have to scroll back up a few lines to continue reading where you left off. David Pogue wrote about that problem recently and it got a lot of play when Slashdot picked it up: These 18 big websites fail the space-bar scrolling test.

It's a little too bad he concentrated on the spacebar. Certainly it's good to point out that hitting the spacebar scrolls down -- I was flabbergasted to read the Slashdot discussion and discover that lots of people didn't already know that, since it's been my most common way of paging since browsers were invented. (Shift-space does a Page Up.) But the Slashdot discussion then veered off into a chorus of "I've never used the spacebar to scroll so why should anyone else care?", when the issue has nothing to do with the spacebar: the issue is that Page Down doesn't work right, whichever key you use to trigger that page down.

But never mind that. Fixed headers that don't scroll are bad even if the content scrolls the right amount, because they waste precious vertical screen space on useless cruft you don't need. And I'm here to tell you that you can get rid of those annoying fixed headers, at least in Firefox.

[Article with intrusive Yahoo headers]

Let's take Pogue's article itself, since Yahoo is a perfect example of annoying content that covers the page and doesn't go away. First there's that enormous header -- the bottom row of menus ("Tech Home" and so forth) disappears once you scroll, but the rest stays there forever. Worse, there's that annoying popup on the bottom right ("Privacy | Terms" etc.) which blocks content; and although Yahoo! scrolls the right amount to account for the header, it doesn't account for that privacy bar, which continues to block most of the last line of every page.

The first step is to call up the DOM Inspector. Right-click on the thing you want to get rid of and choose Inspect Element:

[Right-click menu with Inspect Element]

That brings up the DOM Inspector window, which looks like this (click on the image for a full-sized view):

[DOM Inspector]

The upper left area shows the hierarchical structure of the web page.

Don't Panic! You don't have to know HTML or understand any of this for this technique to work.

Hover your mouse over the items in the hierarchy. Notice that as you hover, different parts of the web page are highlighted in translucent blue.

Generally, whatever element you started on will be a small part of the header you're trying to eliminate. Move up one line, to the element's parent; you may see that a bigger part of the header is highlighted. Move up again, and keep moving up, one line at a time, until the whole header is highlighted, as in the screenshot. There's also a dark grey window telling you something about the HTML, if you're interested; if you're not, don't worry about it.

Eventually you'll move up too far, and some other part of the page, or the whole page, will be highlighted. You need to find the element that makes the whole header blue, but nothing else.

Once you've found that element, right-click on it to get a context menu, and look for Delete Node (near the bottom of the menu). Clicking on that will delete the header from the page.

Repeat for any other part of the page you want to remove, like that annoying bar at the bottom right. And you're left with a nice, readable page, which will scroll properly and let you read every line, and will show you more text per page so you don't have to scroll as often.

[The same article, after removing the intrusive banners]

It's a useful trick. You can also use Inspect/Delete Node for many of those popups that cover the screen telling you "subscribe to our content!" It's especially handy if you like to browse with NoScript, so you can't dismiss those popups by clicking on an X. So happy reading!

Addendum on Spacebars

By the way, in case you weren't aware that the spacebar did a page down, here's another tip that might come in useful: the spacebar also advances to the next slide in just about every presentation program, from PowerPoint to LibreOffice to most PDF viewers. I'm amazed at how often I've seen presenters squinting with a flashlight at the keyboard trying to find the right-arrow or down-arrow or page-down or whatever key they're looking for. These are all ways of advancing to the next slide, but they're all much harder to find than that great big spacebar at the bottom of the keyboard.

darktable 2.2.1 released

we're proud to announce the first bugfix release for the 2.2 series of darktable, 2.2.1!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.1.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.2.1.tar.xz
da843190f08e02df19ccbc02b9d1bef6bd242b81499494c7da2cccdc520e24fc  darktable-2.2.1.tar.xz
$ sha256sum darktable-2.2.1.3.dmg
9a86ed2cff453dfc0c979e802d5e467bc4974417ca462d6cbea1c3aa693b08de  darktable-2.2.1.3.dmg
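if you'd rather have the comparison done for you, sha256sum -c will check a downloaded file against a published hash -- a quick sketch using the tarball checksum above:

$ echo "da843190f08e02df19ccbc02b9d1bef6bd242b81499494c7da2cccdc520e24fc  darktable-2.2.1.tar.xz" | sha256sum -c
darktable-2.2.1.tar.xz: OK

a mismatch prints FAILED instead, and sha256sum exits non-zero.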

and the changelog as compared to 2.2.0 can be found below.

New features:

  • Show a dialog window when locking the database/library fails
  • Ask before deleting a history stack from the lighttable
  • Preferences: make features that are not available (greyed out) more obvious

Bugfixes:

  • Always clean up the undo list before entering darkroom view; fixes a crash when using undo after re-entering the darkroom
  • Darkroom: properly delete module instances; fixes rare crashes after deleting the second instance of a module
  • Levels and tonecurve modules now also use 256 bins
  • Rawoverexposed module: fix visualization when a camera custom white balance preset is used

Base Support:

  • Canon EOS M5