May 23, 2017

Python help from the shell -- greppable and saveable

I'm working on a project involving PyQt5 (on which, more later). One of the problems is that there's not much online documentation, and it's hard to find out details like what signals (events) each widget offers.

Like most Python packages, there is inline help in the source, which means that in the Python console you can say something like

>>> from PyQt5.QtWebEngineWidgets import QWebEngineView
>>> help(QWebEngineView)
The problem is that it's ordered alphabetically; if you want a list of signals, you need to read through all the objects and methods the class offers to look for a few one-liners that include "unbound PYQT_SIGNAL".

If only there was a way to take help(CLASSNAME) and pipe it through grep!

A web search revealed that plenty of other people have wished for this, but I didn't see any solutions. But when I tried running python -c "help(list)" it worked fine -- help isn't dependent on the console.

That means that you should be able to do something like

python -c "from sys import exit; help(exit)"

Sure enough, that worked too.
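Which means the help output can be piped through grep directly. For example, filtering the help for sys.exit (any importable name works the same way):

```shell
# help() writes to stdout when Python runs non-interactively,
# so its output can be filtered like any other command's
python3 -c "from sys import exit; help(exit)" | grep -i "status"
```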

From there it was only a matter of setting up a zsh function to save on complicated typing. I set up separate aliases for python2, python3 and whatever the default python is. You can get help on builtins (pythonhelp list) or on objects in modules (pythonhelp sys.exit). The zsh suffixes :r (remove extension) and :e (extension) came in handy for separating the module name, before the last dot, and the class name, after the dot.

#############################################################
# Python help functions. Get help on a Python class in a
# format that can be piped through grep, redirected to a file, etc.
# Usage: pythonhelp [module.]class [module.]class ...
pythonXhelp() {
    python=$1    # which Python interpreter to use
    shift
    for f in "$@"; do
        if [[ $f == *.* ]]; then
            module=${f:r}    # everything before the last dot
            obj=${f:e}       # everything after the last dot
            s="from ${module} import ${obj}; help(${obj})"
        else
            obj=$f
            s="help(${obj})"
        fi
        "$python" -c "$s"
    done
}
alias pythonhelp="pythonXhelp python"
alias python2help="pythonXhelp python2"
alias python3help="pythonXhelp python3"

So now I can type

python3help PyQt5.QtWebEngineWidgets.QWebEngineView | grep PYQT_SIGNAL
and get that list of signals I wanted.

May 22, 2017

Updating Logitech Hardware on Linux

Just over a year ago Bastille Security announced the discovery of a suite of vulnerabilities commonly referred to as MouseJack. The vulnerabilities targeted the low-level wireless protocol used by Unifying devices, typically mice and keyboards. The issues included the ability to:

  • Pair new devices with the receiver without user prompting
  • Inject keystrokes, covering various scenarios
  • Inject raw HID commands

This gave an attacker with $15 of hardware the ability to essentially take over remote PCs within wireless range, which could be up to 50m away. This makes sitting in a café quite a dangerous thing to do when any affected hardware is plugged in, which for the Unifying dongle is quite likely, as it’s explicitly designed to remain in an empty USB socket. The main manufacturer of these devices is Logitech, but the hardware is also supplied to other OEMs such as Amazon, Microsoft, Lenovo and Dell, where it is re-badged or renamed. I don’t think anybody knows the real total, but by my estimation there must be tens of millions of affected-and-unpatched devices in use every day.

Shortly after this announcement, Logitech prepared an update which mitigated some of these problems, and then a few weeks later prepared another update that worked around and fixed the various issues exploited by the malicious firmware. Officially, Linux isn’t a supported OS by Logitech, so to apply the update you had to start Windows, then download and manually deploy a firmware update. For people running Linux exclusively, like a lot of Red Hat’s customers, the only choice was to stop using the Unifying products or to try to find a Windows computer that could be borrowed for the update. Some dongles are forgotten, plugged in behind racks of computers, or even hot-glued into place and unremovable.

The MouseJack team provided a firmware blob that could be deployed onto the dongle itself, and didn’t need extra hardware for programming. Given the cat was now “out of the bag” on how to flash random firmware to this proprietary hardware I asked Logitech if they would provide some official documentation so I could flash the new secure firmware onto the hardware using fwupd. After a few weeks of back-and-forth communication, Logitech released to me a pile of documentation on how to control the bootloader on the various different types of Unifying receiver, and the other peripherals that were affected by the security issues. They even sent me some of the affected hardware, and gave me access to the engineering team that was dealing with this issue.

It took a couple of weeks, but using the new documentation I rewrote the previously reverse-engineered plugin in fwupd so that it updates the hardware exactly according to the official procedure: its packet log now matches the Windows update tool byte for byte. Magic numbers out, #define’s in. FIXMEs out, detailed comments in. Using the documentation also means we can report sensible and useful error messages. There were other nuances missed in the reverse-engineered plugin (for example, making sure the specified firmware is valid for the hardware revision), and with the blessing of Logitech I merged the branch to master. I then persuaded Logitech to upload the firmware somewhere public, rather than requiring users to extract it from the .exe files of the Windows update. I then opened a pull request to add the .metainfo.xml files which allow us to build a .cab package for the Linux Vendor Firmware Service, and created a secure account for Logitech that allowed them to upload the firmware into a special testing branch.

This is where you come in. If you would like to test this, you first need a version of fwupd that is able to talk to the hardware. For this, you need fwupd-0.9.2-2.fc26 or newer. You can get this from Koji for Fedora.

Then you need to change the DownloadURI in /etc/fwupd.conf to the testing channel. The URI is in the comment in the config file, so no need to list it here. Then reboot, or restart fwupd. Then you can either just launch GNOME Software and click Install, or you can type on the command line fwupdmgr refresh && fwupdmgr update — soon we’ll be able to update more kinds of Logitech hardware.

If this worked, or if you had any problems, please leave a comment on this blog or send me an email. Thanks go to Red Hat for letting me work on this for so long, and even more thanks to Logitech for making it possible.

ESA space debris movie by ONiRiXEL

ONiRiXEL 3D Animation Studio is a small startup based in Toulouse, France. In this article by Jean-Gabriel Loquet, they share how they used Blender to create a high-end stereoscopic video for the European Space Agency. ONiRiXEL is a Corporate member of the Blender Network.

We specialize in the production of 3D CGI animation films, mainly corporate or institutional films and commercials, but we also love to work on fiction and documentaries, shoot live action, create VFX, perform film preproduction and/or postproduction, and create 3D VR or AR apps as well.

Blender is at the heart of our pipeline, thanks to its “all-in-one” functionality and overall awesomeness: we use it for concept and previs, 3D modeling and texturing, animation, simulation, lighting and rendering with Cycles, compositing and video editing, for monoscopic and stereoscopic projects.

The European Space Agency ESA


The ISS (partly operated by ESA) 3D render by ONiRiXEL, for the space debris movie.

ESA is the European equivalent of NASA: its purpose is to provide for, and to promote, for exclusively peaceful purposes, cooperation among European States in space research and technology and their space applications, with a view to their being used for scientific purposes and for operational space applications systems.

Among its many missions, ESA investigates space debris, communicates about this issue with the space operations community, and finds and implements solutions to mitigate the associated risks.

To raise awareness of this issue, ESA hosted the 7th European Conference on Space Debris Risks and Mitigation in April 2017 at ESOC (the European Space Operations Center) in Darmstadt, Germany. According to ESA: “3D animation is the most suitable way to explain technical principles and to give a clear picture of the situation in space”. Thus, the agency entrusted the creation of a 3D animation movie about space debris to ONiRiXEL 3D Studio, along with the French consulting startup (also based in Toulouse) ID&SENSE.

The screening of the movie was the main act of the conference’s opening ceremony.

Orbitography simulation with Orekit

The accuracy of spacecraft position, speed and attitude was paramount for ESA, so the project team got in touch with the developers of Orekit, the open source flight dynamics computation library, in order to correctly represent the orbital data provided by ESA.

The input data were the Kepler orbital parameters for each object (active satellite, defunct satellite, launcher upper stage, and spacecraft fragment):

The Orekit team developed a software interface that reads these parameters and translates them into ephemeris data (the location and attitude of each object at any point in time), which is then written out in Blender’s particle cache format (one file per object per frame), so the objects arrive in Blender already animated with extremely good accuracy!

Active spacecraft use a controlled attitude depending on their orbit (typically nadir pointing with yaw compensation for LEO, LOF-aligned for GEO), while the other categories use a tumbling mode with random initial attitude and angular velocity. The attitude of the solar arrays is also computed with respect to the body attitude, to ensure a proper orientation towards the sun in each movie scene.
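Orekit’s actual interface isn’t shown here, but the heart of that conversion is going from Kepler elements to positions over time. As an illustration only (a toy Python sketch, not Orekit code; the function name and simplifications are mine), this solves Kepler’s equation by Newton iteration for an elliptical orbit and returns the in-plane position:

```python
import math

def kepler_position(a, e, M):
    """In-plane position (x, y) for semi-major axis a, eccentricity e
    and mean anomaly M (radians), for an elliptical orbit (0 <= e < 1)."""
    # Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E
    E = M
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < 1e-12:
            break
    # True anomaly and radius from the eccentric anomaly
    nu = 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                          math.sqrt(1 - e) * math.cos(E / 2))
    r = a * (1 - e * math.cos(E))
    return r * math.cos(nu), r * math.sin(nu)
```

Evaluating this per object per frame, then rotating into 3D by the orbit’s inclination and node angles, gives exactly the kind of per-frame sample stream that can be baked into a particle cache.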


Screen capture of Blender reading the Orekit-computed spacecraft animation

The next step was to create relatively simple 3D models for the Orekit-simulated objects, with a reasonable polygon count, as there were over 20,000 instances duplicated by Blender’s particle engine. Finally we set up materials, lighting and camera animation, and thanks to Cycles’ magic: voilà!


Rendered frame of the movie by ONiRiXEL with the Orekit-computed spacecraft animation

The interface developed by the Orekit team also allowed the visualization of the objects’ trajectories as curves:


Rendered frame of the movie by ONiRiXEL with an Orekit-computed spacecraft trajectory shown as a curve

Some scenes featured only a single spacecraft, which allowed for more detailed modelling:


Screen capture of ONiRiXEL’s 3D model of Herschel Space Telescope in Blender

Blender’s stereoscopic pipeline

Another of Blender’s strengths on this project was its support for a solid, end-to-end stereoscopic pipeline, from conception to delivery of the movie.

We used stereoscopic previews of the animation:


Screen capture of ONiRiXEL’s stereoscopic animation preview for the movie in Blender

Stereoscopic rendering and compositing:


Screen capture of ONiRiXEL’s stereoscopic compositing for the movie in Blender

And even stereoscopic editing and encoding, all within Blender:


Screen capture of ONiRiXEL’s stereoscopic video editing for the movie in Blender

ESA’s space debris movie 2017: “A Journey to Earth”

The resulting 12-minute 3D animation film was released by ESA under the CC-BY-SA licence, and is available for download on their website.


Rendered frame of ESA’s space debris movie by ONiRiXEL 3D

It is also available on YouTube in monoscopic and stereoscopic versions.

In the end, this was a challenging but very exciting project, with an extremely efficient production pipeline and a very satisfied final client (ESA), all made possible by open source software in general, and Blender in particular.

You can get in touch with us on the ONiRiXEL 3D Animation Studio website for more info, or check our Blender Network profile.

May 19, 2017

ICC Examin 1.0 on Android

Since version 1.0, ICC Examin supports viewing ICC color profiles on the Android mobile platform. ICC Examin displays the elements of an ICC color profile graphically, which makes the content much easier to understand. Color primaries, white point, curves, tables and color lists are displayed both numerically and as graphics. Matrices, international texts and metadata are much easier to read.

Features:
* most profile elements from ICC specification versions 2 and 4
* additionally, some widely used non-standard tags are understood

ICC color profiles are used in photography, print and various operating systems for improving the visual appearance. An ICC profile describes the color response of a color device. Read more about the ISO 15076-1:2010 Standard / Specification ICC.1:2010-12 (Profile version 4.3.0.0), color profiles and ICC color management at www.color.org .

The ICC Examin app is completely rewritten in Qt/QML. QML is a declarative language, making it easy to define GUI elements and write layouts with less code. In recent years the Qt project has extended support from desktop platforms to mobile ones like Nokia’s MeeGo, Sailfish OS, iOS, Android, embedded devices and more. ICC Examin is available as a paid app in the Google Play Store. Sources are currently closed in order to financially support further development. This version of ICC Examin continues to use the Oyranos CMS. New is the dependency on RefIccMAX for parsing ICC profile binaries. In the process, both the RefIccMAX library and the Oyranos Color Management System received changes and fixes in git for cross-compilation with the Android libraries. Those changes will be in the next respective releases.
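As an illustration of that declarative style (a generic QML sketch, not code from ICC Examin itself), a centered label takes only a few lines; the layout is declared rather than programmed:

```qml
import QtQuick 2.7

Rectangle {
    width: 240; height: 80
    color: "#f0f0f0"
    Text {
        anchors.centerIn: parent   // layout expressed as a property binding
        text: "ICC profile element"
    }
}
```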

The FLTK toolkit, as used in previous versions, was never ported to Android or other mobile platforms, so a complete rewrite was unavoidable. The old FLTK-based version is still maintained by the same author.

Pot Sherd

Wandering the yard chasing invasive weeds, Dave noticed an area that had been disturbed recently by some animal -- probably a deer, but there were no clear prints so we couldn't be sure.

But among the churned soil, he noticed something that looked different from the other rocks.

[Pot sherd we found in the yard] A pot sherd, with quite a nice pattern on it!

(I'm informed that fragments of ancient pots are properly called "sherds"; a "shard" is a fragment of anything other than a pot.)

Our sherd is fairly large as such things go: the longest dimension is about two inches.

Of course, we wanted to know how old it was, and whether it was "real". We've done a lot of "archaeology" in our yard since we moved in, digging up artifacts ranging from bits of 1970s ceramic and plastic dinnerware to old tent pegs to hundreds of feet of old rotting irrigation tubing and black plastic sheeting. We even found a small fragment of obsidian that looked like it had been worked (and had clearly been brought here: we're on basalt, with the nearest obsidian source at least fifteen miles away). We've also eyed some of the rock rings and other formations in the yard with some suspicion, though there's no way to prove how long ago rocks were moved. But we never thought we'd find anything older than the 1970s when the house was built, or possibly the 1940s when White Rock was a construction camp for the young Los Alamos lab.

So we asked a friend who's an expert in such matters. She tells us it's Santa Fe black-on-white, probably dating from somewhere between 1200 and 1300 AD. Santa Fe black-on-white comes in many different designs, and is apparently the most common type of pottery found in the Los Alamos/Santa Fe area. We're not disappointed by that; we're excited to find that our pot sherd is "real", and that we could find something that old in the yard of a house that's been occupied since 1975.

It's not entirely a surprise that the area was used 700 years ago, or even earlier. We live in a community called La Senda, meaning "The Path". A longtime resident told us the name came from a traditional route that used to wind down Pajarito Canyon to the site of the current Red Dot trail, which descends to the Rio Grande passing many ancient petroglyphs along the way. So perhaps we live on a path that was commonly used when migrating between the farmland along the Rio and the cliff houses higher up in the canyons.

What fun! Of course we'll be keeping our eyes open for more sherds and other artifacts.

May 16, 2017

Google Summer of Code 2017: Krita’s Students Introduce themselves

This year, four students from three separate continents will participate in Google’s Summer of Code for Krita, within the wider KDE project. We’d like to say “welcome!” to all four and give them a chance to introduce themselves!

Aniketh Girish: Integrate Krita With share.krita.org

I’m Aniketh Girish. I’m a first-year Computer Science and Engineering student pursuing my bachelor’s at Amrita University. I’m an active member of the FOSS club at our university (FOSS@Amrita). I started contributing to KDE in September 2016. I was selected for Season of KDE (KDE-SoK) 2016-17, in which I worked on the KStars project. I was invited as a speaker to the KDE India Conference 2017 at IIT Guwahati, where I gave a talk titled “Object tracking using OpenCV and Qt”. I have also contributed to other KDE projects, such as Konsole, KIO and several more.

My project for GSoC 2017 aims to integrate share.krita.org with the Krita application. I intend to accomplish this using libattica, a library implementing the Open Collaboration Services API, and I will create a user-friendly GUI for it as well.

The second part of the project is to make the resource manager’s GUI much more appealing. The existing resource manager is quite buggy and not very user-friendly, so it requires a lot of triaging. I will therefore improve the resource manager’s GUI, using the Open Collaboration Services to share the bundles and other resources available from share.krita.org, and I’ll also fix any issues I find.


Eliakin Costa: Showcasing Python Scripting in Krita

I’m Eliakin Costa, 23 years old, from Brazil. My project for GSoC is “Develop a showcase of Krita’s new scripting support”, a continuation of my recent work on Krita. I will implement scripts to be executed in the Scripter (a GUI for executing and saving Python scripts inside Krita) and plugins to be added to the Krita GUI. A really important part of my work is talking with users to compile a good set of scripts and plugins to implement during this period, and defining how my work will be made available to users. I’m really excited to code and to help Krita’s community.

Григорий Танцевов: Watercolor Brushes in Krita

I’m a third-year student at Bauman Moscow State Technical University. I am interested in programming, music and art. I did not have the opportunity to seriously engage in programming during my school years because I grew up in a small city, but I compensated for this by studying mathematics and physics.

My first programming experience was in grade 6, when I wrote a primitive drawing program in Pascal. Later, I began to learn the basics of programming in special courses at a college in my city. As part of the university’s curriculum, I studied programming languages like Lisp, C, C++ and C#. Now my main programming language is C++.

As part of GSoC, I’m going to make a watercolor brush for Krita, based on the procedural brush.

Alexey Kapustin: Telemetry for Understanding Which Functions in Krita Are Used Most.

I’m Alexey Kapustin, a 3rd-year student at the Software Engineering Faculty of Bauman Moscow State Technical University. I’m interested in C++, Python, JavaScript and web development. I also study in my 3rd semester at Technopark Mail.ru (the Russian School of Web Architects).

This summer I will be implementing telemetry for Krita. I think this will be very useful for developers. It will also help me gain new skills.

Recently released applications in GNOME Software

By popular request, there is now a list of recently updated applications in the gnome-software overview page.

Upstream applications have to do two things to be featured here:

  1. Have an upstream release within the last 2 months (will be reduced as the number of apps increases)
  2. Have upstream release notes in the AppData file

Quite a few applications from XFCE, GNOME and KDE have already been including <release> tags and this visibility should hopefully encourage other projects to do the same. As a small reminder, the release notes should be small, and easily understandable by end users. You can see lots of examples in the GNOME Software AppData file. Any questions, please just email me or leave comments here. Thanks!
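For reference, those release notes live in a <release> entry in the AppData file; a minimal block looks something like this (the version number, date and text here are placeholders, not from a real application):

```xml
<releases>
  <release version="1.2.3" date="2017-05-01">
    <description>
      <p>This release fixes crashes when importing photos and speeds up thumbnail loading.</p>
    </description>
  </release>
</releases>
```

Keep the description short and written for end users, as noted above.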

May 15, 2017

Teaser for Agent 327: Operation Barbershop

Blender brings cult comic Agent 327 to life in 3D animation

The studio of Blender Institute releases ambitious three-minute teaser for full-length animated feature based on Dutch artist Martin Lodewijk’s classic comics

Amsterdam, Netherlands (May 15, 2017) – It has created a string of award-winning shorts, raised over a million dollars in crowdfunding, and helped to shape development of the world’s most popular 3D application. But now Blender Institute has embarked on its most ambitious project to date. The studio has just released Agent 327: Operation Barbershop: a three-minute animation based on Martin Lodewijk’s cult comics, and co-directed by former Pixar artist Colin Levy – and the proof of concept for what it hopes will become a major international animated feature created entirely in open-source software.
Watch it here: http://agent327.com

A history of award-winning open movies

Since its foundation in 2007, Blender Institute has created eight acclaimed animated and visual effects shorts, culminating in 2015’s Cosmos Laundromat, winner of the Jury Prize at the SIGGRAPH Computer Animation Festival. Each ‘open movie’ has been created entirely in open-source tools – including Blender, the world’s most widely used 3D software, whose development is overseen by the Institute’s sister organization, the Blender Foundation.

Assets from the films are released to the public under a Creative Commons license, most recently via Blender Cloud, the Institute’s crowdfunding platform, which was also used to raise the €300,000 budget for Agent 327: Operation Barbershop. Based on Martin Lodewijk’s cult series of comics, the three-minute movie teaser brings the Dutch artist’s underdog secret agent vividly to life.

Creating a cult spy thriller

“Agent 327 is the Netherlands’ answer to James Bond,” said producer Ton Roosendaal, original creator of the Blender 3D software. “He’s fighting international supervillains, but the underfunded Dutch secret service agency doesn’t have the resources of MI6. Rather than multi-million-dollar gadgets, he has to rely on his own resourcefulness to get things done – and in the end, he always pulls it off. To me, that also reflects the spirit of Blender itself.”

Created by a core team of 10 artists and developers over the course of a year, Operation Barbershop sees Agent 327 going undercover in an attempt to uncover a secret criminal lair. Confronted first by the barbershop’s strangely sinister owner, then his old adversary Boris Kloris, Agent 327 becomes embroiled in a life-or-death struggle – only to confront an even more deadly peril in the shop’s hidden basement.

Translating a 1970s comic icon into 3D

A key artistic challenge on the project was translating the stylized look of the original 1970s comics into 3D. “Agent 327 doesn’t really fit the mainstream design template for animated characters,” says Blender Institute production coordinator Francesco Siddi. “His facial features are designed and exaggerated in a very European way. How many Disney movies do you see with characters like that?”

For the work, the Institute’s modeling and design artists, led by Blender veteran Andy Goralczyk, carried out a series of look development tests. Concept designs were created in open-source 2D painting software Krita, while test models were created in Blender itself, and textured in GIMP.

Another issue was balancing action and storytelling. Although a richly detailed piece, Operation Barbershop isn’t a conventional animated short, but a proof of concept for a movie. It’s designed to introduce Agent 327’s universe, and to leave the viewer wanting more. To achieve the right mix of narrative and exposition, Colin Levy and co-director and lead animator Hjalti Hjálmarsson ping-ponged ideas off one another, mixing animated storyboards, live action, and 3D previs.

Building an open-source feature animation pipeline

As with all of the Institute’s open movies, technical development on the project feeds back into public builds of Blender. In the case of Operation Barbershop, the work done on Cycles, the software’s physically based render engine – which now renders scenes with hair and motion blur 10 times faster – was rolled out in Blender 2.78b in February. Work on Blender’s dependency graph, which controls the way a character rig acts upon the geometry of the model, will follow in the upcoming Blender 2.8. “For users, it’s going to mean much better performance, enabling much more complex animation set-ups than are possible now,” says Siddi.

Other development work focused on the Blender Institute’s open-source pipeline tools: render manager Flamenco and production-tracking system Attract. “The pipeline for making shorts in Blender is already super-solid, but we wanted to build a workflow that could be used on a feature film,” says Siddi. “Operation Barbershop was great for identifying areas for improvement, like the way we manage asset libraries.”

Joining Hollywood’s A-list

For the Agent 327 movie itself, the Blender Institute is establishing Blender Animation Studio, a separate department devoted to feature animation, for which it aims to recruit a team of 80 artists and developers from its international network. To help raise the film’s proposed budget of €14 million, the Institute has signed with leading talent agency WME, which also represents A-list Hollywood directors like Martin Scorsese, Ridley Scott, and Michael Bay.

“Blender Animation Studio is devoted to producing feature animation with world-class visuals and storytelling, created entirely in free and open-source software,” says founder and producer Ton Roosendaal. “We’ve proved that Blender can create stunning short films. Now we aim to create stunning features, while building and sharing a free software production pipeline.”

Although the Agent 327 movie isn’t the first film to be created in Blender – a distinction that belongs to 2010 Argentinean animated comedy Plumíferos – it will be by far the largest and most ambitious, and one that the Blender Institute hopes will revolutionize feature animation.

“As an independent studio, we’re in the unique position of being in complete control of the tools we use in production,” says Roosendaal. “That’s a luxury enjoyed only by the world’s largest animation facilities. We intend to create movies that redefine the concept of independent animated feature production.”

May 12, 2017

Happy 2nd Birthday Discuss



Time keeps on slippin'

I was idling in our IRC chat room earlier when @Morgan_Hardwood wished us all a “Happy Discuss Anniversary”. Wouldn’t you know it, another year slipped right by! (Surely there’s no way it could already be a year since the last birthday post? Where does the time go?)

We’ve had a bunch of neat things happen in the community over the past year! Let’s look at some of the highlights.

Support

I want to start with this topic because it’s the perfect opportunity to recognize some folks who have been supporting the community financially…

When I started all of this I decided that I definitely didn’t want ads to be on the site anywhere. I had gotten enough donations from my old blog and GIMP tutorials that I could cover costs for a while entirely from those funds (I also re-did my personal blog recently and removed all ads from there as well).

I don’t like ads. You don’t like ads. We’re a big enough community that we can keep things going without having to bring those crappy things into our lives. So to reiterate, we’re not going to run ads on the site.

We host the main website on Stablehost, the forums (discuss) on a VPS at Digital Ocean, and our file storage for discuss on Amazon S3 (see below). All told, our costs are about $30 per month. Not so bad!

Thank You!

Even so, we have had some folks who have donated to help us offset these costs and I want to take a moment to recognize their generosity and graciousness!

Dimitrios Psychogios has been a supporter of the site since the beginning. This past year he covered (more than) our hosting costs for the entire year, and for that I am infinitely grateful (yes, I have infinite gratitude). It also helps that based on his postings on G+ our musical tastes are very similarly aligned. As soon as I get the supporters page up you’re going to the top of the list! Thank you, Dimitrios, for your support of the community!

Jonas Wagner (@JonasWagner) and McCap (@McCap) both donated this past year as well. Which is doubly awesome, because they are both active in the community and have written some great content for everyone too (@McCap is the author of the article A Masashi Wakui look with GIMP, and has been active in the community since the beginning as well).

Mica (@paperdigits) and Luka are both recurring donators which I am particularly grateful for. It really helps for planning to know we have some recurring support like that.

I have a bunch of donations where the donators didn’t leave me a name to use for attribution and I don’t want to just assume it’s ok. If you know you donated and see your first name in the list below (and are ok with me using your full name and a link if you want) then please let me know and I’ll update this post (and for the donators page later).

These are the folks who are really making a difference by taking the time and being gracious enough to support us. Even if you don’t want your full name out here, I know who you are and am very, very grateful and humbled by your generosity and kindness. Thank you all so much!

  • Marc W. (you rock!)
  • Ulrich P.
  • Luc V.
  • Ben E.
  • Keith A.
  • Philipp H.
  • Christian M.
  • Matthieu M.
  • Christian M.
  • Christian K.
  • Maria J.
  • Kevin P.
  • Maciej D.
  • Christian K.
  • Egbert G.
  • Michael H.
  • Jörn H.
  • Boris H.
  • Norman S.
  • David O.
  • Walfrido C.
  • Philip S.
  • David S.
  • Keith B.
  • Andrea V.
  • Stephan R.
  • David M.
  • Bastian H.
  • Chance J.
  • Luka S.
  • Nathanael S.
  • Sven K.
  • Pepijn V.
  • Benjamin W.
  • Jörg W.
  • Patrick B.
  • Joop K.
  • Alain V.
  • Egor S.
  • Samuel S.

On that note. If anyone wanted to join the folks above in supporting what we’re up to, we have a page specifically for that:

https://pixls.us/support/

Remember, no amount is too small!

Libre Graphics Meeting Rio

Forte de Copacabana, Rio, by Gabriel Heusi/Brasil2016.gov.br

I wasn’t able to attend LGM this year, which was held down in Rio, but the GIMP team did. And that’s not to say we didn’t have other folks from the community there: Farid (@frd) from Estúdio Gunga was there!

I was able to help coordinate a presentation by Robin Mills (@clanmills) about the state (and future) of Exiv2. They’re looking for a maintainer to join the project, as Robin will be stepping down at the end of the year for studies. If you think you’d be interested in helping out, please get in touch with Robin on the forums and let him know!

I also put together (quickly) a few slides on the community that were included in the “State of the Libre Graphics” presentation that kicks off the meeting (presented this year by GIMPer Simon Budig):

2017 LGM/Rio PIXLS.US State Of Slide 0 2017 LGM/Rio PIXLS.US State Of Slide 1 2017 LGM/Rio PIXLS.US State Of Slide 2 This slide deck is available in our GitHub repo.

This was just a short overview of the community, and I think it makes sense to include it here as well. Since we stood the forum up two years ago we’ve seen about 3.2 million pageviews and have just under 1,400 users in the community. Which is just awesome to me.

@LebedevRI was also going to be mad if I didn’t take the time to at least let folks know about raw.pixls.us, where we currently have 693 raw files across 477 cameras. Please, take a moment to check raw.pixls.us and see if we are missing (or need better) files from a camera you may have, and get us samples for testing!

raw.pixls.us

We set up raw.pixls.us so we can gather camera raw samples for regression testing of rawspeed, as well as to have a place for any other project that might need raw files to test with. As we blogged about previously, the new site is also a replacement for the now defunct rawsamples.ch website.

Stop in and see if we’re missing a sample you can provide, or if you can provide a better (or better licensed) version for your camera. We’re focusing specifically on CC0 contributions.

Welcome digiKam!

digiKam Logo

As I mentioned in my last blog post, we learned that the digiKam team was looking for a new webmaster through a post on discuss. @Andrius posted a heads up on the digiKam 5.5.0 release in this thread.

Needless to say, less than a month or so later, @paperdigits had already finished up a nice new website for them! This is exactly the kind of thing we’re trying to help the community with, and we’re super glad to be able to do it for the digiKam team. The less time they have to worry about web infrastructure and its security, the more time they can spend on awesome new features for their project and users.

Yes, we used a static site generator (Hugo in this case), and we were also able to move their commenting system to use discuss as its back-end! This is the same way we’re doing comments for PIXLS.US right now (scroll to the bottom of this post).

They’ve got their own category on discuss for both general digiKam discussion as well as their linked comments from their website.

Speaking of using discourse as a commenting system…

Discourse upstream

We’ve been using discourse as our forum software from the beginning. It’s a modern, open, and full-featured forum software that I think works incredibly well as a modern web application.

The ability to embed comments in a website that are part of the forum was one of the main reasons I went with it. I didn’t want to expose users to unnecessary privacy concerns by embedding a third-party commenting system (cough, disqus, cough). If I was going to go through the trouble of setting up a way to comment on things, I wanted to homogenize it with a full community-building effort.

This past year they (the discourse devs) added the ability to embed comments in multiple hosts (it was only one host when we first stood things up). This means that we can now manage the comments for anyone else that may need them! Of course, building out a new website for digiKam meant that this was a perfect time to test things.

It all works beautifully, with one minor nitpick. The ability to style the embedded comments was limited to a single style for all the places that they might be embedded. This may be fine if all of the sites look similar, but if you visit www.digikam.org and compare it to here, you can see they are a little bit different… (we’re on white, digikam.org is on a dark background).

We needed a way to isolate the styling on a per-host basis. After much help from @darix (yet again :)), I was finally able to hack something together that worked and get it pushed upstream (and merged)!

Discourse embed class name. I made this!

Play Raw

When RawTherapee migrated their official forums over to pixls they brought something really fun with them: Play Raw. They would share a single raw file amongst the community and then have everyone process and share their results (including their processing steps and associated .pp3 settings file).

If you haven’t seen it yet, we’ve had quite a few Play Raw posts over the past year, with all sorts of wonderful images to practice on and share! There are portraits, children, dogs, cats, landscapes, HDR, and phở! There are over 19 different raw files being shared right now, so come try your hand at processing (or even share a file of your own)!

The full list of play_raw posts can always be found here:
https://discuss.pixls.us/tags/play_raw

Amazon S3

We are a photography forum, so it only made sense to make it as easy as possible for community members to upload and share images (raw files, and more). It’s one of the things I love about discourse: adding files to your posts is easy, simply drag-and-drop them into the post editor to upload.

While this is easy to do, it does mean that we have to store all of this data. The VPS we use from Digital Ocean only has a 40GB SSD, which also has to hold everything the main forum runs on. We had a little headroom for a while, but to keep local storage from becoming a problem down the line, I moved our file storage out to Amazon S3.

This means that we can upload all we want and won’t really hit a wall with actual available storage space. It costs more each month than trying to store it all on local storage for the site, but then we don’t have to worry about expansion (or migration) later. Plus our current upload size limit per file is 100MB!

Amazon S3 costs

As you can see, we’re only looking at about $5 USD/month on average in storage and transfer costs for the site with Amazon.

We’re also averaging about $22 USD/month in hosting costs with Digital Ocean, so we’re still only at about $27/month in total hosting costs. Maybe $30 if we include the hosting for the main website, which is at Stablehost.

IRC

We’ve had an IRC room for a long time (longer than discuss I think), but I only just got around to including a link on the site for folks to be able to join through a nice web client (Kiwi IRC).

Discuss header bar

It was included as part of an oft-requested set of links to get back to various parts of the main site from the forums. I added these links to the site menu as well (the header links are hidden on mobile, so this way you can still access them from whatever device you’re using):

Discuss menu

If you have your own IRC client then you can reach us on irc.freenode.net #pixls.us. Come and join us in the chat room! If you’re not there you are definitely missing out on a ton of stimulating conversation and enlightening discussions!

May 11, 2017

Tracing IDispatch::Invoke calls in COM applications

At Collabora Productivity we recently encountered the need to investigate calls in a third-party application to COM services offered by one or more other applications. In particular, calls through the IDispatch mechanism.

In practice, it is the use of the services that Microsoft Office offers to third-party applications that we want to trace and dump symbolically.

We looked around for existing tools but did not find anything immediately suitable, especially not anything available under an Open Source license. So we decided to hack a bit on one of the closest matches we found, which is Deviare-InProc. It is on GitHub, https://github.com/nektra/Deviare-InProc.

Deviare-InProc already includes code for much of the hardest things needed, like injecting a DLL into a target process, and hooking function calls. What we needed to do was to hook COM object creation calls and have the hook functions notice when objects that implement IDispatch are created, and then hook their Invoke implementations.

The DLL injection functionality is actually "just" part of the sample code included with Deviare-InProc. The COM tracing functionality that we wrote is based on the sample DLL to be injected.

One problem we encountered was that in some cases, we would need to trace IDispatch::Invoke calls that are made in a process that has already been started (through some unclear mechanism out of our control). The InjectDLL functionality in Deviare-InProc does have the functionality to inject the DLL into an existing process. But in that case, the process might already have performed its creation of IDispatch implementing COM objects, so it is too late to get anything useful from hooking CoGetClassObject().

We solved that with a hack that works nicely in many cases, by having the injected DLL itself create an object known to implement IDispatch, and hoping its Invoke implementation is the same as that used by the interesting things we want to trace.

Here is a snippet of a sample VBScript file:

     Set objExcel = CreateObject("Excel.application")
     set objExcelBook = objExcel.Workbooks.Open(FullName)

     objExcel.application.visible=false
     objExcel.application.displayalerts=false
         
     objExcelBook.SaveAs replace(FileName, actualFileName, prefix & actualFileName) & "csv", 23

     objExcel.Application.Quit
     objExcel.Quit

And here is the corresponding output from tracing cscript executing that file. (In an actual use case, no VBScript source would obviously be available to inspect directly.)

Process #10104 successfully launched with dll injected!
Microsoft (R) Windows Script Host Version 5.812
Copyright (C) Microsoft Corporation. All rights reserved.

# CoGetClassObject({00024500-0000-0000-C000-000000000046}) (Excel.Application.15)
#   riid={00000001-0000-0000-C000-000000000046}
#   CoCreateInstance({0000032A-0000-0000-C000-000000000046}) (unknown)
#     riid={00000149-0000-0000-C000-000000000046}
#     result:95c668
#   CoCreateInstance({00000339-0000-0000-C000-000000000046}) (unknown)
#     riid={00000003-0000-0000-C000-000000000046}
#     result:98aad8
#   result:95dd8c
# Hooked Invoke 0 of 95de1c (old: 487001d) (orig: 76bafec0)
95de1c:Workbooks() -> IDispatch:98ed74
98ed74:Open({"c:\temp\b1.xls"}) : ({"c:\temp\b1.xls"}) -> IDispatch:98ea14
95de1c:Application() -> IDispatch:95de1c
95de1c:putVisible(FALSE)
95de1c:Application() -> IDispatch:95de1c
95de1c:putDisplayAlerts(FALSE)
98ea14:SaveAs(23,"c:\temp\converted_b1.csv")
95de1c:Application() -> IDispatch:95de1c
95de1c:Quit()
95de1c:Quit()

Our work on top of Deviare-InProc is available at https://github.com/CollaboraOnline/Deviare-InProc.

Binaries are available at https://people.collabora.com/~tml/injectdll/injectdll.zip (for 32-bit applications) and https://people.collabora.com/~tml/injectdll/injectdll64.zip (64-bit). The zip archive contains an executable, injectdll.exe (injectdll64.exe in the 64-bit case) and a DLL.

Unpack the zip archive somewhere and open a Command Prompt there. If the program whose IDispatch::Invoke use you want to trace is something you know how to start from the command line, you can enter this command:

injectdll.exe x:\path\to\program.exe "program arg1 arg2 …"

where program.exe is the executable you want to run, and arg1 arg2 … are command-line parameters it takes, if any.

If program.exe is a 64-bit program, download injectdll64.zip instead and run injectdll64.exe; otherwise the usage is the same.

If you can’t start the program you want to investigate from the command line, but need to inspect it after it has already started, just pass the process id of the program to injectdll.exe (or injectdll64.exe) instead. This is somewhat less likely to succeed, depending on how the program uses IDispatch.

In any case, the output (symbolic trace) will go to the standard output of the program being traced, which typically is nowhere at all, and not useful. It will not go to the standard output of the injectdll.exe program.

In order to redirect the output to a file, set an environment variable DEVIARE_LOGFILE that contains the full pathname to the log file to produce. This environment variable must be visible in the program that is being traced; it is not enough to set it in the Command Prompt window where you run injectdll.exe.

Obviously all this is a work in progress, and as needed will be hacked on further. For instance, the name "injectdll" is just the name of the original sample program in upstream Deviare-InProc; we should really rename it to something specific for this use case.

May 10, 2017

The second QDQuest Krita game art course is out!

The second premium Krita game art course, Make Cel Shaded Game Characters, is out! It contains 14 tutorials
and a full commented art time-lapse that will help you improve your lighting fundamentals and show you how to
make a game character.

The series is based on David Revoy’s webcomic, Pepper and Carrot! He paints it all using Krita, and that was a great occasion to link our respective work. David released most of his work under the CC-BY 4.0 licence, allowing anyone to reuse it as long as proper credit is given.

Check out the course page for more information!

https://gumroad.com/l/cartoon-game-art-krita-course

May 08, 2017

3000 Reviews on the ODRS

The Open Desktop Ratings service is a simple Flask web service that various software centers use to retrieve and submit application reviews. Today it processed the 3000th review, and I thought I should mark this occasion here. I wanted to give a huge thanks to all the people who have submitted reviews; you have made life easier for people unfamiliar with installing software. There are reviews in over a hundred different languages and over 600 different applications have been reviewed.

Over 4000 people have clicked the “was this useful to you” buttons in the reviews, which affect how the reviews are ordered for a particular system. Without people clicking those buttons we don’t really know how useful a review is. Since we started this project, 37 reviews have been reported for abuse, of which 15 have been deleted for things like swearing and racism.

Here are some interesting graphs. The first shows the number of requests we’re handling per month; as you can see, we’re handling nearly a million requests a month.

The second shows the number of people contributing reviews. At about 350 per month this is a tiny fraction compared to the people requesting reviews, but that is to be expected.

The third shows where reviews come from; the notable absence is Ubuntu, but they use their own review system rather than the ODRS. Recently Debian has been increasing the fastest, I assume because at last they ship a gnome-software package new enough to support user reviews, but the reviews are still coming in fastest from Fedora users. Maybe Fedora users are the kindest in the open source community? Maybe we just shipped the gnome-software package first? :)

One notable thing missing from the ODRS is a community of people moderating reviews; at the moment it’s just me deciding which reviews are indeed abuse, and also fixing up common spelling errors in the submitted text. If this is something you would like to help with, please let me know and I can spend a bit of time adding a user type somewhere in-between benevolent dictator (me) and unauthenticated. Ideas welcome.

May 06, 2017

Free Airport Shoe Shine Tip

Don’t have time or money for an airport shoe shine? Use my hot tip to polish up those clogs while you’re on the move:

May 05, 2017

Moon Talk at PEEC tonight

Late notice, but Dave and I are giving a talk on the moon tonight at PEEC. It's called Moonlight Sonata, and starts at 7pm. Admission: $6/adult, $4/child (we both prefer giving free talks, but PEEC likes to charge for their Friday planetarium shows, and it all goes to support PEEC, a good cause).

We'll bring a small telescope in case anyone wants to do any actual lunar observing outside afterward, though usually planetarium audiences don't seem very interested in that.

If you're local but can't make it this time, don't worry; the moon isn't a one-time event, so I'm sure we'll give the moon show again at some point.

May 03, 2017

Welcome digiKam!


Lending a helping hand

One of the goals we have here at PIXLS.US is to help Free Software projects however we can, and one of those ways is to focus on things that we can do well that might help make things easier for the projects. It may not be much fun for project developers to deal with websites or community outreach necessarily. This is something I think we can help with, and recently we had an opportunity to do just that with the awesome folks over at the photo management project digiKam.

As part of a post announcing the release of digiKam 5.5.0 on discuss, we learned that they were in need of a new webmaster, and that they needed one soon in order to migrate away from Drupal 6 for security reasons. They had a rudimentary Drupal 7 theme set up, but it was severely lacking (non-responsive and not adapted to the existing content).

Old digiKam website The previous digiKam website, running on Drupal 6.
new digiKam website The new digiKam website! Great work Mica!

Mica (@paperdigits) reached out to Gilles Caulier and the digiKam community and offered our help, which they accepted! At that point Mica gathered requirements from them and found that, in the end, a static website would be more than sufficient for their needs. We coordinated with the KDE folks to get a git repo set up for the new website, and rolled up our sleeves to start building!

Gilles Caulier by Alex Prokoudine Gilles Caulier by Alexandre Prokoudine (CC BY NC SA 2.0)

Mica chose the Hugo static-site generator to build the site with. This was something new for us, but it turned out to be quite fast and fun to work with (it generates the entire digiKam site in just about 5 seconds). Coupled with a version of the Foundation 6 blog theme, we were able to get a base site framework up and running fairly quickly. We scraped all of the old site content to make sure that we could port everything, as well as make sure we didn’t break any URLs along the way.

We iterated some design stuff along the way, ported all of the old posts to markdown files, hacked at the theme a bit, and finally included comments that are now hosted on discuss. What’s wild is that we managed to pull the entire thing together in about 6 weeks total (of part-time working on it). The digiKam team seems happy with the results so far, and we’re looking forward to continue helping them by managing this infrastructure for them.

A big kudos to Mica for driving the new site and getting everything up and running. This was really all due to his hard work and drive.

Also, speaking of discuss, we also have a new category created specifically for digiKam users and hackers: https://discuss.pixls.us/c/software/digikam.

This is the same category that news posts from the website will post in, so feel free to drop in and say hello or share some neat things you may be working on with digiKam!

WebKitGTK+ remote debugging in 2.18

WebKitGTK+ has supported remote debugging for a long time. The current implementation uses WebSockets for the communication between the local browser (the debugger) and the remote browser (the debug target or debuggable). This implementation was very simple and, in theory, you could use any web browser as the debugger because all inspector code was served over the WebSocket connection. I said in theory because in practice this was not always so easy, since the inspector code uses newer JavaScript features that are not yet implemented in other browsers. The other major issue of this approach was that the communication between debugger and target was not bi-directional, so the target browser couldn’t notify the debugger about changes (like a new tab being opened, a navigation, or that it is about to be closed).

Apple abandoned the WebSockets approach a long time ago and implemented its own remote inspector, using XPC for the communication between debugger and target. They also moved the remote inspector handling to JavaScriptCore making it available to debug JavaScript applications without a WebView too. In addition, the remote inspector is also used by Apple to implement WebDriver. We think that this approach has a lot more advantages than disadvantages compared to the WebSockets solution, so we have been working on making it possible to use this new remote inspector in the GTK+ port too. After some refactorings to the code to separate the cross-platform implementation from the Apple one, we could add our implementation on top of that. This implementation is already available in WebKitGTK+ 2.17.1, the first unstable release of this cycle.

From the user point of view there aren’t many differences, with the WebSockets we launched the target browser this way:

$ WEBKIT_INSPECTOR_SERVER=127.0.0.1:1234 browser

This hasn’t changed with the new remote inspector. To start debugging we opened any browser and loaded

http://127.0.0.1:1234

With the new remote inspector we have to use any WebKitGTK+ based browser and load

inspector://127.0.0.1:1234

As you have already noticed, it’s no longer possible to use any web browser; you need to use a recent enough WebKitGTK+ based browser as the debugger. This is because of the way the new remote inspector works. It requires a frontend implementation that knows how to communicate with the targets. In Apple’s case that frontend implementation is Safari itself, which has a menu with the list of remote debuggable targets. In WebKitGTK+ we didn’t want to force using a particular web browser as the debugger, so the frontend is implemented as a built-in custom protocol of WebKitGTK+. So, loading inspector:// URLs in any WebKitGTK+ WebView will show the remote inspector page with the list of debuggable targets.

It looks quite similar to what we had, just a list of debuggable targets, but there are a few differences:

  • A new debugger window is opened when the inspect button is clicked, instead of reusing the same web view. Clicking inspect again just brings the window to the front.
  • The debugger window loads faster, because the inspector code is not served over HTTP, but loaded locally like the normal local inspector.
  • The target list page is updated automatically, without having to manually reload it when a target is added, removed or modified.
  • The debugger window is automatically closed when the target web view is closed or crashes.

How does the new remote inspector work?

The web browser checks for the presence of the WEBKIT_INSPECTOR_SERVER environment variable at start up, the same way it was done with the WebSockets. If present, the RemoteInspectorServer is started in the UI process, running a DBus service listening on the IP and port provided. The environment variable is propagated to the child web processes, which create a RemoteInspector object and connect to the RemoteInspectorServer. There’s one RemoteInspector per web process, and one debuggable target per WebView. Every RemoteInspector maintains a list of debuggable targets that is sent to the RemoteInspectorServer when a new target is added, removed or modified, or when explicitly requested by the RemoteInspectorServer.
When the debugger browser loads an inspector:// URL, a RemoteInspectorClient is created. The RemoteInspectorClient connects to the RemoteInspectorServer using the IP and port of the inspector:// URL and asks for the list of targets that is used by the custom protocol handler to create the web page. The RemoteInspectorServer works as a router, forwarding messages between RemoteInspector and RemoteInspectorClient objects.

Rochlitz VR

About blendFX

blendFX is a small studio for 3D, VR, AR and VFX based in Leipzig/Germany.
We produce virtual and augmented reality applications, architectural visualizations, 3d reconstructions and animations & visual effects for TV and cinema, with a focus on high quality content for mobile VR.


Rendering of Renaissance Period

How we came to VR

We started working with virtual reality in 2014. During our first experiments it became clear quite quickly that we wanted to concentrate on mobile VR, simply because it’s more accessible and affordable. For the time being, however, smartphones are simply not capable of rendering high quality content with reflections, transparencies and nice antialiased edges. That’s why we began focusing on pre-rendered content, using stereoscopic panoramas. We developed a workflow with Blender and Unity, where we integrate interactive elements and 3d animations into stereo panoramas. The resulting apps perform very well on recent smartphones like the Samsung S6 and S7.

Rochlitz VR

“Rochlitz VR” is a virtual reality app which we created for Schlösserland Sachsen. It’s a virtual reconstruction of one of the chambers of the old castle “Schloss Rochlitz” in Saxony, Germany. Today you can see the remains of 5 different periods from the 800-year history of the “captain’s chamber” (“Hauptmannsstube”) in that room. There are doors and traces of paint from the gothic period, the back wall of a fireplace and romanesque windows, a painted ceiling from the renaissance, and on top of that plumbing, drill holes and wallpaper from the GDR (German Democratic Republic) period in Eastern Germany. It’s a really weird and crappy looking room.


The room as it looks today

Usually rooms like this are being restored to one particular time period. But then of course you lose the chance to see what the room would have looked like in other periods. In this case the castle museum wanted to make it clear to the visitors that the castle has been in constant change throughout the ages. Traces of all these periods can be found today. So the museum needed a way to make this room accessible and understandable for visitors without destroying its current peculiar state with all the various items from different time periods. Virtual Reality turned out to be the perfect medium for this!
Together with conservators and museologists we reconstructed the captain’s chamber as it might have looked in those 5 different periods. The Rochlitz Virtual Reality Experience will be shown on 4 Samsung Gear VR devices inside the captain’s chamber starting in spring 2017.

How

We used Blender for the entire process of modeling, texturing and rendering.
The captain’s chamber changed a lot throughout the 800 years, and so we had to heavily adjust the model for each time period.
The 3d modeling turned out to be a crucial part of the scientific reconstruction. The conservators and museologists would provide us with data, measurements and theories of how the room might have looked back in the day, but it was only during the actual modeling process that we found that some of these theories would have been impractical or even impossible in real life. So the 3D model of the room was also a tool to test scientific hypotheses.


Screenshot of the blendfile of the gothic period

For the gothic period Nora Pietrowski, the museum’s conservator, gave us the flowery wall paintings, which we mapped onto the four sides of the room. The painted ceiling of the renaissance period was also created by Nora. Then with Blender’s 3d painting tools we added subtle dirt and weather effects to the walls and the floor.
One of the most interesting periods was the second half of the 20th century. The museum wanted to stay as scientifically accurate as possible with the reconstruction. That’s why most of the rooms we reconstructed have no furniture: there is no detailed information about what kind of furniture was actually in the room. However, there are two photos from the GDR period that do show some of the furnishing. That was a good reference for us to rebuild the room with a bit more detail.


Rendering of the GDR Period. In the back you can see the reference photo on the wall.

Rendering

For rendering we used Blender’s path tracing engine Cycles and its stereoscopic panorama rendering mode, which Dalai Felinto had added in 2015. However, we found that equirectangular rendering wastes a lot of processing power on the poles of the spherical images, which made rendering a lot slower than it needed to be. So we hired Dalai to code a script to render cube maps, which not only render much faster but also look better at the poles than the usual equirectangular panoramas. Rendering a cube map panorama creates 6 different images, one for each side of the cube. Each side has a resolution of 1280×1280 pixels. And because we are rendering stereo we end up with 12 images, which add up to a resolution of 15360×1280 pixels for each stereo panorama. Luckily Cycles can take advantage of GPU rendering, which made the renders a bit less painful.


Cubemap Stripe

Another thing we needed was pole merging: it’s a common problem of stereoscopic panoramas that top and bottom (zenith and nadir) look a bit weird because the left and right eye perspectives overlap there. One way to fix that is simply to let the parallax converge to zero right at the poles of the panorama, also known as pole merging. And since we are not Blender coders ourselves but know some very skilled programmers at the Blender Institute, we simply drove to Amsterdam in spring 2016 and talked to Sergey Sharybin, one of the amazing core developers. Within hours we had a working prototype, and not much later pole merging was implemented in Blender. Amazing!

For testing the VR renderings internally we used VRAIS, a VR viewer which we developed ourselves together with our colleagues from Mikavaa. To get feedback from conservators and museologists on the test renderings and panoramas we used ReViz, a VR viewer with comment and revisioning system, developed by Mikavaa.

The high resolution stereoscopic panoramas were then taken into the game engine Unity, where we put interactive elements into the scenes in order to create an intuitive and immersive VR experience, which visitors of the castle will be able to enjoy with a GearVR headset. The user can travel to the different time periods simply by focusing on the interactive overlays on the walls or by using the timeline.


Screenshot of some UI elements in the app

The Installation

Four turning chairs have been customized for this exhibition. The 4 headsets need to be continuously attached to a power supply in order to prevent discharging during constant usage. For that the museum’s constructor designed a custom solution with wiper contacts so that the USB cable that charges the GearVR headset can be attached to the turning chair without any cable clutter.

The goal is to be able to have four visitors experience RochlitzVR simultaneously without the need for museum attendants. So the GearVR itself needs to be protected against theft. Therefore a USB-C cable mount was designed that can be screwed onto the GearVR body. The cable itself is quite strong by itself. The museum figured that anyone who would bring the tools to cut those cables would also be able to bring gear to cut stronger metal cables, so the USB cable as theft protection should do it. Further, the phone itself is hidden behind the plastic GearVR cover, which is held in place by 4 screws.


The installation in the captain’s chamber

VR is still a relatively young medium (even though last year’s hype has already faded a bit). But especially for cases like this we think it can be a brilliant addition for museums, exhibitions and education. Of course it would be even more exciting to be able to walk around in VR, as you can for example with the HTC Vive, but for exhibitions that’s not really practical: it is much heavier, more expensive, and you definitely need someone to operate it and help the users, which makes it more expensive still. It just adds too much maintenance cost for a small museum.

This project took over a year from start to finish, with some breaks in between. We think it’s quite remarkable that an old castle is open and forward-thinking enough to embrace a fresh technology like VR, where a lot of development and change is still happening all the time. It was therefore an absolute pleasure to work with the castle’s museologist Frank Schmidt, who also initiated the whole thing.
Being able to use Blender for this project was a great experience. Over the course of the last year, several features have been implemented that helped our workflow a lot. Cycles has improved considerably and is now even faster than when we started. So I think it’s safe to say that Blender really is a perfect tool for producing high quality VR content, and we can’t wait to see what the future brings!

May 02, 2017

security things in Linux v4.11

Previously: v4.10.

Here’s a quick summary of some of the interesting security things in this week’s v4.11 release of the Linux kernel:

refcount_t infrastructure

Building on the efforts of Elena Reshetova, Hans Liljestrand, and David Windsor to port PaX’s PAX_REFCOUNT protection, Peter Zijlstra implemented a new kernel API for reference counting with the addition of the refcount_t type. Until now, all reference counters were implemented in the kernel using the atomic_t type, but it has a wide and general-purpose API that offers no reasonable way to provide protection against reference counter overflow vulnerabilities. With a dedicated type, a specialized API can be designed so that reference counting can be sanity-checked and provide a way to block overflows. With 2016 alone seeing at least a couple public exploitable reference counting vulnerabilities (e.g. CVE-2016-0728, CVE-2016-4558), this is going to be a welcome addition to the kernel. The arduous task of converting all the atomic_t reference counters to refcount_t will continue for a while to come.
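The semantics can be illustrated with a toy model. The sketch below is Python and purely illustrative (the real API is kernel C, with helpers such as refcount_inc() and refcount_dec_and_test()): instead of wrapping past its maximum, which is the exploitable behavior, a checked counter saturates, so a potential use-after-free degrades into a mere memory leak.

```python
class RefCount:
    """Toy model of a checked reference counter in the spirit of
    refcount_t: on overflow it saturates instead of wrapping, so an
    attacker who pushes the count past its maximum gets a leaked
    object rather than a premature free (use-after-free)."""
    MAX = (1 << 32) - 1  # counter width assumed 32 bits for this sketch

    def __init__(self, value=1):
        self.value = value

    def inc(self):
        if self.value >= self.MAX:
            self.value = self.MAX  # saturate: refuse to wrap around to 0
        else:
            self.value += 1

    def dec_and_test(self):
        """Return True when the count drops to zero (time to free)."""
        if self.value == self.MAX:
            return False  # a saturated counter never reaches zero
        self.value -= 1
        return self.value == 0
```

A plain wrapping counter would roll over to zero after enough increments and free the object while references to it still exist; the saturating version pins at MAX and leaks instead, which is the safe failure mode.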

CONFIG_DEBUG_RODATA renamed to CONFIG_STRICT_KERNEL_RWX

Laura Abbott landed changes to rename the kernel memory protection feature. The protection hadn’t been “debug” for over a decade, and it covers all kernel memory sections, not just “rodata”. Getting it consolidated under the top-level arch Kconfig file also brings some sanity to what was a per-architecture config, and signals that this is a fundamental kernel protection that needs to be enabled on all architectures.

read-only usermodehelper

A common way for attackers to escape confinement is to rewrite the user-mode helper sysctls (e.g. /proc/sys/kernel/modprobe) to run something of their choosing in the init namespace. To reduce attack surface within the kernel, Greg KH introduced CONFIG_STATIC_USERMODEHELPER, which switches all user-mode helper binaries to a single read-only path (which defaults to /sbin/usermode-helper). Userspace will need to support this with a new helper tool that can demultiplex the kernel request to a set of known binaries.

seccomp coredumps

Mike Frysinger noticed that it wasn’t possible to get coredumps out of processes killed by seccomp, which could make debugging frustrating, especially for automated crash dump analysis tools. In keeping with the existing documentation for SIGSYS, which says a coredump should be generated, he added support to dump core on seccomp SECCOMP_RET_KILL results.

structleak plugin

Ported from PaX, I landed the structleak plugin which enforces that any structure containing a __user annotation is fully initialized to 0 so that stack content exposures of these kinds of structures are entirely eliminated from the kernel. This was originally designed to stop a specific vulnerability, and will now continue to block similar exposures.

ASLR entropy sysctl on MIPS
Matt Redfearn implemented the ASLR entropy sysctl for MIPS, letting userspace choose to crank up the entropy used for memory layouts.

NX brk on powerpc

Denys Vlasenko fixed a long-standing bug where the kernel made assumptions about ELF memory layouts and defaulted the brk section on powerpc to be executable. Now it’s not, and that will keep the process heap from being abused.

That’s it for now; please let me know if I missed anything. The v4.12 merge window is open!

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

May 01, 2017

Manipulinks

This article by the Nielsen Norman Group encouraging web marketing folk to Stop Shaming Your Users for Micro Conversions uses a great term for those irritating “…or click here if you hate saving money” links: manipulinks. They credit the term to Steve Costello. I love a good new word.

Krita 3.1.3

Today we’re proud to release Krita 3.1.3. A ton of bug fixes, and some nice new features as well! Dmitry and Boud have taken a month off from implementing Kickstarter features to make Krita 3.1.3 as good and solid as we could make it. Thanks to all the people who have tested the alpha, the beta and the release candidate! Thanks to all the people who have worked on translations, too, and to Alexey Samoilov for picking up the maintenance of the Ubuntu Lime PPA.

Transform around Pivot (image by David Revoy)

We also have a new version of the Windows Explorer integration plugin, which makes it possible to show thumbnails of .kra and .ora files: several memory leaks were fixed, and a conflict with .ora files created by Oracle databases was also resolved. Thanks to Alvin Wong!

New Features

  • scale around pivot point added
  • context menu actions implemented for the default tool (cut, copy, paste, object ordering)
  • option added to allow multiple instances of Krita (BUG 377199)

The full list of changes, with 50+ bug fixes, can be found in the full release notes.

Download

The KDE download site has been updated to support https now.

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

A snap image is also available from the Ubuntu App Store. You can also use the Krita Lime PPA to install Krita 3.1.3 on Ubuntu and derivatives.

OSX

Source code

md5sums

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here.

April 28, 2017

Fri 2017/Apr/28

  • gboolean is not Rust bool

    I ran into an interesting bug in my Rust code for librsvg. I had these structures in the C code and in the Rust code, respectively; they are supposed to be bit-compatible with each other.

    /* C code */
    
    /* Keep this in sync with rust/src/viewbox.rs::RsvgViewBox */
    typedef struct {
        cairo_rectangle_t rect;
        gboolean active;
    } RsvgViewBox;
    
    
    
    /* Rust code */
    
    /* Keep this in sync with rsvg-private.h:RsvgViewBox */
    #[repr(C)]
    pub struct RsvgViewBox {
        pub rect: cairo::Rectangle,
        pub active: bool
    }
    	    

    After I finished rustifying one of the SVG element types, a test started failing in an interesting way. The Rust code was generating a valid RsvgViewBox structure, but the C code was receiving it with a garbled active field.

    It turns out that Rust's bool is not guaranteed to repr(C) as anything in particular. Rust chooses to do it as a single byte, with the only possible values being 0 or 1. In contrast, C code that uses gboolean assumes that gboolean is an int (... which C allows to be zero or anything else to represent a boolean value). Both structs have the same sizeof or mem::size_of, very likely due to struct alignment.

    I'm on x86_64, which is of course a little-endian platform, so the low byte of my gboolean active field had the correct value, but the higher bytes were garbage from the stack.
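The mismatch is easy to demonstrate outside Rust. Here is a small Python ctypes sketch (illustrative only; the struct layouts mirror the ones above) showing that both layouts end up with the same total size because of alignment padding, while an int read over memory where only one byte was initialized picks up garbage in its high bytes:

```python
import ctypes

# Layouts mirroring the structs above: cairo_rectangle_t is 4 doubles.
class CViewBox(ctypes.Structure):        # C side: gboolean is an int
    _fields_ = [("rect", ctypes.c_double * 4), ("active", ctypes.c_int)]

class RustViewBox(ctypes.Structure):     # Rust side: bool is a single byte
    _fields_ = [("rect", ctypes.c_double * 4), ("active", ctypes.c_bool)]

# Same total size, thanks to alignment padding after the last field.
assert ctypes.sizeof(CViewBox) == ctypes.sizeof(RustViewBox)

# But an int read over memory where only one byte was written picks up
# whatever bytes happen to follow (simulated garbage here; little-endian
# assumed, as in the post).
buf = (ctypes.c_ubyte * 4)(0x01, 0xAB, 0xCD, 0xEF)  # Rust wrote just 0x01
garbled = ctypes.cast(buf, ctypes.POINTER(ctypes.c_int)).contents.value
print(hex(garbled))  # not 1: only the low byte is meaningful
```

The sizeof equality is why the bug slips past a quick sanity check; only the value of the boolean field gives it away.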

    The solution is obvious in retrospect: if the C code says you have a gboolean, bite the bullet and use a glib_sys::gboolean.

    There are impl FromGlib<gboolean> for bool and the corresponding impl ToGlib for bool trait implementations, so you can do this:

    extern crate glib;
    extern crate glib_sys;
    
    use self::glib::translate::*;
    
    let my_gboolean: glib_sys::gboolean = g_some_function_that_returns_gboolean ();
    
    let my_rust_bool: bool = from_glib (my_gboolean);
    
    g_some_function_that_takes_gboolean (my_rust_bool.to_glib ());
    	    

    ... Which is really no different from the other from_glib() and to_glib() conversions you use when interfacing with the basic glib types.

    Interestingly enough, when I had functions exported from Rust to C with repr(C), or C functions imported into Rust with extern "C", my naive assumption of gboolean <-> bool worked fine for passed arguments and return values. This is probably because C promotes chars to ints in function calls, and Rust was looking only at the first char in the value. Maybe? And maybe it wouldn't have worked on a big-endian platform? Either way, I was into undefined behavior (bool is not something you repr(C)), so anything goes. I didn't disassemble things to see what was actually happening.

    There is an interesting, albeit extremely pedantic, discussion in this bug I filed about wanting a compiler warning for bool in a repr(C). People suggested that bool should just be represented as C's _Bool, or bool if you include <stdbool.h>. BUT! Glib's gboolean predates C99, and therefore stdbool.h.

    Instead of going into a pedantic rabbit hole of whether Glib is non-standard (I mean, it has only been around for over 20 years), or whether C99 is too new to use generally, I'll go with just using the appropriate conversions from gtk-rs.

    Alternative conclusion: C99 is the Pont Neuf of programming languages.

Krita 2017 Survey Results

A bit later than planned, but here are the 2017 Krita Survey results! We wanted to know a lot of things, like, what kind of hardware and screen resolution are most common, what drawing tablets were most common, and which ones gave most trouble. We had more than 1000 responses! Here’s a short summary, for the full report, head to Krita User Survey Report.

  • About 55% of respondents use Windows, about 40% Linux, and about 10% MacOS. Web-traffic-wise, 75% browses krita.
  • Almost half of respondents have an NVidia graphics card, fewer than 25% AMD; the rest is Intel.
  • The most common amount of RAM is 8GB.
  • The most common screen resolution is 1920×1080.
  • The most common tablet brand is Wacom, the next most common Huion; the tablet brand that gives people the most trouble is, unsurprisingly, Genius.
  • The most common image sizes are 1920×1080, then A4 at 300 DPI: 2480×3508.
  • And finally, most people wish that Krita were a bit faster — something we suspect will be the case forever!

And we’ve also learned (as if we didn’t know!) that Krita users are lovely people. We got so many great messages of support in the write-in section!

April 25, 2017

Flock Cod Registration Form Design

Flock logo (plain)

We’re prepping the regcfp site to open up registration and the CFP for Flock. As a number of changes are underfoot for this year’s Flock compared to previous ones, we’ve needed to change the registration form accordingly. (For those interested, the discussion has been taking place on the flock-planning list.)

This is a second draft of those screens after the first round of feedback. The first screen is going to spoil the surprises herein, hopefully.

First screen – change announcements, basic details

On the first screen, we announce a few changes that will be taking place at this year’s Flock. The most notable one is that we’ll now have partial Flock funding available, in an attempt to fund as many Fedora volunteers as possible to enable them to come to Flock. Another change is the addition of a nominal (~$25 USD) registration fee. We had an unusually high number of no-shows at the last Flock, which cost us funding that could have been used to bring more people to Flock. This registration fee is meant to discourage no-shows and enable more folks to come.

Flock registration mockup.

Second screen – social details, personal requirements

This is the screen where you can fill out your badge details as well as indicate your personal requirements (T-shirt size, dietary preferences/restrictions, etc.)

Second Flock registration screen - personal details for badge and prefs (dietary, etc.)

Third screen – no funding needed

Depending on how this shakes out, the next section may be split into a separate form, or be conditional on whether the registrant is requesting funding. The reason we would want to split funding requests into a separate form is that applicants will need to do some research into cost estimates for their travel, which could take some time, and we don’t want the form to time out while that’s going on.

Anyhow, this is what this page of the form looks like if you don’t need funding. We offer an opportunity to help out other attendees to those folks who don’t need funding here.

Third screen – travel details

This is the travel details page for those seeking financial assistance; it’s rather long, as we’ve many travel options, domestic and international.

Fourth screen – funding request review

This is a summary of the total funding request cost as well as the breakdown of partial funding options. I’d really like to hear your feedback on this, if it’s confusing or if it makes sense. Are there too many partial options?

mockup providing partial funding options

Final screen – summary

This screen is just a summary of everything submitted as well as information about next steps.

final screen - registration summary and next steps

What do you think?

Do these seem to make sense? Any confusion or issues come up as you were reading through them? Please let me know. You can drop a comment or join the convo on flock-planning.

Cheers!

(Update: Changed the language of the first questions in both of the 3rd screens; there were confusing double-negatives pointed out by Rebecca Fernandez. Thanks for the help!)

Reverse engineering ComputerHardwareIds.exe with winedbg

In an ideal world, vendors could use the same GUID value for hardware matching in Windows and Linux firmware. When installing firmware and drivers in Windows, vendors can use generated HardwareID GUIDs that match useful things like the BIOS vendor and the product SKU. It would make sense to use the same scheme as Microsoft. There are a few issues in an otherwise simple plan.

The first, solved with a simple kernel patch I wrote (awaiting review by Jean Delvare), exposes a few more SMBIOS fields into /sys/class/dmi/id that are required for the GUID calculation.

The second problem is a little more tricky. We don’t actually know how Microsoft joins the strings, what encoding is used, or, more importantly, the secret namespace UUID used to seed the GUID. The only thing we’ve got is the closed-source ComputerHardwareIds.exe program in the Windows DDK. This, luckily, runs in Wine, although Wine isn’t able to get the system firmware data itself. This can be worked around, and actually makes testing easier.

So, some research. All we know from the MSDN page is that “Each hardware ID string is converted into a GUID by using the SHA-1 hashing algorithm”, which actually tells us quite a bit. Generating a GUID from a SHA-1 hash means this has to be a type 5 UUID.

The reference code for a type-5 UUID is helpfully available in the IETF RFC document so it’s quite quick to get started with research. From a few minutes of searching online, the most likely symbols the program will be using are the BCrypt* set of functions. From the RFC code, we call the checksum generation update function with first the encoded namespace (aha!) and then the encoded joined string (ahaha!). For Win32 programs, BCryptHashData is the function we want to trace.

So, to check:

wine /home/hughsie/ComputerHardwareIds.exe /mfg "To be filled by O.E.M."

…matches the reference HardwareID-14 output from Microsoft. So onto debugging, using +relay shows all the calling values and return values from each Win32 exported symbol:

WINEDEBUG=+relay winedbg --gdb ~/ComputerHardwareIds.exe
Wine-gdb> b BCryptHashData
Wine-gdb> r ~/ComputerHardwareIds.exe /mfg "To be filled by O.E.M." /family "To be filled by O.E.M."
005b:Call bcrypt.BCryptHashData(0011bab8,0033fcf4,00000010,00000000) ret=0100699d
Breakpoint 1, 0x7ffd85f8 in BCryptHashData () from /lib/wine/bcrypt.dll.so
Wine-gdb> 

Great, so this is the secret namespace. The first parameter is the context, the second is the data address, the third is the length (0x10 is 16 bytes, the size of a GUID, so this is indeed the namespace) and the fourth is the flags. Let’s print out the data so we can see what it is:

Wine-gdb> x/16xb 0x0033fcf4
0x33fcf4:	0x70	0xff	0xd8	0x12	0x4c	0x7f	0x4c	0x7d
0x33fcfc:	0x00	0x00	0x00	0x00	0x00	0x00	0x00	0x00

Using either the uuid module in Python, or uuid_unparse in libuuid, we can format the namespace as 70ffd812-4c7f-4c7d-0000-000000000000 — now this doesn’t look like a randomly generated UUID to me! Onto the next thing, the encoding and joining policy:

Wine-gdb> c
005f:Call bcrypt.BCryptHashData(0011bb90,00341458,0000005a,00000000) ret=010069b3
Breakpoint 1, 0x7ffd85f8 in BCryptHashData () from /lib/wine/bcrypt.dll.so
Wine-gdb> x/90xb 0x00341458
0x341458:	0x54	0x00	0x6f	0x00	0x20	0x00	0x62	0x00
0x341460:	0x65	0x00	0x20	0x00	0x66	0x00	0x69	0x00
0x341468:	0x6c	0x00	0x6c	0x00	0x65	0x00	0x64	0x00
0x341470:	0x20	0x00	0x62	0x00	0x79	0x00	0x20	0x00
0x341478:	0x4f	0x00	0x2e	0x00	0x45	0x00	0x2e	0x00
0x341480:	0x4d	0x00	0x2e	0x00	0x26	0x00	0x54	0x00
0x341488:	0x6f	0x00	0x20	0x00	0x62	0x00	0x65	0x00
0x341490:	0x20	0x00	0x66	0x00	0x69	0x00	0x6c	0x00
0x341498:	0x6c	0x00	0x65	0x00	0x64	0x00	0x20	0x00
0x3414a0:	0x62	0x00	0x79	0x00	0x20	0x00	0x4f	0x00
0x3414a8:	0x2e	0x00	0x45	0x00	0x2e	0x00	0x4d	0x00
0x3414b0:	0x2e	0x00
Wine-gdb> q

So there we go. The encoding looks like UTF-16 (as expected, much of the Windows API is this way) and the joining character seems to be &.
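Putting the pieces together, the inferred scheme can be sketched in Python. This is a sketch under the assumptions reverse-engineered above (the 70ffd812… namespace, UTF-16LE encoding, “&” as the join character, standard RFC 4122 version/variant bits); verifying byte-for-byte agreement with ComputerHardwareIds.exe would still need a comparison on real hardware.

```python
import hashlib
import uuid

# The namespace recovered from the winedbg session above.
NAMESPACE = uuid.UUID("70ffd812-4c7f-4c7d-0000-000000000000")

def hardware_id(fields):
    """Join the SMBIOS fields with '&', encode as UTF-16LE, and build a
    type-5-style UUID over namespace bytes + data.  uuid.uuid5() can't
    be used directly because it encodes the name as UTF-8, not UTF-16LE."""
    data = "&".join(fields).encode("utf-16-le")
    digest = bytearray(hashlib.sha1(NAMESPACE.bytes + data).digest()[:16])
    digest[6] = (digest[6] & 0x0F) | 0x50  # version 5 (name-based, SHA-1)
    digest[8] = (digest[8] & 0x3F) | 0x80  # RFC 4122 variant
    return uuid.UUID(bytes=bytes(digest))
```

For example, hardware_id(["LENOVO", "ThinkPad T440s", "20ARS19C0C"]) would correspond to the Manufacturer + Family + ProductName combination in the table below.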

I’ve written some code in fwupd so that this happens:

$ fwupdmgr hwids
Computer Information
--------------------
BiosVendor: LENOVO
BiosVersion: GJET75WW (2.25 )
Manufacturer: LENOVO
Family: ThinkPad T440s
ProductName: 20ARS19C0C
ProductSku: LENOVO_MT_20AR_BU_Think_FM_ThinkPad T440s
EnclosureKind: 10
BaseboardManufacturer: LENOVO
BaseboardProduct: 20ARS19C0C

Hardware IDs
------------
{c4159f74-3d2c-526f-b6d1-fe24a2fbc881}   <- Manufacturer + Family + ProductName + ProductSku + BiosVendor + BiosVersion + BiosMajorRelease + BiosMinorRelease
{ff66cb74-5f5d-5669-875a-8a8f97be22c1}   <- Manufacturer + Family + ProductName + BiosVendor + BiosVersion + BiosMajorRelease + BiosMinorRelease
{2e4dad4e-27a0-5de0-8e92-f395fc3fa5ba}   <- Manufacturer + ProductName + BiosVendor + BiosVersion + BiosMajorRelease + BiosMinorRelease
{3faec92a-3ae3-5744-be88-495e90a7d541}   <- Manufacturer + Family + ProductName + ProductSku + BaseboardManufacturer + BaseboardProduct
{660ccba8-1b78-5a33-80e6-9fb8354ee873}   <- Manufacturer + Family + ProductName + ProductSku
{8dc9b7c5-f5d5-5850-9ab3-bd6f0549d814}   <- Manufacturer + Family + ProductName
{178cd22d-ad9f-562d-ae0a-34009822cdbe}   <- Manufacturer + ProductSku + BaseboardManufacturer + BaseboardProduct
{da1da9b6-62f5-5f22-8aaa-14db7eeda2a4}   <- Manufacturer + ProductSku
{059eb22d-6dc7-59af-abd3-94bbe017f67c}   <- Manufacturer + ProductName + BaseboardManufacturer + BaseboardProduct
{0cf8618d-9eff-537c-9f35-46861406eb9c}   <- Manufacturer + ProductName
{f4275c1f-6130-5191-845c-3426247eb6a1}   <- Manufacturer + Family + BaseboardManufacturer + BaseboardProduct
{db73af4c-4612-50f7-b8a7-787cf4871847}   <- Manufacturer + Family
{5e820764-888e-529d-a6f9-dfd12bacb160}   <- Manufacturer + EnclosureKind
{f8e1de5f-b68c-5f52-9d1a-f1ba52f1f773}   <- Manufacturer + BaseboardManufacturer + BaseboardProduct
{6de5d951-d755-576b-bd09-c5cf66b27234}   <- Manufacturer

Which basically matches the output of ComputerHardwareIds.exe on the same hardware. If the kernel patch gets into the next release I’ll merge the fwupd branch to master and allow vendors to start using the Microsoft HardwareID GUID values.

Typing Greek letters

I'm taking a MOOC that includes equations involving Greek letters like epsilon. I'm taking notes online, in Emacs, using the iimage mode tricks for taking MOOC class notes in emacs that I worked out a few years back.

Iimage mode works fine for taking screenshots of the blackboard in the videos, but sometimes I'd prefer to just put the equations inline in my file. At first I was typing out things like E = epsilon * sigma * T^4 but that's silly, and of course the professor isn't spelling out the Greek letters like that when he writes the equations on the blackboard. There's got to be a way to type Greek letters on this US keyboard.

I know how to type things like accented characters using the "Multi key" or "Compose key". In /etc/default/keyboard I have XKBOPTIONS="ctrl:nocaps,compose:menu,terminate:ctrl_alt_bksp" which, among other things, sets the compose key to be my "Menu" key, which I never used otherwise. And there's a file, /usr/share/X11/locale/en_US.UTF-8/Compose, that includes all the built-in compose key sequences. I have a shell function in my .zshrc,

composekey() {
  grep -i $1 /usr/share/X11/locale/en_US.UTF-8/Compose
}
so I can type something like composekey epsilon and find out how to type specific codes. But that didn't work so well for Greek letters. It turns out this is how you type them:
<dead_greek> <A>            : "Α"   U0391    # GREEK CAPITAL LETTER ALPHA
<dead_greek> <a>            : "α"   U03B1    # GREEK SMALL LETTER ALPHA
<dead_greek> <B>            : "Β"   U0392    # GREEK CAPITAL LETTER BETA
<dead_greek> <b>            : "β"   U03B2    # GREEK SMALL LETTER BETA
<dead_greek> <D>            : "Δ"   U0394    # GREEK CAPITAL LETTER DELTA
<dead_greek> <d>            : "δ"   U03B4    # GREEK SMALL LETTER DELTA
<dead_greek> <E>            : "Ε"   U0395    # GREEK CAPITAL LETTER EPSILON
<dead_greek> <e>            : "ε"   U03B5    # GREEK SMALL LETTER EPSILON
... and so forth. And this <dead_greek> key isn't actually defined in most US/English keyboard layouts. You can check whether it's defined for you with:
xmodmap -pke | grep dead_greek

Of course you can use xmodmap to define a key to be <dead_greek>. I stared at my keyboard for a bit, and decided that, considering how seldom I actually need to type Greek characters, I didn't see the point of losing a key for that purpose (though if you want to, here's a thread on how to map <dead_greek> with xmodmap).

I decided it would make much more sense to map it to the compose key with a prefix, like 'g', that I don't need otherwise. I can do that in ~/.XCompose like this:

<Multi_key> <g> <A>            : "Α"   U0391    # GREEK CAPITAL LETTER ALPHA
<Multi_key> <g> <a>            : "α"   U03B1    # GREEK SMALL LETTER ALPHA
<Multi_key> <g> <B>            : "Β"   U0392    # GREEK CAPITAL LETTER BETA
<Multi_key> <g> <b>            : "β"   U03B2    # GREEK SMALL LETTER BETA
<Multi_key> <g> <D>            : "Δ"   U0394    # GREEK CAPITAL LETTER DELTA
<Multi_key> <g> <d>            : "δ"   U03B4    # GREEK SMALL LETTER DELTA
<Multi_key> <g> <E>            : "Ε"   U0395    # GREEK CAPITAL LETTER EPSILON
<Multi_key> <g> <e>            : "ε"   U03B5    # GREEK SMALL LETTER EPSILON
... and so forth.

And now I can type [MENU] g e and a lovely ε appears, at least in any app that supports Greek fonts, which is most of them nowadays.

April 24, 2017

Of humans and feelings

It was a Wednesday morning. I just connected to email, to realise that something was wrong with the developer web site. People had been having issues accessing content, and they were upset. What started with “what’s wrong with Trac?” quickly escalated to “this is just one more symptom of how The Company doesn’t care about us community members”.

As I investigated the problem, I realised something horrible. It was all my fault.

I had made a settings change in the Trac instance the night before – attempting to impose some reason and structure in ACLs that had grown organically over time – and had accidentally removed a group, containing a number of community members not working for The Company, from having the access they had.

Oh, crap.

After the panic and cold sweats died down, I felt myself getting angry. These were people who knew me, who I had worked alongside for months, and yet the first reaction for at least a few of them was not to assume this was an honest mistake. It was to go straight to conspiracy theory. This was conscious, deliberate, and nefarious. We may not understand why it was done, but it’s obviously bad, and reflects the disdain of The Company.

Had I not done enough to earn people’s trust?

So I fixed the problem, and walked away. “Don’t respond in anger”, I told myself. I got a cup of coffee, talked about it with someone else, and came back 5 minutes later.

“Look at it from their side”, I said – before I started working with The Company, there had been a strained relationship with the community. Yes, they knew Dave Neary wouldn’t screw them over, but they had no way of knowing that it was Dave Neary’s mistake. I stopped taking it personally. There is deep-seated mistrust, and that takes time to heal, I said to myself.

Yet, how to respond on the mailing list thread? “We apologise for the oversight, blah blah blah” would be interpreted as “of course they fixed it, after they were caught”. But did I really want to put myself out there and admit I had made what was a pretty rookie mistake? Wouldn’t that undermine my credibility?

In the end, I bit the bullet. “I did some long-overdue maintenance on our Trac ACLs yesterday, they’re much cleaner and easier to maintain now that we’ve moved to more clearly defined roles. Unfortunately, I did not test the changes well enough before pushing them live, and I temporarily removed access from all non-The Company employees. It’s fixed now. I messed up, and I am sorry. I will be more careful in the future.” All first person – no hiding behind the corporate identity, no “we stand together”, no sugar-coating.

What happened next surprised me. The most vocal critic in the thread responded immediately to apologise, and to thank me for the transparency and honesty. Within half an hour, a number of people were praising me and The Company for our handling of the incident. The air went out of the outrage balloon, and a potential disaster became a growth opportunity – yes, the people running the community infrastructure are human too, and there is no conspiracy. The Man was not out to get us.

I no longer work for The Company, and the team has scattered to the winds. But I never forgot those cold sweats, that feeling of vulnerability, and the elation that followed the community reaction to a heartfelt mea culpa.

Part of the OSS Communities series – difficult conversations. Contribute your stories and tag them on Twitter with #osscommunities to be included.

April 21, 2017

Comb Ridge and Cedar Mesa Trip

[House on Fire ruin, Mule Canyon UT] Last week, my hiking group had its annual trip, which this year was Bluff, Utah, near Comb Ridge and Cedar Mesa, an area particular known for its Anasazi ruins and petroglyphs.

(I'm aware that "Anasazi" is considered a politically incorrect term these days, though it still seems to be in common use in Utah; it isn't in New Mexico. My view is that I can understand why Pueblo people dislike hearing their ancestors referred to by a term that means something like "ancient enemies" in Navajo; but if they want everyone to switch from using a mellifluous and easy to pronounce word like "Anasazi", they ought to come up with a better, and shorter, replacement than "Ancestral Puebloans." I mean, really.)

The photo at right is probably the most photogenic of the ruins I saw. It's in Mule Canyon, on Cedar Mesa, and it's called "House on Fire" because of the colors in the rock when the light is right.

The light was not right when we encountered it, in late morning around 10 am; but fortunately, we were doing an out-and-back hike. Someone in our group had said that the best light came when sunlight reflected off the red rock below the ruin up onto the rock above it, an effect I've seen in other places, most notably Bryce Canyon, where the hoodoos look positively radiant when seen backlit, because that's when the most reflected light adds to the reds and oranges in the rock.

Sure enough, when we got back to House on Fire at 1:30 pm, the light was much better. It wasn't completely obvious to the eye, but comparing the photos afterward, the difference is impressive: Changing light on House on Fire Ruin.

[Brain man? petroglyph at Sand Island] The weather was almost perfect for our trip, except for one overly hot afternoon on Wednesday. And the hikes were fairly perfect, too -- fantastic ruins you can see up close, huge petroglyph panels with hundreds of different creatures and patterns (and some that could only have been science fiction, like brain-man at left), sweeping views of canyons and slickrock, and the geology of Comb Ridge and the Monument Upwarp.

And in case you read my last article, on translucent windows, and are wondering how those generated waypoints worked: they were terrific, and in some cases made the difference between finding a ruin and wandering lost on the slickrock. I wish I'd had that years ago.

Most of what I have to say about the trip is already in the comments to the photos, so I'll just link to the photo page:

Photos: Bluff trip, 2017.

April 19, 2017

Visual Effects for The Man in the High Castle

About Barnstorm

Barnstorm VFX embodies the diverse skills, freewheeling spirit, and daredevil attitude of the stunt plane pilots of aviation’s early days. Nominated for a VES Award for their outstanding work on the TV series “The Man in the High Castle”, they have been using Blender as an integral part of their pipeline.

The following text is an edited version of answers from a Reddit AMA held by the heads of the team (Lawson Deming and Cory Jamieson) on February 3, 2017.

Getting into Blender

We’ve experimented with a variety of programs over the years, but for 3D work we settled on Blender starting about three years ago. It’s very unusual for VFX houses (at least in the US) to use Blender (as opposed to, say, Maya), but a number of great features caused us to switch over to it. One of them was the Cycles render engine, which we’ve used to render most of the 3D elements in High Castle and other shows. In order to deal with the huge rendering needs of High Castle, we set up cloud rendering on Amazon’s AWS servers through Deadline, which allowed us to have as many as 150 machines working at a time to render some of the big sequences.

In addition to Blender, we occasionally use other 3D programs, including Houdini for particle systems, fire, etc. Our texturing and material work is done in Substance Painter, and compositing is done in Nuke and After Effects.

The original decision to use Blender actually didn’t have anything to do with the cost (though it’s certainly helpful now that we have more people using it). We were already using Nuke and NukeX as a company (which are pretty expensive software packages) and had been using Maya for about a year. Before that, Lightwave was what we used.

Assembling a team

The real turning point came when we had to pull together a small team of freelancers to do a sequence. The process went a little bit like this:

1) We hire a 3D artist to start modeling for us. He’s an experienced modeler but his background is in a studio environment where there are a lot of departments and a pretty hefty pipeline to help deal with everything. He’s nominally a Maya guy, but the studio he was at had their own custom modeling software which he’s more familiar with, so even though he’s working in Maya, it’s not his first choice.

2) The modeling guy only does modeling, so we need to bring in a texture artist. She doesn’t actually use Maya for UV work or texturing. Instead she uses Mari (a Foundry product). She and the Modeler have some issues making the texturing work back and forth between Mari and Maya because they aren’t used to being outside of a studio pipeline that takes care of everything for them.

3) Since neither of the above are experienced in layout or rendering, we hire a third guy to do the setup of the scene. He is a Maya guy as well, but once he starts working, he says “oh, you guys don’t have V-Ray? I can get by in Mental Ray (Maya’s renderer at the time) but I prefer V-Ray.” We spend a ton of time trying to work around Mental Ray’s idiosyncrasies, including weird behavior with the HDR lighting and major gamma issues with the textures.

4) We need to do some particle simulation work, add smoke, and create some water in the same scene… Guess who uses Maya to do these things? No one, apparently. Water and particles were Houdini in this case. Smoke was FumeFX (which at the time existed only as a 3DStudio Max plugin and had no Maya version).

So, pop quiz. What is Maya doing for us in this instance? We’ve got a modeler who is begrudgingly using it but prefers other modeling software, a texture artist who isn’t using it at all, a layout/lighter who would rather be using a third-party rendering engine, and the prospect of doing SFX that would require multiple additional third-party software packages totaling thousands of dollars. At the time we were attempting this, the core team of our company was just 5 people, of which I was the only one who regularly did 3D work (in Lightwave).


I consider myself a generalist and had been puttering along in Maya, but I found it very obtuse and difficult to approach from a generalist standpoint. I’d just started dabbling in Blender and found it very approachable and easy to use, with a lot of support and tutorials out there. At the same time our three freelancers were struggling with the above sequence, I managed to build and render another shot from the scene fully in Blender (a program that I was a novice in at the time), utilizing its internal smoke simulation tools and the ocean simulation toolkit (which is actually a port of the one in Houdini) to do SFX on my own, and I got a great looking render out of Cycles.

Blender has its weaknesses, and as a general 3D package, it’s not the best in any one area, but neither is Maya. Any specialty task will always be better in another program. But without a pre-existing Maya pipeline, and given that Maya’s structure encourages the use of many specialists collaborating on a single task (rather than one well-rounded generalist working solo), it didn’t make sense to dump a lot of resources and money into making Maya work for such a small studio.

I ended up falling in love with working in Blender, and as we brought on and trained some other 3D artists, I encouraged them to use it. Eventually we found that we had become a Blender studio. That advantage of being good for a generalist, though, has also been a weakness as we’ve grown as a company, because it’s hard to find people who are really amazing artists in Blender. Our solution up until now has been to work hard at finding good Blender artists and to train others who want to learn.

Blender in production

Since Blender acts as a hub for our VFX work, it’s still possible for specialists to contribute from their respective programs. Initial modeling, for example, can be done in almost any program. It can be difficult, but the more people from other VFX studios I talk to, the more I realize that everybody’s pipeline is pretty messy, and even the studios who are fully behind Maya use a ton of other software and have a lot of custom scripts and techniques to get everything working the way they want it to.


We use Blender for modeling, animation, and rendering. Our partners at Theory Animation have focused a lot on how to make Blender better for animation (they all came from a Maya background as well but fell in love with Blender the same way I did). We’ve used Blender’s fluid system and particle system (though both of these need work) and render everything in Cycles. We still use Houdini for the stuff that it’s good at. We used Massive to create character animations for “The Man in the High Castle”. We also started using Substance Painter and Substance Designer for texture work. Cycles is good at exporting render layers, which we composited mostly in Nuke.

One of the big hurdles that Blender has to overcome is the fact that its licensing rules can make it legally difficult for it to interact with paid software. Most companies want to keep their code closed, so the open-source nature of Blender has made it tricky to, for example, get a Substance Designer plugin. It’s something we’re working on, though.

When collaborating with other companies, we usually separate the 3D and compositing aspects of the work to keep software issues from being a problem. It’s getting easier every day, though, especially now that Blender is starting to support Alembic. For season one, the sequence we worked on was completely separate and turnkey, so we didn’t have any issues sharing assets. For season 2, however, we did need to do a lot of conversion and re-modeling of elements. Also, many of the models we received were textured using UDIMs, which Blender does not currently support. It would be great for Blender to eventually adopt the UDIM workflow for texturing.
We do get a lot of raised eyebrows from people when we tell them we use Blender professionally. Hopefully the popularity of the show (and the fact that we’ve been nominated for some VFX awards) will help remove some of the stigma that Blender has developed over the years. It’s a great program.


We’ve developed a number of in-house solutions for Blender. We use Blender solely for 3D and NukeX for tracking and compositing, but we hand camera data back and forth between Nuke and Blender using .chan files (that’s technically built into Blender, but we’ve developed a system to make it a bit easier). Fitting Blender into a compositing pipeline (Nuke, EXR workflow) is surprisingly easy. Render layers, and the ease of setting up Blender, have made it pretty fast to pass assets around between artists and vendors. We also have a custom procedure and PBR shader setup for working with materials out of Substance Painter in Blender. A mix of Shotgun, our own asset tracking, and a workflow based on Blender linking, with a handful of add-ons, is needed to make sure everything works.
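Part of why .chan works well as an interchange format is its simplicity: it is just a plain-text table with one line per frame. The sample below is purely illustrative (values invented); the column order shown, frame then translation XYZ then rotation XYZ then vertical FOV, follows Nuke's usual export convention, though the exact layout depends on export settings:

```text
1   0.000   1.500   10.000   0.000   -12.500   0.000   24.713
2   0.012   1.500    9.950   0.000   -12.480   0.000   24.713
3   0.025   1.501    9.901   0.000   -12.460   0.000   24.713
```

Because the format is so minimal, either end of the pipeline can read or generate it with a trivial script, which is presumably what makes it attractive for shuttling tracked cameras between Nuke and Blender.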

Production Design

We worked really hard to make it feel correct. You can also thank the Production Designer, Andrew Boughton, who designed the practical sets in the show. He has a lot of architectural knowledge and was very collaborative with us to help make sure our designs matched the feel of the rest of the stuff in the show.

Our visual bible for Germania was a book called “Albert Speer: Architecture 1932-1942”. There were extensive and detailed plans for the transformation of Berlin, including blueprints for buildings like the Volkshalle. We did take some creative liberties with the arrangement and positioning of buildings for the sake of the narrative and to better coordinate with the production designer’s aesthetic for the sets. We looked at old film reels, including the famous “Triumph of the Will”, for references of how Nazi rallies were organized. One video game that I remember paying attention to was “Wolfenstein: The New Order”, because it presents a world that was taken over by the Nazis, though its presentation of post-war Berlin (including the Volkshalle) was much more futuristic and sci-fi-ish than what we went for. Our goal in MITHC was to create a sense of a world that felt fairly mundane and grounded in reality. The more it felt like something that could really happen, the more effective the message of the show would be.

April 18, 2017

Flock Cod Logo Ideas

Flock logo (plain)

Ryan Lerch put together an initial cut at a Flock 2017 logo and website design (flock2017-WIP branch of fedora-websites). It was an initial cut he offered for me to play with; in trying to work on some logistics for Flock to make sure things happen on time, I felt locking in on a final logo design would be helpful at this point.

Here is the initial cut of the top part of the website with the first draft logo:

Overall, this is very Cape Cod. Ryan created a beautiful piece of work in the landscape illustration and the overall palette. Honestly, this would work fine as-is, but there were a few points of critique for the logo specifically that I decided to explore –

  • There weren’t any standard Fedora fonts in it; I thought at least the date text could be in one of the standard Fedora fonts to tie back to Fedora / Flock. The standard ‘Flock’ logotype wasn’t used either; generally we try to keep stuff with logotypes in that logotype (if anything, so it seems more official).
  • The color palette is excellent and evocative of Cape Cod, but maybe some Fedora accent colors could tie it into the broader Fedora look and feel and make it seem more like part of the family.
  • The hierarchy of the logo is very “Cape Cod”-centric and my gut told me that “Flock” should be primary and “Cape Cod” should be subordinate to that.
  • Some helpful nautically-experienced folks in the broader community (particularly Pat David) pointed out the knot wasn’t tied quite correctly.

So here were the first couple of iterations I played with (B,C) based on Ryan’s design (A), but trying to take into account the critique / ideas above, with an illustration I created of the Lewis Bay lighthouse (the closest to the conference site):

I posted this to Twitter and Mastodon, and got a ton of very useful feedback. The main points I took away:

  • The seagulls probably complicate things too much – ditch ’em.
  • The Fedora logo was liked.
  • There seemed to be a preference for having the full dates for the conference in the logo.
  • The lighthouse beams in C were sloppily / badly aligned… 🙂 I knew this and was lazy and posted it anyway.
  • Some folks liked the dark blue ones because it was a Fedora color, some folks felt A’s color palette was more “Cape Cod” like.
  • At least a couple folks felt C was reminiscent of a nuclear symbol.
  • The simplicity / cleanness of A was admired.

So here’s the next round; things I tried:

  • Took a position and placed ‘Flock’ above ‘Cape Cod’ in the logo’s hierarchy.
  • Standardized all non-Flock-logotype fonts on Montserrat, which is a standard Fedora brand font.
  • Shifted to the original color palette from A.
  • Properly aligned the lighthouse lights.
  • Added the full dates to every mockup.
  • Corrected the knot tie.

One more round, based on further helpful Mastodon feedback. You can see some play with fonts and mashing up elements from other iterations together based on folks’ suggestions:

I have a few favorites. Maybe you do too. I’m not sure which to go with yet – I have been staring at these too long for today. I did some quick mockups of how they look in the website:

I’ll probably sit on this and come back with fresh eyes later. I’m happy for any / all feedback in the comments here!

3 things community managers can learn from the 50 state strategy

This is part of the opensource.com community blogging challenge: Maintaining Existing Community.

There are a lot of parallels between the world of politics and open source development. Open source community members can learn a lot about how political parties cultivate grass-roots support and local organizations, and empower those local organizations to keep people engaged. Between 2005 and 2009, Howard Dean was the chairman of the Democratic National Committee in the United States, and instituted what was known as the “50 state strategy” to grow the Democratic grass roots. That strategy, and what happened after it was changed, can teach community managers some valuable lessons about keeping community contributors. Here are three lessons community managers can learn from it.

Growing grass roots movements takes effort

The 50 state strategy meant allocating scarce resources across parts of the country where there was little or no hope of electing a congressman, as well as spending some resources in areas where there was no credible opposition. Every state and electoral district had some support from the national organization. Dean himself travelled to every state, and identified and empowered young, enthusiastic activists to lead local organizations. This was a lot of work, and many senior Democrats did not agree with the strategy, arguing that it was more important to focus effort on the limited number of races where the resources could make the difference between winning and losing (swing seats). Similarly, community managers have a limited number of hours in the day, and investing in outreach in areas where we do not already have a big community takes attention away from keeping our current users happy. But growing the community, and keeping community members engaged, means spending time in places where the short-term return on that investment is not clear. Identifying passionate community users and empowering them to create local user groups, staff a stand at a small local conference, or speak at a local meet-up helps keep them engaged and feeling like part of a greater community, and it also helps grow the community for the future.

Local groups mean you are part of the conversation

Because of the 50 state strategy, every political conversation in the USA had Democratic voices expressing their world-view. Every town hall meeting, local election, and teatime conversation had someone who could argue and defend the Democratic viewpoint on issues of local and national importance. This means that people were aware of what the party stood for, even in regions where that was not a popular platform. It also meant that there was an opportunity to get a feel for how national platform messaging was being received on the ground. And local groups would take that national platform and “adjust” it for a local audience – emphasizing things which were beneficial to the local community. Open source projects also benefit from having a local community presence, by raising awareness of your project to free software enthusiasts who hear about it at conferences and meet-ups. You also have an opportunity to improve your project, by getting feedback from users on their learning curve in adopting and using it. And you have an increasing number of people who can help you understand what messaging resonates with people, and which arguments for adoption are damp squibs which do not get traction, helping you promote your project more effectively.

Regular contact maintains engagement

After Howard Dean finished his term as head of the DNC in 2009 and Debbie Wasserman Schultz took over as chair, the 50 state strategy was abandoned in favour of a more strategic and focussed investment of effort in swing states. While there are many possible reasons that can be put forward, it is undeniable that the local Democratic party structures which flourished under Dean have lost traction. The Democratic party has lost hundreds of state legislature seats, dozens of state senate seats, and a number of governorships in “red” states since 2009, in spite of winning the presidency in 2012. The Democrats have lost control of the House and the Senate nationally, in spite of winning the popular vote in 2012 and 2016. For community managers, it is equally important to maintain contact with local user groups and community members, to ensure they feel empowered to act for the community, and to give them the resources they need to be successful. In the absence of regular contact, community members are less inclined to volunteer their time to promote the project and maintain a local community.

Summary

Growing local user groups and communities is a lot of work, but it can be very rewarding. Maintaining regular contact, empowering new community members to start a meet-up or a user group in their area, and creating resources for your local community members to speak about and promote your project is a great way to grow the community, and also to make life-long friends. Political organizations have a long history of organizing people to buy into a broader vision and support and promote it in their local communities.

What other lessons can community managers and organizers learn from political organizations?


April 16, 2017

April 14, 2017

Profiling Haskell: Don’t chase the red herring

I’m currently working on a small Haskell tool which helps me minimize the waiting time for catching a train into the city (or out of it). One feature I’ve implemented recently is an automated import of approx. 25 MB of compressed CSV data into an SQLite3 database, which was very slow in the beginning. Not chasing the first entry in the profiling output helped me optimize the implementation for a swift import.

Background

The data comes as a 25 MB zip archive of text files in a CSV format. Once everything is imported, the SQLite database grows to about 800 MiB. My work-in-progress solution was a cruddy shell + SQL script which imported the CSV files into an SQLite database. With this solution the import takes about 30 seconds, excluding the time needed to manually download the zip file. But this is not very portable, and I wanted a more user-friendly solution.

The initial Haskell implementation, using mostly the esqueleto and persistent DSL functions, showed abysmal performance. I had to stop the process after half an hour.

Finding the culprit

A first profiling pass showed this result summary:

COST CENTRE          MODULE                         %time %alloc

stepError            Database.Sqlite                 77.2    0.0
concat.ts'           Data.Text                        1.8   14.5
compareText.go       Data.Text                        1.4    0.0
concat.go.step       Data.Text                        1.0    8.2
concat               Data.Text                        0.9    1.4
concat.len           Data.Text                        0.8   13.9
sumP.go              Data.Text                        0.8    2.1
concat.go            Data.Text                        0.7    2.6
singleton_           Data.Text.Show                   0.6    4.0
run                  Data.Text.Array                  0.5    3.1
escape               Database.Persist.Sqlite          0.5    7.8
>>=.\                Data.Attoparsec.Internal.Types   0.5    1.4
singleton_.x         Data.Text.Show                   0.4    2.9
parseField           CSV.StopTime                     0.4    1.6
toNamedRecord        Data.Csv.Types                   0.3    1.2
fmap.\.ks'           Data.Csv.Conversion              0.3    2.9
insertSql'.ins       Database.Persist.Sqlite          0.2    1.4
compareText.go.(...) Data.Text                        0.1    4.3
compareText.go.(...) Data.Text                        0.1    4.3

Naturally I checked the implementation of the first function, since it seemed to have the largest impact. It is a simple foreign function call into C. Fraser Tweedale made me aware that there is no more speed to gain there, since it is already calling a C function. With that in mind I had to focus on the next entries instead. It turned out that’s where I gained most of the speed, bringing the tool closer to the crude SQL script while keeping it more user-friendly.

It turned out that persistent builds its SQL statements primarily through Data.Text concatenation. Doing that for every insert is very costly, since the statement is prepared, its values bound, and then executed anew for each insert (for reference, see this Stack Overflow answer).

The solution

My current solution is to prepare the statement once and only bind the values for each insert.

After another benchmark run, the import time now comes down to approximately one minute on my ThinkPad X1 Carbon.
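The prepare-once, bind-many idea is language-agnostic, so here is a minimal sketch using Python's standard sqlite3 module rather than the Haskell persistent code; the table name and columns are invented for illustration:

```python
import sqlite3

# Hypothetical schema standing in for the imported CSV data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stop_times (trip_id TEXT, stop_id TEXT, seq INTEGER)")

rows = [("t1", "s1", 1), ("t1", "s2", 2), ("t2", "s1", 1)]

# Slow pattern: building the SQL text per row forces SQLite to re-prepare
# (parse + plan) the statement for every single insert.
#
# Fast pattern: one parameterized statement, prepared once; only the bound
# values change per row. executemany() reuses the statement like this.
conn.executemany("INSERT INTO stop_times VALUES (?, ?, ?)", rows)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM stop_times").fetchone()[0])  # → 3
```

In the Haskell version the equivalent is, roughly, to prepare the INSERT once via the low-level Database.Sqlite API and then bind, step, and reset it for each row, instead of letting persistent rebuild and re-prepare the statement text per insert.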


April 13, 2017

Mailing list for fwupd and the LVFS

I’ve created a mailing list for fwupd and LVFS discussions. If you’re interested in firmware updating on Linux, or want to know what’s happening on the Linux Vendor Firmware Service, you probably want to join. There are a few interesting things I’ll post in the next few days.

What’s up with Graupner’s screen design?

Graupner

Graupner is a remote control model equipment company, founded in Germany in 1930. It went bankrupt in 2012 and was taken over by the South Korean manufacturer SJ Ltd one year later. Graupner now continues as a brand and sales organization. A large part of the product palette consists of stick-type transmitters for RC aircraft.

X-8E RC transmitter

The X-8E pistol-style transmitter for surface vehicles was first announced in 2013, but delayed until 2016. I can only assume this was caused by the change of ownership and restructuring. Given the rich feature-set and a price point of € 469.99 in their own shop, it is clearly in the high-end category and must face comparisons to Futaba 4PX, Hitec Lynx 4S, KO Propo EX1, Sanwa M12s and Spektrum DX6R.

However, the screen design seems incredibly rushed and not at all befitting a flagship model in its category. Let’s have a look at the dashboard screen, which should be visible fairly often.

Import

I imported the dashboard screen from the PDF manual into Inkscape and scaled it to match the resolution of 320 x 480 pixels, with a little tweaking to keep the raster image icons at their original 40 x 40 and lined up on the pixel grid. A photo of the real thing and the result of these first steps:

graupner_x-8e_screen_layout_as_is

As you can see, the layout is all over the place. At least the varying corner radii seem to appear only in the PDF.

Quick and easy improvements

graupner_x-8e_screen_layout_changes

A: In the second row, I removed TX, 2x RX and 4.8V as they are absent in the photo, though their visibility seems to be conditional. There’s space left for them, anyway.

B: Making things line up within a grid. A table section with left-aligned labels, and units (%) in their own row.

C: Vertical steering and throttle meters (ST and TH) are aligned with the physical controls (wheel and trigger), but steering is better shown on a left-right axis and having the same orientation for all 4 channel meters tames the layout. Graupner is already written above the screen; the space can be put to much better use.

There are several deeper issues I did not touch:

  • Lack of differentiation between pure indicators, toggles and menu buttons.
  • Questionable icons, especially the two in the third row.
  • Just white outlines for some elements, where filled backgrounds would make them more defined.
  • Missing and poor labelling with unnecessary abbreviations. The O.TIME in the bottom left is explained as model use time in the manual.

Filed under: User Experience

April 10, 2017

Encouraging new community members

My friend and colleague Stormy Peters just launched a challenge to the community – to blog on a specific community related topic before the end of the week. This week, the topic is “Encouraging new contributors”.

I have written about the topic of encouraging new contributors in the past, as have many others. So this week, I am kind of cheating, and collecting some of the “Greatest Hits”, articles I have written, or which others have written, which struck a chord on this topic.

Some of my own blog posts I have particular affection for on the topic are:

I also have a few go-to articles I return to often, for the clarity of their ideas, and for their general usefulness:

  • “Open Source Community, Simplified”, by Max Kanat-Alexander, does a great job of communicating the core values of communities which are successful at recruiting new contributors. I particularly like his mantra at the end: “be really, abnormally, really, really kind, and don’t be mean”. That about sums it up…
  • “Building Belonging”, by Jono Bacon: I love Jono’s ability to weave a narrative from personal stories, and the mental image of an 18 year old kid knocking on a stranger’s door and instantly feeling like he was with “his people” is great. This is a key concept of community for me – creating a sense of “us” where newcomers feel like part of a greater whole. Communities who fail to create a sense of belonging leave their engaged users on the outside, where there is a community of “core developers” and those outside. Communities who suck people in and indoctrinate them by force-feeding them kool-aid are successful at growing their communities.
  • I love all of “Producing Open Source Software”, but in the context of this topic, I particularly love the sentiment in the “Managing Participants” chapter: “Each interaction with a user is an opportunity to get a new participant. When a user takes the time to post to one of the project’s mailing lists, or to file a bug report, she has already tagged herself as having more potential for involvement than most users (from whom the project will never hear at all). Follow up on that potential.”

To close, one thing I think is particularly important when you are managing a team of professional developers who work together is to ensure that they understand that they are part of a team that extends beyond their walls. I have written about this before as the “water cooler” anti-pattern. To extend on what is written there, it is not enough to have a policy against internal discussion and decisions – creating a sense of community, with face to face time and with quality engagements with community members outside the company walls, can help a team member really feel like they are part of a community in addition to being a member of a development team in a company.


Interview with Marcos Ebrahim

Could you tell us something about yourself?

My name is Marcos Ebrahim. I’m an Egyptian artist and illustrator specializing in children’s book art, with 5 years of experience working on children’s animation episodes as a computer graphics artist. I have just finished my first whole book as a children’s illustrator on a freelance basis; it will be on the market on Amazon soon. I’m also working on my own children’s book project as author and illustrator.

What genre(s) do you work in?

Children’s illustrations and concept art in general or children’s book art specifically.

Do you paint professionally, as a hobby artist, or both?

Because I’m not a member of any children’s illustration agencies, associations or publishing houses yet, I’m now doing this work on a small scale as a freelancer. I changed careers to be an illustrator a few months ago, so I can’t call myself a professional yet. I hope to achieve that soon.

Whose work inspires you most — who are your role models as an artist?

Nathan Fowkes‘ works and illustrations. I found this illustrator and concept artist on a website that called him “the master of value and colour”. I was lucky enough to study online (an art program by Schoolism) under his supervision and learn a lot. However, I’m not a brilliant student 🙂

Other great illustrators and artists I like: Goro Fujita, Marco Bucci, Patrice Barton, Will Terry, Lynne Chapman, John Manders and many others. I always look forward to seeing their art works to learn from them.

How and when did you get to try digital painting for the first time?

About three years ago, when I was trying to use my new Wacom Intuos tablet for painting and drawing, practicing studies from the great Renaissance masters as fan art.

What makes you choose digital over traditional painting?

Let me describe it like this: “The Undo-Time Machine — the great digital button”. Besides the ability to make changes in illustrations easily, I found its benefit when I tried to work with authors who asked me to make changes that I couldn’t have made in a traditional painting without redoing the illustration from scratch.

How did you find out about Krita?

I used to surf YouTube to view illustrations and watch artists demo their work. Then I heard about Krita as open-source art software. So I decided to search more, and found out that the illustrations made with it could be similar to my work, so I devoted some of my time to learning more about it and trying it. Then I searched out more tutorial videos on YouTube. Frankly, the most impressive and helpful one was a long video tutorial by the art champion of the Krita community, David Revoy, making a whole comic page from scratch in Krita. He showed the whole illustration process, as well as the brushes and tools he provides for others to use (thank you very much!).

What was your first impression?

I think Krita has a user-friendly interface, and its tools become more familiar once I configure the shortcuts to match the most popular other art programs. This makes it easier to work without needing to learn many things in a short time.

What do you love about Krita?

I think the most wonderful thing is the brush sets and the way they look like real-world tools. In addition, I like some other tools, such as the transformation tool (perspective) and the pop-up tool.

Also I can say that working with Krita is the first time that I can work on one of my previous sketches and achieve a good result (according to my current art skills) that I’m happy with.

What do you think needs improvement in Krita? Is there anything that really annoys you?

As I mentioned to the Krita team, there are some issues that we could call bugs. However, I know that Krita is under active development and the great Krita team makes things better from one version to the next and adds new features all the time. Thanks to them, and I hope they will continue their great work!

What sets Krita apart from the other tools that you use?

I think that Krita, as open-source art software, could soon compete with commercial art software if it continues on this path (fixing bugs and adding new features).

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Frankly, I’ve only recently started to use Krita and I’ve finished only two pictures. One I could call an illustration, the other is the background of my art blog, trying to use the wrap tool to make a tiled pattern. You can see this in the screenshots.

What techniques and brushes did you use in it?

I prefer to make a sketch, then work on it adding base colors and adjusting
the lighting value to reach the final details.

Where can people see more of your work?

I add new works to my art blog once or twice every month or more
often, depending on the time I have available and whether I make anything
new.

http://marcosebrahimart.tumblr.com

And also here:
https://www.behance.net/MarcosEbrahim
http://marcosebrahim.deviantart.com/gallery/
https://www.artstation.com/artist/marcosebrahim

Anything else you’d like to share?

All the best wishes to the great Krita team for continuing success in their
work in developing Krita open source software for all artists and all people.

April 07, 2017

Krita 3.1.3 beta 1 released

A week after the alpha release, we present the beta release for Krita 3.1.3. Krita 3.1.3 will be a stable bugfix release, 4.0 will have the vector work and the python scripting. The final release of 3.1.3 is planned for end of April.

We’re still working on fixing more bugs for the final 3.1.3 release, so please test these builds, and if you find an issue, check whether it’s already in the bug tracker, and if not, report it!

Things fixed in this release, compared to 3.1.3-alpha:

  • Added the credits for the 2016 Kickstarter backers to the About Krita dialog
  • Use the name of the filter when creating a filter mask from the filter dialog instead of “effect”
  • Don’t cover startup dialogs (for instance, for the pdf import filter) with the splash screen
  • Fix a race condition that made a transform mask with a liquify transformation unreliable
  • Fix canvas blackouts when using the liquify tool at a high zoom level
  • Fix loading the playback cache
  • Use the native color selector on OSX: Krita’s custom color selector cannot pick screen colors on OSX
  • Set the default PNG compression to 3 instead of 9: this makes saving PNGs much faster, and the resulting size is the same.
  • Fix a crash when pressing the V shortcut to draw straight lines
  • Fix a warning when the installation is incomplete that still mentioned Calligra
  • Make dragging the guides with a tablet work correctly
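
The compression-level tradeoff behind that PNG default is easy to see outside Krita, too. As a rough illustration (not Krita's code), Python's zlib exposes the same DEFLATE levels 0-9 that PNG encoders use:

```python
import zlib

# Repetitive, image-like data (flat color areas compress well):
data = bytes(range(256)) * 4096  # 1 MiB

fast = zlib.compress(data, 3)    # lower level: much faster to produce
small = zlib.compress(data, 9)   # highest level: slower, rarely much smaller

# Both levels are lossless and round-trip to identical bytes:
assert zlib.decompress(fast) == data
assert zlib.decompress(small) == data
print(len(fast), len(small))
```

Lossless either way; level 9 just spends more CPU searching for longer matches, which for typical image data often buys little over level 3.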

Note for Windows Users

We are still struggling with Intel’s GPU drivers; recent Windows updates seem to have broken Krita’s OpenGL canvas on some systems, and since we don’t have access to a broken system, we cannot work around the issue. For now, if you are affected, you have to disable OpenGL in krita/settings/configure Krita/display.

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

A snap image for the Ubuntu App Store is also available. You can also use the Krita Lime PPA to install Krita 3.1.3-beta.1 on Ubuntu and derivatives.

OSX

Source code

md5sums

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here:
0x58b9596c722ea3bd.asc
. The signatures are here.

April 06, 2017

Clicking through a translucent window: using X11 input shapes

It happened again: someone sent me a JPEG file with an image of a topo map, with a hiking trail and interesting stopping points drawn on it. Better than nothing. But what I really want on a hike is GPX waypoints that I can load into OsmAnd, so I can see whether I'm still on the trail and how to get to each point from where I am now.

My PyTopo program lets you view the coordinates of any point, so you can make a waypoint from that. But for adding lots of waypoints, that's too much work, so I added an "Add Waypoint" context menu item -- that was easy, took maybe twenty minutes. PyTopo already had the ability to save its existing tracks and waypoints as a GPX file, so no problem there.

[transparent image viewer overlayed on top of topo map] But how do you locate the waypoints you want? You can do it the hard way: show the JPEG in one window, PyTopo in the other, and do the "let's see the road bends left then right, and the point is off to the northwest just above the right bend and about two and a half times as far away as the distance through both road bends". Ugh. It takes forever and it's terribly inaccurate.

More than once, I've wished for a way to put up a translucent image overlay that would let me click through it. So I could see the image, line it up with the map in PyTopo (resizing as needed), then click exactly where I wanted waypoints.

I needed two features beyond what normal image viewers offer: translucency, and the ability to pass mouse clicks through to the window underneath.

A translucent image viewer, in Python

The first part, translucency, turned out to be trivial. In a class inheriting from my Python ImageViewerWindow, I just needed to add this line to the constructor:

    self.set_opacity(.5)

Plus one more step. The window was translucent now, but it didn't look translucent, because I'm running a simple window manager (Openbox) that doesn't have a compositor built in. Turns out you can run a compositor on top of Openbox. There are lots of compositors; the first one I found, which worked fine, was xcompmgr -c -t-6 -l-6 -o.1

The -c specifies client-side compositing. -t and -l specify top and left offsets for window shadows (negative so they go on the bottom right). -o.1 sets the opacity of window shadows. In the long run, -o0 is probably best (no shadows at all) since the shadow interferes a bit with seeing the window under the translucent one. But having a subtle .1 shadow was useful while I was debugging.

That's all I needed: voilà, translucent windows. Now on to the (much) harder part.

A click-through window, in C

X11 has something called the SHAPE extension, which I experimented with once before to make a silly program called moonroot. It's also used for the familiar "xeyes" program. It's used to make windows that aren't square, by passing a shape mask telling X what shape you want your window to be. In theory, I knew I could do something like make a mask where every other pixel was transparent, which would simulate a translucent image, and I'd at least be able to pass clicks through on half the pixels.
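
That every-other-pixel fallback idea is language-independent; here's a tiny illustrative Python sketch (not X code) of such a mask, where 1 means "this pixel catches clicks" and 0 means "clicks pass through":

```python
def checkerboard_mask(width, height):
    """Alternating 1/0 per pixel: 1 = opaque (catches clicks),
    0 = transparent (clicks pass through to the window below)."""
    return [[(x + y) % 2 for x in range(width)] for y in range(height)]

mask = checkerboard_mask(4, 2)
# Every pixel's neighbors have the opposite value:
assert mask[0] == [0, 1, 0, 1]
assert mask[1] == [1, 0, 1, 0]
```

With such a mask you'd catch clicks on half the pixels and pass the rest through, simulating translucency for input.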

But fortunately, first I asked the estimable Openbox guru Mikael Magnusson, who tipped me off that the SHAPE extension also allows for an "input shape" that does exactly what I wanted: lets you catch events on only part of the window and pass them through on the rest, regardless of which parts of the window are visible.

Knowing that was great. Making it work was another matter. Input shapes turn out to be something hardly anyone uses, and there's very little documentation.

In both C and Python, I struggled with drawing onto a pixmap and using it to set the input shape. Finally I realized that there's a call to set the input shape from an X region. It's much easier to build a region out of rectangles than to draw onto a pixmap.

I got a C demo working first. The essence of it was this:

    if (!XShapeQueryExtension(dpy, &shape_event_base, &shape_error_base)) {
        printf("No SHAPE extension\n");
        return;
    }

    /* Make a shaped window, a rectangle smaller than the total
     * size of the window. The rest will be transparent.
     */
    region = CreateRegion(outerBound, outerBound,
                          XWinSize-outerBound*2, YWinSize-outerBound*2);
    XShapeCombineRegion(dpy, win, ShapeBounding, 0, 0, region, ShapeSet);
    XDestroyRegion(region);

    /* Make a frame region.
     * So in the outer frame, we get input, but inside it, it passes through.
     */
    region = CreateFrameRegion(innerBound);
    XShapeCombineRegion(dpy, win, ShapeInput, 0, 0, region, ShapeSet);
    XDestroyRegion(region);

CreateRegion sets up rectangle boundaries, then creates a region from those boundaries:

Region CreateRegion(int x, int y, int w, int h) {
    Region region = XCreateRegion();
    XRectangle rectangle;
    rectangle.x = x;
    rectangle.y = y;
    rectangle.width = w;
    rectangle.height = h;
    XUnionRectWithRegion(&rectangle, region, region);

    return region;
}

CreateFrameRegion() is similar but a little longer. Rather than post it all here, I've created a gist: transregion.c, demonstrating X11 shaped input.

Next problem: once I had shaped input working, I could no longer move or resize the window, because the window manager passed events through the window's titlebar and decorations as well as through the rest of the window. That's why you'll see that CreateFrameRegion call in the gist -- I had a theory that if I omitted the outer part of the window from the input shape, and handled input normally around the outside, maybe that would extend to the window manager decorations. But the problem turned out to be a minor Openbox bug, which Mikael quickly tracked down (in openbox/frame.c, in the XShapeCombineRectangles call on line 321, change ShapeBounding to kind). Openbox developers are the greatest!

Input Shapes in Python

Okay, now I had a proof of concept: X input shapes definitely can work, at least in C. How about in Python?

There's a set of python-xlib bindings, and it even supports the SHAPE extension, but it has no documentation and didn't seem to include input shapes. I filed a GitHub issue and traded a few notes with the maintainer of the project. It turned out that the newest version of python-xlib had been completely rewritten, and supposedly does support input shapes. But the API is completely different from the C API, and after wasting about half a day tweaking the demo program trying to reverse engineer it, I gave up.

Fortunately, it turns out there's a much easier way. Python-gtk has shape support, even including input shapes. And if you use regions instead of pixmaps, it's this simple:

    if self.is_composited():
        region = gtk.gdk.region_rectangle(gtk.gdk.Rectangle(0, 0, 1, 1))
        self.window.input_shape_combine_region(region, 0, 0)

My transimageviewer.py came out nice and simple, inheriting from imageviewer.py and adding only translucency and the input shape.

If you want to define an input shape based on pixmaps instead of regions, it's a bit harder and you need to use the Cairo drawing API. I never got as far as working code, but I believe it should go something like this:

    # Warning: untested code!
    # (assumes: import math, cairo at the top of the file)
    bitmap = gtk.gdk.Pixmap(None, self.width, self.height, 1)
    cr = bitmap.cairo_create()

    # Clear the whole 1-bit mask (transparent / click-through):
    cr.rectangle(0, 0, self.width, self.height)
    cr.set_operator(cairo.OPERATOR_CLEAR)
    cr.fill()

    # Draw a white (opaque, clickable) filled circle in the middle:
    cr.arc(self.width / 2, self.height / 2, self.width / 4,
           0, 2 * math.pi)
    cr.set_operator(cairo.OPERATOR_OVER)
    cr.fill()

    self.window.input_shape_combine_mask(bitmap, 0, 0)

The translucent image viewer worked just as I'd hoped. I was able to take a JPG of a trailmap, overlay it on top of a PyTopo window, scale the JPG using the normal Openbox window manager handles, then right-click on top of trail markers to set waypoints. When I was done, a "Save as GPX" in PyTopo and I had a file ready to take with me on my phone.

Comments be gone

We are sorry to inform you that we had to disable comments on this website. Currently there are more than 21 thousand messages in the spam queue plus another 2.6 thousand in the review queue. There is no way we can handle those. If you want to get in touch with us then head over to the contact page and find what suits you best – mailing lists, IRC, bug tracker, …
We hope to be able to get some alternative up and running, but that might take some time as it's not really a high priority for us.

darktable 2.2.4 released

we're proud to announce the fourth bugfix release for the 2.2 series of darktable, 2.2.4!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.4.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksum is:

$ sha256sum darktable-2.2.4.tar.xz
bd5445d6b81fc3288fb07362870e24bb0b5378cacad2c6e6602e32de676bf9d8  darktable-2.2.4.tar.xz
$ sha256sum darktable-2.2.4.6.dmg
b7e4aeaa4b275083fa98b2a20e77ceb3ee48af3f7cc48a89f41a035d699bd71c  darktable-2.2.4.6.dmg

Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please help us by visiting https://raw.pixls.us/ and making sure that we have the full raw sample set for your camera under CC0 license!

and the changelog as compared to 2.2.3 can be found below.

New features:

  • Better brush trace handling of opacity for better control.
  • tools: Add script to purge stale thumbnails
  • tools: A script to watch a folder for new images

Bugfixes:

  • DNG: fix camera name demangling. It used to report some wrong name for some cameras.
  • When using wayland, prefer XWayland, because native Wayland support is not fully functional yet
  • EXIF: properly handle image orientation '2' and '4' (swap them)
  • OpenCL: a few fixes in profiled denoise, demosaic and colormapping
  • tiling: do not process uselessly small end tiles
  • masks: avoid assertion failure in early phase of path generation
  • masks: reduce risk of unwanted self-finalization of small path shapes
  • Fix rare issue when expanding $() variables in import/export string
  • Camera import: fix ignore_jpg setting not having an effect
  • Picasa web exporter: unbreak after upstream API change
  • collection: fix query string for folders ( 'a' should match 'a/b' and 'a/c', but not 'ac/' )

Base Support:

  • Fujifilm X-T20 (only uncompressed raw, at the moment)
  • Fujifilm X100F (only uncompressed raw, at the moment)
  • Nikon COOLPIX B700 (12bit-uncompressed)
  • Olympus E-M1MarkII
  • Panasonic DMC-TZ61 (4:3, 3:2, 1:1, 16:9)
  • Panasonic DMC-ZS40 (4:3, 3:2, 1:1, 16:9)
  • Sony ILCE-6500

Noise Profiles:

  • Canon PowerShot G7 X Mark II
  • Olympus E-M1MarkII
  • LGE Nexus 5X

Wed 2017/Apr/05

  • The Rust+GNOME Hackfest in Mexico City, part 1

    Last week we had a Rust + GNOME hackfest in Mexico City (wiki page), kindly hosted by the Red Hat office there, in its very new and very cool office on the 22nd floor of a building with a fantastic view of the northern part of the city. Allow me to recount the event briefly.

    View from the Red Hat office

    Inexplicably, in GNOME's 20 years of existence, there has never been a hackfest or event in Mexico. This was the perfect chance to remedy that and introduce people to the wonders of Mexican food.

    My friend Joaquín Rosales, also from Xalapa, joined us as he is working on a very cool Rust-based monitoring system for small-scale spirulina farms using microcontrollers.

    Alberto Ruiz started getting people together around last November, with a couple of video chats with Rust maintainers to talk about making it possible to write GObject implementations in Rust. Niko Matsakis helped along with the gnarly details of making GObject's and Rust's memory management play nicely with each other.

    GObject implementations in Rust

    During the hackfest, I had the privilege of sitting next to Niko to do an intensive session of pair programming to function as a halfway-reliable GObject reference while I fixed my non-working laptop (intermission: kids, never update all your laptop's software right before traveling. It will not work once you reach your destination.).

    The first thing was to actually derive a new class from GObject, but in Rust. In C there is a lot of boilerplate code to do this, starting with the my_object_get_type() function. Civilized C code now defines all that boilerplate with the G_DEFINE_TYPE() macro. You can see a bit of the expanded code here.

    What G_DEFINE_TYPE() does is to define a few functions that tell the GType system about your new class. You then write a class_init() function where you define your table of virtual methods (just function pointers in C), you register signals which your class can emit (like "clicked" for a GtkButton), and you can also define object properties (like "text" for the textual contents of a GtkEntry) and whether they are readable/writable/etc.

    You also define an instance_init() function which is responsible for initializing the memory allocated to instances of your class. In C this is quite normal: you allocate some memory, and then you are responsible for initializing it. In Rust things are different: you cannot have uninitialized memory unless you jump through some unsafe hoops; you create fully-initialized objects in a single shot.

    Finally, you define a finalize function which is responsible for freeing your instance's data and chaining to the finalize method in your superclass.

    In principle, Rust lets you do all of this in the same way that you would in C, by calling functions in libgobject. In practice it is quite cumbersome. All the magic macros we have to define the GObject implementation boilerplate in gtype.h are there precisely because doing it in "plain C" is quite a drag. Rust makes this no different, but you can't use the C macros there.

    A GObject in Rust

    The first task was to write an actual GObject-derived class in Rust by hand, just to see how it could be done. Niko took care of this. You can see this mock object here. For example, here are some bits:

    #[repr(C)]
    pub struct Counter {
        parent: GObject,
    }
    
    struct CounterPrivate {
        f: Cell<u32>,
        dc: RefCell<Option<DropCounter>>,
    }
    
    #[repr(C)]
    pub struct CounterClass {
        parent_class: GObjectClass,
        add: Option<extern fn(&Counter, v: u32) -> u32>,
        get: Option<extern fn(&Counter) -> u32>,
        set_drop_counter: Option<extern fn(&Counter, DropCounter)>,
    }

    Here, Counter and CounterClass look very similar to the GObject boilerplate you would write in C. Both structs have GObject and GObjectClass as their first fields, so when doing C casts they will have the proper size and fields within those sub-structures.

    CounterPrivate is what you would declare as the private structure with the actual fields for your object. Here we have an f: Cell<u32> field, used to hold an int which we will mutate, and a DropCounter, a utility struct which we will use to assert that our Rust objects get dropped only once from the C-like implementation of the finalize() function.

    Also, note how we are declaring two virtual methods in the CounterClass struct, add() and get(). In C code that defines GObjects, that is how you can have overridable methods: by exposing them in the class vtable. Since GObject allows "abstract" methods by setting their vtable entries to NULL, we use an Option around a function pointer.
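
    The "NULL vtable slot means abstract method" convention is exactly what the Option wrapper models. A rough sketch of the same idea in Python (illustrative only, not part of the project's code):

```python
# A tiny vtable where None plays the role of C's NULL slot / Rust's None:
class CounterVTable:
    def __init__(self, add=None, get=None):
        self.add = add   # None means "abstract, not implemented here"
        self.get = get

vtable = CounterVTable(add=lambda this, v: this + v)

assert vtable.add is not None          # concrete: safe to call
assert vtable.add(2, 3) == 5
assert vtable.get is None              # abstract: callers must check first
```

    Rust's Option<extern fn(...)> gives the same "check before calling" discipline, but enforced by the type system rather than by convention.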

    The following code is the magic that registers our new type with the GObject machinery. It is what would go in the counter_get_type() function if it were implemented in C:

    lazy_static! {
        pub static ref COUNTER_GTYPE: GType = {
            unsafe {
                gobject_sys::g_type_register_static_simple(
                    gobject_sys::g_object_get_type(),
                    b"Counter\0" as *const u8 as *const i8,
                    mem::size_of::<CounterClass>() as u32,
                    Some(CounterClass::init),
                    mem::size_of::<Counter>() as u32,
                    Some(Counter::init),
                    GTypeFlags::empty())
            }
        };
    }

    If you squint a bit, this looks pretty much like the corresponding code in G_DEFINE_TYPE(). That lazy_static!() means, "run this only once, no matter how many times it is called"; it is similar to g_once_*(). Here, gobject_sys::g_type_register_static_simple() and gobject_sys::g_object_get_type() are the direct Rust bindings to the corresponding C functions; they come from the low-level gobject-sys module in gtk-rs.

    Here is the equivalent to counter_class_init():

    impl CounterClass {
        extern "C" fn init(klass: gpointer, _klass_data: gpointer) {
            unsafe {
                let g_object_class = klass as *mut GObjectClass;
                (*g_object_class).finalize = Some(Counter::finalize);
    
                gobject_sys::g_type_class_add_private(klass, mem::size_of::<CounterPrivate>());
    
                let klass = klass as *mut CounterClass;
                let klass: &mut CounterClass = &mut *klass;
                klass.add = Some(methods::add);
                klass.get = Some(methods::get);
                klass.set_drop_counter = Some(methods::set_drop_counter);
            }
        }
    }

    Again, this is pretty much identical to the C implementation of a class_init() function. We even set the standard g_object_class.finalize field to point to our finalizer, written in Rust. We add a private structure with the size of our CounterPrivate...

    ... which we later are able to fetch like this:

    impl Counter {
        fn private(&self) -> &CounterPrivate {
            unsafe {
                let this = self as *const Counter as *mut GTypeInstance;
                let private = gobject_sys::g_type_instance_get_private(this, *COUNTER_GTYPE);
                let private = private as *const CounterPrivate;
                &*private
            }
        }
    }

    I.e. we call g_type_instance_get_private(), just like C code would, to get the private structure. Then we cast it to our CounterPrivate and return that.

    But that's all boilerplate

    Yeah, pretty much. But don't worry! Niko made it possible to get rid of it in a comfortable way! But first, let's look at the non-boilerplate part of our Counter object. Here are its two interesting methods:

    mod methods {
        #[allow(unused_imports)]
        use super::{Counter, CounterPrivate, CounterClass};
    
        pub(super) extern fn add(this: &Counter, v: u32) -> u32 {
            let private = this.private();
            let v = private.f.get() + v;
            private.f.set(v);
            v
        }
    
        pub(super) extern fn get(this: &Counter) -> u32 {
            this.private().f.get()
        }
    }

    These should be familiar to people who implement GObjects in C. You first get the private structure for your instance, and then frob it as needed.

    No boilerplate, please

    Niko spent the following two days writing a plugin for the Rust compiler so that we can have a mini-language to write GObject implementations comfortably. Instead of all the gunk above, you can simply write this:

    extern crate gobject_gen;
    use gobject_gen::gobject_gen;
    
    use std::cell::Cell;
    
    gobject_gen! {
        class Counter {
            struct CounterPrivate {
                f: Cell<u32>
            }
    
            fn add(&self, x: u32) -> u32 {
                let private = self.private();
                let v = private.f.get() + x;
                private.f.set(v);
                v
            }
    
            fn get(&self) -> u32 {
                self.private().f.get()
            }
        }
    }

    This call to gobject_gen!() gets expanded to the necessary boilerplate code. That code knows how to register the GType, how to create the class_init() and instance_init() functions, how to register the private structure and the private() utility to get it, and how to define finalize(). It will fill the vtable as appropriate with the methods you create.

    We figured out that this looks pretty much like Vala, except that it generates GObjects in Rust, callable by Rust itself or by any other language, once the GObject Introspection machinery around this is written. That is, just like Vala, but for Rust.

    And this is pretty good! We are taking an object system in C, which we must keep around for compatibility reasons and for language bindings, and making an easy way to write objects for it in a safe, maintained language. Vala is safer than plain C, but it doesn't have all the machinery to guarantee correctness that Rust has. Finally, Rust is definitely better maintained than Vala.

    There is still a lot of work to do. We have to support registering and emitting signals, registering and notifying GObject properties, and probably some other GType arcana as well. Vala already provides nice syntax to do this, and we can probably use it with only a few changes.

    Finally, the ideal situation would be for this compiler plugin, or an associated "cargo gir" step, to emit the necessary GObject Introspection information so that these GObjects can be called from other languages automatically. We could also spit out C header files to consume the Rust GObjects from C.

    Railing with horses

    And the other people in the hackfest?

    I'll tell you in the next blog post!

April 02, 2017

Stellarium 0.12.9

Stellarium 0.12.9 has been released today!

The 0.12 series is the LTS branch for owners of old computers (those with weak graphics cards), and this release contains one fix for the Solar System Editor plugin.

March 31, 2017

Show mounted filesystems

Used to be that you could see your mounted filesystems by typing mount or df. But with modern Linux kernels, all sorts of things are implemented as virtual filesystems -- proc, /run, /sys/kernel/security, /dev/shm, /run/lock, /sys/fs/cgroup -- I have no idea what most of these things are, except that they make it much more difficult to answer questions like "Where did that ebook reader mount, and did I already unmount it so it's safe to unplug it?" Neither mount nor df has a simple option to get rid of all the extraneous virtual filesystems and show only real filesystems.

http://unix.stackexchange.com/questions/177014/showing-only-interesting-mount-points-filtering-non-interesting-types had some suggestions that got me started:

mount -t ext3,ext4,cifs,nfs,nfs4,zfs
mount | grep -E --color=never '^(/|[[:alnum:]\.-]*:/)'

Another answer there says it's better to use findmnt --df, but that still shows all the tmpfs entries (findmnt --df | grep -v tmpfs might do the job).

And real mounts are always mounted on a filesystem path starting with /, so you can do mount | grep '^/'.
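
The same heuristic takes only a few lines in Python, too; here's an illustrative version of that filter (the regex mirrors the grep above, also admitting host:/path NFS sources):

```python
import re

# Matches sources that are absolute paths or host:/path remotes,
# the same idea as: mount | grep -E '^(/|[[:alnum:]\.-]*:/)'
REAL_SOURCE = re.compile(r'^(/|[A-Za-z0-9.-]+:/)')

def real_mounts(mount_output):
    """Keep only lines whose source device looks like a real mount."""
    return [line for line in mount_output.splitlines()
            if REAL_SOURCE.match(line)]

sample = """\
proc on /proc type proc (rw)
/dev/sda1 on / type ext4 (rw)
tmpfs on /run type tmpfs (rw)
server:/export on /mnt/nfs type nfs4 (rw)"""

assert real_mounts(sample) == [
    "/dev/sda1 on / type ext4 (rw)",
    "server:/export on /mnt/nfs type nfs4 (rw)",
]
```

Like the grep version, this is a whitelist of source shapes rather than filesystem types, so it keeps NFS and sshfs-style mounts automatically.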

But it also turns out that mount will accept a blacklist of types as well as a whitelist: -t notype1,notype2... I prefer the idea of excluding a blacklist of filesystem types versus restricting it to a whitelist; that way if I mount something unusual like curlftpfs that I forgot to add to the whitelist, or I mount a USB stick with a filesystem type I don't use very often (ntfs?), I'll see it.

On my system, this was the list of types I had to disable (sheesh!):

mount -t nosysfs,nodevtmpfs,nocgroup,nomqueue,notmpfs,noproc,nopstore,nohugetlbfs,nodebugfs,nodevpts,noautofs,nosecurityfs,nofusectl

df is easier: like findmnt, it excludes most of those filesystem types to begin with, so there are only a few you need to exclude:

df -hTx tmpfs -x devtmpfs -x rootfs

Obviously I don't want to have to type either of those commands every time I want to check my mount list. So I put this in my .zshrc. If you call mount or df with no args, it applies the filters; otherwise it passes your arguments through. Of course, you could make a similar alias for findmnt.

# Mount and df are no longer useful to show mounted filesystems,
# since they show so much irrelevant crap now.
# Here are ways to clean them up:
mount() {
    if [[ $# -ne 0 ]]; then
        /bin/mount "$@"
        return
    fi

    # Else called with no arguments: we want to list mounted filesystems.
    /bin/mount -t nosysfs,nodevtmpfs,nocgroup,nomqueue,notmpfs,noproc,nopstore,nohugetlbfs,nodebugfs,nodevpts,noautofs,nosecurityfs,nofusectl
}

df() {
    if [[ $# -ne 0 ]]; then
        /bin/df "$@"
        return
    fi

    # Else called with no arguments: we want to list mounted filesystems.
    /bin/df -hTx tmpfs -x devtmpfs -x rootfs
}

Update: Chris X Edwards suggests lsblk or lsblk -o 'NAME,MOUNTPOINT'. It wouldn't have solved my problem because it only shows /dev devices, not virtual filesystems like sshfs, but it's still a command worth knowing about.

Recipe Icon

Initially I was going to do a more elaborate workflow tutorial, but time flies when you’re having fun on 3.24. With the release out, I’d rather publish this than let it rot. Maybe the next one!


Krita 3.1.3 Alpha released

We’re working like crazy on the next versions of Krita — 3.1.3 and 4.0. Krita 3.1.3 will be a stable bugfix release, 4.0 will have the vector work and the python scripting. This week we’ve prepared the first 3.1.3 alpha builds for testing! The final release of 3.1.3 is planned for end of April.

We’re still working on fixing more bugs for the final 3.1.3 release, so please test these builds, and if you find an issue, check whether it’s already in the bug tracker, and if not, report it!

Note for Windows Users

We are still struggling with Intel’s GPU drivers; recent Windows updates seem to have broken Krita’s OpenGL canvas on some systems, and since we don’t have access to a broken system, we cannot work around the issue.

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

A snap image for the Ubuntu App Store will be available soon. You can also use the Krita Lime PPA to install Krita 3.1.3-alpha.2 on Ubuntu and derivatives.

OSX

Source code

md5sums

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here:
0x58b9596c722ea3bd.asc
. The signatures are here.

March 30, 2017

Game art course released!

Here’s Nathan with a piece of good news:

After months of work, I’m glad to announce that Make Professional Painterly Game Art with Krita is out! It is the first Game Art training for your favourite digital painting program.

In this course, you’ll learn:
1. The techniques professionals use to make beautiful sprites
2. How to create characters, background and even simple UI
3. How to build smart, reusable assets

With the pro and premium versions, you’ll also get the opportunity to improve your art fundamentals, become more efficient with Krita, and build a detailed game mockup for your portfolio.

The course page has free sample tutorials and the answers to all of your questions.

Learn more:
https://gumroad.com/l/krita-game-art-tutorial-1

 

March 29, 2017

GIMP is Going to LGM!


Tall and tan and young and lovely...

This year's Libre Graphics Meeting (2017) is going to be held in the lovely city seen above: Rio de Janeiro, Brazil! This is an important meeting for so many people in the Free/Libre art community, as it's one of the only times they have an opportunity to meet face to face.

We’ve had some folks attending the past LGMs (Leipzig and London) and it’s a wonderful opportunity to spend some time with friends. (Also, @frd from the community will be there!)

GIMP and darktable at LGM GIMPers, some darktable folks, and even Nate Willis at the flat during LGM/London!

So in the spirit of camaraderie, I have a request…

The GIMP team will be in attendance this year. I happen to have a fondness for them so I’m asking anyone reading this to please head over and donate to the project.

GIMP Wilber

That link is for the GNOME PayPal account, but there are other ways to donate as well.

This is one of the few times that the GIMP team gets a chance to meet in person. They use the time to hack at GIMP and to manage internal business. The time they get to spend together is invaluable to the project and by extension everyone that uses GIMP.

Just look at these faces! Surely this (Brady) Bunch of folks is worth helping to get a better GIMP?

GIMPers at LGM/London Left to right, top to bottom:
Ville, Mitch, Øyvind,
Simon, Liam, João,
Aryeom, Jehan, Michael

Attending

Besides @frd I’m not sure who else from the community might be attending, so if I’ve missed you I apologize! Please feel free to use this topic to communicate and coordinate if you’d like.

It appears that personally I’m on a biennial schedule with attending LGM - so I’m looking forward to next year to be able to catch up with everyone!

March 27, 2017

Interview with Dolly

Could you tell us something about yourself?

My nickname is Dolly, I am 11 years old, I live in Cannock, Staffordshire, England. I am at Secondary school, and at the weekends I attend drama, dance and singing lessons, I like drawing and recently started using the Krita app.

How did you find out about Krita?

My dad and my friend told me about it.

Do you draw on paper too, and which is more fun, paper or computer?

I draw on paper, and I like Krita more than paper art as there’s a lot more colours instantly available than when I do paper art.

What kind of pictures do you draw?

I mostly draw my original character (called Phantom), I draw animals, trees and stars too.

What is easy to do with Krita? What is difficult to do?

I think choosing the colour is easy, it's really good. I find getting the right brush size a little difficult due to the scrolling needed to select the brush size.

Which thing about Krita is most fun?

The thing most fun for me is colouring in my pictures as there is a great range of colour available, far more than in my pencil case.

Is there anything in Krita that you’d like to be different?

I think Krita is almost perfect the way it is at the moment however if the brush selection expanded automatically instead of having to scroll through it would be better for me.

Can you show us a picture you made with Krita?

I can, I have attached some of my favourites that I have done for my friends.

How did you make it?

I usually start with a standard base line made up of a circle for the face and the ears, I normally add the hair and the other features (eyes, nose and mouth) and finally colour and shade and include any accessories.

Is there anything else you’d like to tell us?

I really enjoy Krita, I think it's one of the best drawing programs there is!

March 25, 2017

Reading keypresses in Python

As part of preparation for Everyone Does IT, I was working on a silly hack to my Python script that plays notes and chords: I wanted to use the computer keyboard like a music keyboard, and play different notes when I press different keys. Obviously, in a case like that I don't want line buffering -- I want the program to play notes as soon as I press a key, not wait until I hit Enter and then play the whole line at once. In Unix that's called "cbreak mode".

There are a few ways to do this in Python. The most straightforward is to use the curses library, which is designed for console-based user interfaces and games. But importing curses is overkill just to read keys.

Years ago, I found a guide on the official Python Library and Extension FAQ: Python: How do I get a single keypress at a time?. I'd even used it once, for a one-off Raspberry Pi project that I didn't end up using much. I hadn't done much testing of it at the time, but trying it now, I found a big problem: it doesn't block.

Blocking is whether the read() waits for input or returns immediately. If I read a character with c = sys.stdin.read(1) but there's been no character typed yet, a non-blocking read will throw an IOError exception, while a blocking read will wait, not returning until the user types a character.

In the code on that Python FAQ page, blocking looks like it should be optional. This line:

fcntl.fcntl(fd, fcntl.F_SETFL, oldflags | os.O_NONBLOCK)
is the part that requests non-blocking reads. Skipping that should let me read characters one at a time, blocking until each character is typed. But in practice, it doesn't work: if I omit the O_NONBLOCK flag, reads never return, not even when I hit Enter; if I set O_NONBLOCK, the read immediately raises an IOError. So I have to call read() over and over, spinning the CPU at 100% while I wait for the user to type something.

The way this is supposed to work is documented in the termios man page. Part of what tcgetattr returns is something called the cc structure, which includes two members called Vmin and Vtime. man termios is very clear on how they're supposed to work: for blocking, single character reads, you set Vmin to 1 (that's the number of characters you want it to batch up before returning), and Vtime to 0 (return immediately after getting that one character). But setting them in Python with tcsetattr doesn't make any difference.
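For reference, here is roughly what that man-page recipe looks like translated to Python (the cc array is index 6 of the list tcgetattr returns). This is a sketch of the approach described above, shown applied to a pseudo-terminal for safety; as noted, it didn't give me blocking reads on stdin:

```python
import termios

def set_min_time(fd):
    """Apply the termios man page recipe: VMIN=1, VTIME=0."""
    attr = termios.tcgetattr(fd)
    attr[3] &= ~(termios.ICANON | termios.ECHO)   # no line buffering, no echo
    attr[6][termios.VMIN] = 1     # read() should return after 1 character...
    attr[6][termios.VTIME] = 0    # ...with no inter-character timeout
    termios.tcsetattr(fd, termios.TCSANOW, attr)

# Normally you'd call set_min_time(sys.stdin.fileno());
# a pty works for trying it out without touching your real terminal:
import pty
master, slave = pty.openpty()
set_min_time(slave)
```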

(Python also has a module called tty that's supposed to simplify this stuff, and you should be able to call tty.setcbreak(fd). But that didn't work any better than termios: I suspect it just calls termios under the hood.)

But after a few hours of fiddling and googling, I realized that even if Python's termios can't block, there are other ways of blocking on input. The select system call lets you wait on any file descriptor until it has input. So I should be able to set stdin to be non-blocking, then do my own blocking by waiting for it with select.

And that worked. Here's a minimal example:

import sys, os
import termios, fcntl
import select

fd = sys.stdin.fileno()

# Save the current terminal settings so they can be restored on exit:
oldterm = termios.tcgetattr(fd)
oldflags = fcntl.fcntl(fd, fcntl.F_GETFL)

# Turn off canonical (line-buffered) mode and echo:
newattr = termios.tcgetattr(fd)
newattr[3] = newattr[3] & ~termios.ICANON
newattr[3] = newattr[3] & ~termios.ECHO
termios.tcsetattr(fd, termios.TCSANOW, newattr)

# Make reads non-blocking; select() will do the blocking for us:
fcntl.fcntl(fd, fcntl.F_SETFL, oldflags | os.O_NONBLOCK)

print "Type some stuff"
while True:
    inp, outp, err = select.select([sys.stdin], [], [])
    c = sys.stdin.read()
    if c == 'q':
        break
    print "-", c

# Reset the terminal:
termios.tcsetattr(fd, termios.TCSAFLUSH, oldterm)
fcntl.fcntl(fd, fcntl.F_SETFL, oldflags)

A less minimal example: keyreader.py, a class to read characters, with blocking and echo optional. It also cleans up after itself on exit, though most of the time that seems to happen automatically when I exit the Python script.
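For the flavor of it, here's a stripped-down sketch of what such a class might look like (this is illustrative only, not the actual keyreader.py; the names and details are mine):

```python
import sys, os, termios, fcntl, select

class KeyReader(object):
    """Cbreak-mode character reader (an illustrative sketch, not the
    real keyreader.py). Use it as a context manager so the terminal
    settings are always restored on exit."""

    def __init__(self, fd=None, echo=False, block=True):
        self.fd = sys.stdin.fileno() if fd is None else fd
        self.echo = echo
        self.block = block

    def __enter__(self):
        # Save the terminal state so __exit__ can restore it:
        self.oldterm = termios.tcgetattr(self.fd)
        self.oldflags = fcntl.fcntl(self.fd, fcntl.F_GETFL)

        newattr = termios.tcgetattr(self.fd)
        newattr[3] &= ~termios.ICANON            # no line buffering
        if not self.echo:
            newattr[3] &= ~termios.ECHO          # don't echo keypresses
        termios.tcsetattr(self.fd, termios.TCSANOW, newattr)

        # Non-blocking reads; select() does the blocking when asked to:
        fcntl.fcntl(self.fd, fcntl.F_SETFL, self.oldflags | os.O_NONBLOCK)
        return self

    def getch(self):
        """Return one character, or None if non-blocking and none typed."""
        if self.block:
            select.select([self.fd], [], [])
        try:
            c = os.read(self.fd, 1)
            return c.decode('latin-1') if c else None
        except OSError:
            return None

    def __exit__(self, *exc):
        termios.tcsetattr(self.fd, termios.TCSAFLUSH, self.oldterm)
        fcntl.fcntl(self.fd, fcntl.F_SETFL, self.oldflags)
```

Used like `with KeyReader() as kr: c = kr.getch()`, so the terminal is cleaned up even if the script dies partway through.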

March 24, 2017

RawTherapee and Pentax Pixel Shift

Supporting multi-file raw formats

What is Pixel Shift?

Modern digital sensors (with a few exceptions) use an arrangement of RGB filters over a square grid of photosites. In a given 2x2 square of photosites, the filters allow two green, one red, and one blue color through to the photosites. These are arranged on a grid:

Bayer pattern on sensor

The pattern is known as a Bayer pattern, after its creator, Bryce Bayer of Eastman Kodak. The resulting mosaic shows how each color is offset across the grid.

Bayer pattern on sensor profile

Each photosite captures a single color. To produce a full color value at each pixel, the other two color values must be interpolated from the surrounding grid; this interpolation is referred to as demosaicing, and many different algorithms exist for calculating it.

Bayer Interpolation Example The final RGB value for the initially Red pixel needs to be interpolated from the surrounding Blue and Green pixels.

Unfortunately, this can often cause problems: chromatic aliasing resulting in odd color fringing, roughness on edges, or a loss of detail and sharpness.
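As a toy illustration of the simplest (bilinear) interpolation, the missing green value at a red photosite can be estimated by averaging its four green neighbors. The numbers below are made up purely for demonstration:

```python
# A made-up 4x4 Bayer mosaic (RGGB layout), one measured value per photosite:
mosaic = [
    [10, 60, 12, 62],   # R G R G
    [70, 30, 72, 32],   # G B G B
    [14, 64, 16, 66],   # R G R G
    [74, 34, 76, 36],   # G B G B
]

r, c = 2, 2   # a red photosite: its green value must be interpolated
green = (mosaic[r-1][c] + mosaic[r+1][c] +
         mosaic[r][c-1] + mosaic[r][c+1]) / 4.0
print(green)   # the average of the four green neighbors
```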

Pixel Shift

Pentax‘s Pixel Shift (available on the K-1, K-3 II, KP, and K-70) attempts to alleviate some of these problems with a novel approach: capturing four images in quick succession while moving the entire camera sensor by a single pixel between shots. This has the effect of capturing a full RGB value at each pixel location:

Pixel Shift Example Diagram Pixel Shift shifts the sensor by one pixel in each direction to be able to generate a full set of RGB values at each photosite.

This means a full RGB value for a pixel location can be created without having to interpolate from neighboring values.

Advantages

Less Noise

If you look carefully at the Bayer pattern, you’ll notice that when shifting to adjacent pixels there will always be two green values captured per pixel. The average of these green values helps to suppress noise that may have been interpolated and spread through a normal, single-shot raw file.

Pixel Shift Noise Reduction Example Top: single raw frame, Bottom: Pixel Shift
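The arithmetic behind that claim: averaging two independent noisy samples cuts the noise standard deviation by a factor of √2. A quick simulation with made-up numbers shows the effect:

```python
import random, math

random.seed(1)
true_green, sigma, n = 100.0, 4.0, 20000

def rms_error(samples):
    """Root-mean-square deviation from the true green value."""
    return math.sqrt(sum((s - true_green) ** 2 for s in samples) / len(samples))

# One noisy green sample per site (single-shot raw):
single = [random.gauss(true_green, sigma) for _ in range(n)]
# Pixel Shift: two green samples per site, averaged:
averaged = [(random.gauss(true_green, sigma) + random.gauss(true_green, sigma)) / 2
            for _ in range(n)]

# The averaged samples come out with roughly sigma/sqrt(2) the noise:
print(rms_error(single), rms_error(averaged))
```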

Less Moiré

Avoiding the interpolation of pixel colors from surrounding photosites helps to reduce the appearance of Moiré in the final result:

Pixel Shift Moiré Reduction Example Top: single raw frame, Bottom: Pixel Shift

Increased Resolution

This method is similar in concept to what was previously seen when Olympus announced their “High Resolution” mode for the OMD E-M5mkII camera (or manually as we previously described in this blog post). In that case they combine 8 frames moved by sub-pixel amounts to increase the overall resolution. The difference here is that Olympus generates a single, combined raw file from the results, while Pixel Shift gets you access to each of the four raw files before they’re combined.

In each case, a higher resolution image can be created from the results:

Pixel Shift Increased Resolution Example Top: single raw frame, Bottom: Pixel Shift

Disadvantages

Movement

As with most approaches for capturing multiple images and combining them, a particularly problematic area is when there are objects in motion between the frames being captured. This is a common problem when stitching panoramic photography, when creating image stacks for noise reduction, and when combining images using methods such as Pixel Shift.

Although…

The RawTherapee Approach

Simply combining four static frames together is really trivial, and is something that all the other Pixel Shift-capable software can do without issue. The real world is not often so accommodating as a studio setup, and that is where the recent work done by @Ingo and @Ilias on RawTherapee really begins to shine.

What they’ve been working on in RawTherapee is to improve the detection of movement in a scene. There are several types of movement possible:

  • Objects appearing at different places in the scene, such as fast-moving cars.
  • Partly moving objects, like foliage in the wind.
  • Moving objects reflecting light onto static objects in the scene.
  • Changing illumination conditions, such as long exposures at sunset.

All of these types of movement need to be detected to avoid the artifacts they may cause in the final shot.

One of the key features of Pixel Shift movement detection in RawTherapee is that it allows you to show the movement mask, so you get feedback on which regions of the image are detected as movement and which are static. For the regions with movement RawTherapee will then use the demosaiced frame of your choice to fill it in, and for regions without movement it will use the Pixel Shift combined image with more detail and less noise.

Pixel Shift Movement Mask from RawTherapee Unique to RawTherapee is the option to export the resulting motion mask
(for those that may want to do further blending/processing manually).

The accuracy of movement detection in RawTherapee leads to much better handling of motion artifacts that works well in places where proprietary solutions fall short. For most cases the Automatic motion correction mode works well, but you can also fine tune the parameters in custom mode to correctly detect motion in high ISO shots.

Besides being the only option (barring dcrawps possibly) to process Pixel Shift files in Linux, RawTherapee has some other neat options that aren’t found in other solutions. One of them is the ability to export the actual movement mask separate from the image. This will let users generate separate outputs from RT, and to combine them later using the movement mask. Another option is the ability to choose which of the other frames to use for filling in the movement areas on the image.

Pixel Shift Support in Other Software

Pentax’s own Digital Camera Utility (a rebranded version of SilkyPix) naturally supports Pixel Shift, but as with most vendor-bundled software it can be slow, unwieldy, and a little buggy sometimes. Having said that, the results do look good, and at least the “Motion Correction” is able to be utilized with this software.

Adobe Camera Raw (ACR) got support for Pixel Shift files in version 9.5.1 (but doesn’t utilize the “Motion Correction”). In fact, ACR didn’t have support at the time that DPReview.com looked at the feature last year, causing them to retract the article and re-post when they had a chance to use a version of ACR with support.

A recent look at Pixel Shift processing over at DPReview.com showed some interesting results.

Sample Image Raw The image used in the DPReview article. ©Chris M Williams

We’re going to look at some 100% crops from that article and compare them to the results available using RawTherapee (the latest development version, to be released as 5.1 in April). The RawTherapee versions were set to the most neutral settings with only an exposure adjustment to match other samples better.

Looking first at an area of foliage with motion, the places where there are issues become apparent.

For reference, here is the Adobe Camera Raw (ACR) version of a single frame from a Pixel Shift file:

Pixel Shift Comparison #1

The results with Pixel Shift on, and motion correction on, from straight-out-of-camera (SOOC), Adobe Camera Raw (ACR), SilkyPix, and RawTherapee (RT) are decidedly mixed. In all but the RT version, there’s a very clear problem with effective blending and masking of the frames in areas with motion:

Pixel Shift Comparison #2

Things look much worse for Adobe Camera Raw when looking at high-motion areas like the water spray at the foot of the waterfall, though SilkyPix does a much better job here.

The ACR version of a single frame for reference:

Pixel Shift Comparison #2

Both the SOOC and SilkyPix versions handle all of the movement well here. RawTherapee also does a great job blending the frames despite all of the movement. Adobe Camera Raw is not doing well at all…

Pixel Shift Comparison #2

Finally, let's look at a frame full of movement, such as the surface of the water.

The ACR version of a single frame for reference:

Pixel Shift Comparison #3

In a frame full of movement the SOOC, ACR, and SilkyPix processing all struggle to combine a clean set of frames. They exhibit a pixel pattern from the processing, and the ACR version begins to introduce odd colors:

Pixel Shift Comparison #3

As mentioned earlier, a unique feature of RawTherapee is the ability to show the motion mask. Here is an example of the motion mask for this image:

Pixel Shift Motion Mask The motion mask generated by RawTherapee for the above image.

Also worth mentioning is the “Smooth Transitions” feature in RawTherapee. When there are regions with and without motion, the regions with motion are masked and filled in with data from a demosaiced frame of your choice. The other regions are taken from the Pixel Shift combined image. This can occasionally lead to harsh transitions between the two.

For instance, a transition as processed in SilkyPix:

Pixel Shift Transition SilkyPix

RawTherapee’s “Smooth Transitions” feature does a much better job handling the transition:

Pixel Shift Transition RawTherapee

In Conclusion

In another example of the power and community of Free/Libre and Open Source Software we have a great enhancement to a project based on feedback and input from the users. In this case, it all started with a post on the RawTherapee forums.

Thanks to the hard work of @Ingo and @Ilias, Pentax shooters now have Pixel Shift-capable software that is not only FLOSS but also produces better results than the proprietary solutions!

Not so coincidentally, community member @nosle gave permission to use one of his PS files for everyone to try processing on the Play pixelshift thread. If you’d like to practice, consider heading over to get his file and feedback from others!

Pixel Shift is currently in the development branch of RawTherapee and is slated for release with version 5.1.

March 22, 2017

Industry support for Blender

Blender is a true community effort. It’s an open public project where everyone’s welcome to contribute. In the past year, a growing number of corporations started to contribute to Blender as well.

We’d like to credit the companies who are helping to make Blender 2.8 happen.

Tangent Animation

This animation studio released Ozzy last year, a feature film made entirely with Blender. They currently have two new films in production. The facility has two departments (Toronto, Winnipeg) and is growing to 150 people in 2017. They use Blender exclusively for 3D.

Since October 2016, Tangent has supported two Blender Institute developers full time to work on the 2.8 viewport. They have also hired their own Cycles developer team, who will be contributing openly.

Nimble Collective

Nimble Collective was founded by former Dreamworks animators. Their goal is to give artists access to a complete studio pipeline, accessible online by just using your browser.

Since their launch in 2016, Nimble Collective has invested seriously in integrating Blender into their platform. They currently support one full-time developer position at the Blender Institute, supporting animation tools (dependency graph) and pipelines (Alembic).

AMD

AMD is developing a prominent open source strategy, leading the way with FOSS graphics card drivers and the new open graphics standard Vulkan.
Since the summer of 2016, AMD has supported one developer working on modernizing Blender's OpenGL and another working on Cycles OpenCL (GPU) rendering.

Aleph Objects

Aleph Objects is the manufacturer of the popular Libre Hardware Lulzbot 3D printer.

Starting this year, Aleph Objects will support Blender Institute in hiring two people to work full time on UI and workflow topics for Blender 2.8, with the goal of delivering a release-compatible “Blender 101” plus training material for occasional 3D users.

Development Fund Sponsors

The Blender Development Fund is an essential instrument to keep Blender alive. Blender Foundation uses the Development Fund and donations to support 2-3 full time developer positions. Big and loyal corporate sponsors of the fund are BlenderMarket, Cambridge Medical Robotics, Valve Steam Workshop, Blend4Web, CGCookie, Effetti Digitali, Insydium, Sketchfab, Wube Software, blendFX, Machinimatrix, Pepeland and RenderStreet.

Blender Institute

The studio of Blender Institute receives hardware donations – for example, servers from Intel and Dell, and GPUs from AMD and Nvidia. Blender Institute uses Blender Cloud income, sponsoring and subsidies to support developers and artists working on free/open movies and 3D computer graphics production pipelines. BI currently employs 14 people, including BF chairman Ton Roosendaal.

Interview with Ito Ryou-ichi

Ryou is the amazing artist from Japan who made the Kiki plastic model. Thanks to Tyson Tan, we now have an interview with him!

Can you tell us something about yourself?

I’m Ito Ryou-ichi (Ryou), a Japanese professional modeler and figure sculptor. I work for the model hobby magazine 月刊モデルグラフィックス (Model Graphics Monthly), writing columns, building guides as well as making model samples.

When did you begin making models like this?

Building plastic models has been my hobby since I was a kid. Back then I liked building robot models from anime titles like the Gundam series. When I grew up, I once worked as a manga artist, but the job didn’t work out for me, so I became a modeler/sculptor around my 30s (in the 2000s). That said, I still love drawing pictures and manga!

How do you design them?

Being a former manga artist, I like to articulate my figure design from a manga character design perspective. First I determine the character’s general impression, then collect information like clothing style and other stuff to match that impression. Using those references, I draw several iterations until I feel comfortable with the whole result.

Although I like human and robot characters in general, my favorite has to be kemono (Japanese style furry characters). A niche genre indeed, especially in the modeling scene — you don’t see many of those figures around. But to me, it feels like a challenge in which I can make the best use of my taste and skills.

How do you make the prototypes? And how were they produced?

There are many ways of prototyping a figure. I have been using epoxy putty sculpting most of the time. First I make the figure’s skeleton using metallic wires, then put epoxy putty around the skeleton to make a crude shape for the body. I then use art knives and other tools to do the sculpting work, slowly making all the details according to the design arts. A trusty old “analogue approach” if you will. In contrast, I have been trying the digital approach with ZBrushCore as well. Although I’m still learning, I can now make something like a head out of it.

In the case of Kiki’s figure (and most of my figures), the final product is known as a “Garage Kit” — a box of unassembled, unpainted resin parts. The buyer builds and paints the figure themselves. To turn the prototype into a garage kit, the finished prototype must first be broken down into a few individual parts, making sure they have casting-friendly shapes. Silicone rubber is then used to make molds of those parts. Finally, liquid synthetic resin is injected into the molds, and the parts are removed once the resin has set. This method is called “resin casting”. Although I can cast them at home by myself, I often commission a professional workshop to do it for me. It costs more that way, but they can produce higher quality parts in larger quantities.

How did you learn about Krita?

Some time ago I came across Tyson Tan’s character designs on Pixiv.net and immediately became a big fan of his work. His Kiki pictures caught my attention and I did some research out of curiosity, leading me to Krita. I haven’t yet learned how to use Krita, but I’ll do that eventually.

Why did you decide to make a Kiki statuette?

Ryou: Before making Kiki, I had already collaborated with a few other artists, turning their characters into figures. Tyson has a unique way of mixing the beauty of living beings with futuristic robotic mechanisms that I really liked, so I contacted him on Twitter. I picked a few characters from his creations as candidates, one of them being Kiki. Although something more “glamorous” would have been great too, after some discussion we finally decided to make Kiki.

Tyson: During the discussions, we looked into many of my original characters, some cute, some sexy. We did realize the market prefers figures with glamorous bodies, but we really wanted to make something special. Kiki, being Krita’s mascot, the mascot of a free and open source art program, has one more layer of meaning than “just someone’s OC”. It was very courageous of Ryou to agree to a plan like that, since producing such a figure is very expensive and he would be the one to bear the monetary risk. I really admire his decision.

Where can people order them?

The Kiki figure kit can be ordered from my personal website. I send them worldwide:  http://bmwweb3.nobody.jp/mail2.html

Anything else you want to share with us?

I plan to collaborate with other artists in the future to make more furry figures like Kiki. I will contact the artist if I like their work, but you may also commission me to make a figure for a specific character.

I hope through making this Kiki figure I can connect with more people!

Ryou’s Personal Website: http://bmwweb3.nobody.jp/

 

March 21, 2017

Presentation at Charlottetown UI/UX/Design Meetup

It’s short notice, but if you’re in Charlottetown, I’ll be giving a talk tonight at 7:30pm (March 21, 2017) at the Charlottetown UI/UX/Design Meetup about the history of silverorange and our involvement with the Firefox logo.

Stellarium 0.15.2 has been released

After three months of development, the Stellarium development team is proud to announce the second bugfix release of Stellarium in the 0.15.x series: version 0.15.2. This version contains a few bug fixes (ported from the 1.x series) and some new additions and improvements.

We have updated the configuration file and the Solar System file, so if you have an existing Stellarium installation, we highly recommend resetting the settings when you install the new version (you can choose the relevant options in the installer).

A huge thanks to our community whose contributions help to make Stellarium better!

Full list of changes:
- Added new algorithm for DeltaT from Stephenson, Morrison and Hohenkerk (2016)
- Added use of QOpenGLWidget
- Added new option to InfoString group
- Added orbit visualization data for asteroids
- Added calculation of extincted magnitudes of satellites
- Added new type of Solar system objects: sednoids
- Added classificator of objects into Solar System Editor plugin
- Added albedo for infostring (planets and moons)
- Added some improvements and clean up of code in Search Tool
- Added use ISO 8601 to date formatting in Date and Time Dialog (LP: #1655630)
- Added "Restore direction to initial values" in Oculars plugin (LP: #1656085)
- Added the define for GL_DOUBLE again to restore compilation on ARM.
- Added calculation and show of horizontal and vertical scales of visible field of view of CCD (LP: #1656825)
- Added caching for landscapes, including preloading and some other manipulation via scripting.
- Added binning for CCD in Oculars plugin
- Added textures for DSO
- Added a mean solar day (equals to Earth's day) on the Sun for educational purposes
- Added transmitting map of object info via RemoteControl
- Added int property handling with sliders
- Added a scriptable function to retrieve landscape brightness.
- Added Spout licence.txt to Windows installer script
- Added displaying solstices points (LP: #1670046)
- Added extension of objectInfoMaps per object, most useful for scripting (LP: #1670412)
- Added tentative fix for crash without network (LP: #1667703)
- Added separate storing of view direction/FoV and other settings.
- Added Meade MA12 Astrometric Eyepiece support in Oculars plugin
- Added option to change the prediction depth of Iridium flares (Satellites plugin)
- Added tooltips for AstroCalc features
- Fixed indirect dependency to QtOpenGL by QtMultimediaWidgets (LP: #1656525)
- Fixed text encoding in installer (LP: #1652515)
- Fixed changing value of n.dot in tooltip when ephemeris type is changed (LP: #1652762)
- Fixed mistakes in DeltaT stuff
- Fixed typos in AstroCalc tool
- Fixed visual style for spinup/spindown markers
- Fixed missing cross-id of Epsilon Lyrae (LP: #1653388)
- Fixed updating the list of Solar system bodies in the AstroCalc tool when objects are added or removed
- Fixed calculation of periods for comets on elliptical orbits
- Fixed prediction of Iridium flares (LP: #1643311)
- Fixed saving visibility flag for Bookmarks button (LP: #1654164)
- Fixed refraction for Satellites (LP: #1654331)
- Fixed wrong parallax and distance for IC 59 (LP: #1655423)
- Fixed updating a text in Help window when shortcuts are changed (LP: #1656001)
- Fixed saving flags of visibility of Milky Way and Zodiacal Light (LP: #1656067)
- Fixed memory leaks
- Fixed few reports of Clang static analyzer
- Fixed double clicks causing crashes (LP: #1656525)
- Fixed packaging QtOpenGL in Windows/macOS packages
- Fixed handling of log lines with a missing newline char
- Fixed a bad-value crash in ArchaeoLines plugin
- Fixed an invalid escape sequence in RemoteControl plugin
- Fixed bug in Search Tool (LP: #1655055)
- Fixed taking screenshots (done via FBO; a solution for QOpenGLWidget)
- Fixed the button for the ArchaeoLines plugin
- Fixed calculation and rendering CCD frame in Oculars plugin
- Fixed a memory leak with the spheric mirror distorter, and removed the stencil buffer from the effect FBO (we don't need it for our rendering) (LP: #1661375)
- Fixed tile-based render performance (always glClear all buffers at the start of the frame) (LP: #1661375)
- Fixed glClear alpha channel usage (glClear alpha to zero instead of one)
- Fixed Scenery3d cubemap rendering (restores rendering)
- Fixed crash, when location 'Sierra Nevada Observatory, Spain' is chosen (LP: #1662113)
- Fixed NetBSD and OpenBSD build by linking glues with Qt5::Gui.
- Fixed size for few DSO textures (NPOT textures for ancient GPUs) (LP: #1641773)
- Fixed crash, when missing a stars catalog from the middle of list (e.g. stars4 is missing and we tried zooming) (LP: #1653315)
- Fixed crash for configure color of generic DSO marker (LP: #1667787)
- Fixed date limit in AstroCalc tool (Set a minimum possible date limit for the range of dates for
QDateTimeEdit widgets) (LP: #1660208)
- Fixed escaping of symbols for Simbad Lookup (LP: #1669088)
- Fixed DE431 mismatch (LP: #1606583)
- Fixed overbright Sun when zooming in (LP: #1421173)
- Fixed absolute magnitude calculation of the planets, their moons, and the Pluto-Charon system (LP: #1664143)
- Fixed a long-standing bug concerning centering small-fov views in equatorial mount mode (LP: #1484976)
- Fixed influencing sky luminance/eye adaptation for bright objects covered by the landscape horizon. (LP: #1138533)
- Fixed atmospheric brightening by Earth's moon when location is on other planets (LP: #1673283)
- Fixed application of DE43x DeltaT when date outside range of the selected DE43x.
- Fixed Night mode issue for binocular mode of Oculars Plugin (LP: #1673187)
- Fixed altitude computation for landscapes
- Fixed a small error in that Zodiacal light was aligned with Ecliptic J2000, not Ecliptic of date (LP: #1628765)
- Fixed crash Stellarium in debug mode on OS X with Qt 5.7+, through clear GL error state after using QPainter (LP: #1628072)
- Fixed a problem with Qt timezone handling, when some IANA timezones have been renamed compared to entries in our location database (LP: #1662132)
- Allowed SkyImages in all reference frames and deprecated the old explicit core.loadSkyImageAltAz() type commands
- Updated rules for usage of custom time zones (the custom time zone may now be used at all times) (LP: #1652763)
- Updated shortcuts
- Updated rules for source package builder
- Updated URL of DSS collection
- Updated OS detection
- Updated deployment rules for Windows installer
- Updated script for building Stellarium User Guide
- Updated GUI for set coefficients for custom equation of DeltaT
- Updated list of contributors
- Updated ArchaeoLines and Gridlines options to RemoteControl pages
- Updated Tongan sky culture
- Updated catalog of DSO
- Updated common names of DSO
- Updated star names (LP: #1664671)
- Updated Solar System Screen Saver.
- Updated Oculars plugin
- Updated Satellites plugin (start looking for Iridium flares 15 seconds before the flash)
- Updated plist data for macOS
- Updated textures of minor planets
- Updated default color scheme
- Removed code for automatic tuning star scales of the view through ocular/CCD (LP: #1656940)
- Code clean-up
- Prevent an unnecessary StelProperty change
- Changed the way the OpenGL format is set once again

Animate with Krita is out!

Timothée Giet has finished his latest training course for Krita. In three parts, Timothée introduces the all-new animation feature in Krita. Animation was introduced in Krita 3.0 last year, and is already used by people all over the world, for fun and for real work.

Animation in Krita is meant to recreate the glory days of hand-drawn animation, with a modern twist. It’s not a Flash substitute, but allows you to pair Krita’s awesome drawing capabilities with a frame-based animation approach.

In this training course, Timothée first gives us a tour of the new animation features and panels in Krita. The second part introduces the foundation of traditional animation. The final part takes you through the production of an entire short clip, from sketching to exporting. All necessary production files are included, too!

Animate with Krita is available as a digital download and costs just €14,95 (excluding VAT in the European Union). English and French subtitles are included, as well as all project files.


Get Animate with Krita

March 20, 2017

Blender Constraints

Last time I wrote about artistic constraints being useful to remain focused and be able to push yourself to the max. In the near future I plan to dive into the new constraint-based layout of gtk4, Emeus. Today I’ll briefly touch on another type of constraint, the Blender object constraint!

So what are they and how are they useful in the context of a GNOME designer? We make quite a few prototypes, and one of the things that decides whether a behavior is clear and comprehensible is motion design, particularly transitions. And while we do not use tools directly linked to our stack, it helps to build simple rigs to lower the manual labor required to make sometimes similar motion designs, and to limit the number of mistakes that can be made. Even simple animations usually consist of many keyframes (defined, non-computed states in time). Defining relationships between objects and creating setups, “rigs”, is a way to create a sort of working model of the object we are trying to mock up.

Blender Constraints

Constraints in Blender allow you to define certain behaviors of objects in relation to others. Constraints allow you to limit the movement of an object to specific ranges (a scrollbar not being able to be dragged outside of its gutter), or to convert a certain motion of an object into a different transformation of another (a slider adjusting the horizon of an image, i.e. rotating it).

The simplest method of defining a relation is through a hierarchy. An object can become a parent of another, and thus all children will inherit the movements/transforms of the parent. However there are cases — like interactions of a cursor with other objects — where this relationship is only temporary. Again, constraints help here, in particular the copy location constraint, because you can define the influence strength of a constraint. Like everything in Blender, this influence can also be keyframed, so at some point you can follow the cursor and later disengage this tight relationship. By the way, if you ever thought you could manually keyframe two animations so they do not slide, think again.

Inverse transform in Blender

The GIF screencasts have been created using Peek, which is available to download as a flatpak.

Peek, a GIF screencasting app.

Everyone Does IT (and some Raspberry Pi gotchas)

I've been quiet for a while, partly because I've been busy preparing for a booth at the upcoming Everyone Does IT event at PEEC, organized by LANL.

In addition to booths from quite a few LANL and community groups, they'll show the movie "CODE: Debugging the Gender Gap" in the planetarium. I checked out the movie last week (our library has it) and it's a good overview of the problem of diversity, and especially the problems women face in programming jobs.

I'll be at the Los Alamos Makers/Coder Dojo booth, where we'll be showing an assortment of Raspberry Pi and Arduino based projects. We've asked the Coder Dojo kids to come by and show off some of their projects. I'll have my RPi crittercam there (such as it is) as well as another Pi running motioneyeos, for comparison. (Motioneyeos turned out to be remarkably difficult to install and configure, and doesn't seem to do any better than my lightweight scripts at detecting motion without false positives. But it does offer streaming video, which might be nice for a booth.) I'll also be demonstrating cellular automata and the Game of Life (especially since the CODE movie uses Life as a background in quite a few scenes), music playing in Python, a couple of Arduino-driven NeoPixel LED light strings, and possibly an arm-waving penguin I built a few years ago for GetSET, if I can get it working again: the servos aren't behaving reliably, but I'm not sure yet whether it's a problem with the servos and their wiring or a power supply problem.

The music playing script turned up an interesting Raspberry Pi problem. The Pi has a headphone output, and initially when I plugged a powered speaker into it, the program worked fine. But then later, it didn't. After much debugging, it turned out that the difference was that I'd made myself a user so I could have my normal shell environment. I'd added my user to the audio group and all the other groups the default "pi" user is in, but the Pi's pulseaudio is set up to allow audio only from users root and pi, and it ignores groups. Nobody seems to have found a way around that, but sudo apt-get purge pulseaudio solved the problem nicely.

I also hit a minor snag attempting to upgrade some of my older Raspbian installs: lightdm can't upgrade itself (Errors were encountered while processing: lightdm). Lots of people on the web have hit this, and nobody has found a way around it; the only solution seems to be to abandon the old installation and download a new Raspbian image.

But I think I have all my Raspbian cards installed and working now; pulseaudio is gone, music plays, the Arduino light shows run. Now to play around with servo power supplies and see if I can get my penguin's arms waving again when someone steps in front of him. Should be fun, and I can't wait to see the demos the other booths will have.

If you're in northern New Mexico, come by Everyone Does IT this Tuesday night! It's 5:30-7:30 at PEEC, the Los Alamos Nature Center, and everybody's welcome.

WebKitGTK+ 2.16

The Igalia WebKit team is happy to announce WebKitGTK+ 2.16. This new release drastically improves the memory consumption, adds new API as required by applications, includes new debugging tools, and of course fixes a lot of bugs.

Memory consumption

After WebKitGTK+ 2.14 was released, several Epiphany users started to complain about high memory usage of WebKitGTK+ when Epiphany had a lot of tabs open. As we already explained in a previous post, this was because of the switch to the threaded compositor, which made hardware acceleration always enabled. To fix this, we decided to make hardware acceleration optional again, enabled only when websites require it, but still using the threaded compositor. This is by far the biggest improvement in memory consumption, but not the only one. Even when in accelerated compositing mode, we managed to reduce the memory required by GL contexts when using GLX, by using OpenGL version 3.2 (core profile) if available. In Mesa-based drivers that means the software rasterizer fallback is never required, so the context doesn’t need to create the software rasterization part. And finally, an important bug was fixed in the JavaScript garbage collector timers that prevented garbage collection from happening in some cases.

CSS Grid Layout

Yes, the future is here, now available by default in all WebKitGTK+ based browsers and web applications. This is the result of several years of great work by the Igalia web platform team in collaboration with Bloomberg. If you are interested, you have all the details in Manuel’s blog.

New API

The WebKitGTK+ API is quite complete now, but there are always new things required by our users.

Hardware acceleration policy

Hardware acceleration is now enabled on demand again: when a website requires accelerated compositing, hardware acceleration is enabled automatically. WebKitGTK+ has environment variables to change this behavior: WEBKIT_DISABLE_COMPOSITING_MODE to never enable hardware acceleration and WEBKIT_FORCE_COMPOSITING_MODE to always enable it. However, those variables were never meant to be used by applications, but only by developers to test the different code paths. The main problem with those variables is that they apply to all web views of the application. Not all WebKitGTK+ applications are web browsers, so it can happen that an application knows it will never need hardware acceleration for a particular web view, like for example the Evolution composer, while other applications, especially in the embedded world, always want hardware acceleration enabled and don’t want to waste time and resources on the switch between modes. For those cases a new WebKitSetting, hardware-acceleration-policy, has been added. We encourage everybody to use this setting instead of the environment variables when upgrading to WebKitGTK+ 2.16.

Network proxy settings

Since the switch to WebKit2, where the SoupSession is no longer available from the API, it hasn’t been possible to change the network proxy settings from the API. WebKitGTK+ has always used the default proxy resolver when creating the soup context, and that just works for most of our users. But there are some corner cases in which applications that don’t run under a GNOME environment want to provide their own proxy settings instead of using the proxy environment variables. For those cases WebKitGTK+ 2.16 includes a new UI process API to configure all proxy settings available in GProxyResolver API.

Private browsing

WebKitGTK+ has always had a WebKitSetting to enable or disable private browsing mode, but it has never worked very well. For that reason, applications like Epiphany have always implemented their own private browsing mode by using a different profile directory in tmp to write all persistent data. This approach has several issues; for example, if the UI process crashes, the profile directory is leaked in tmp with all the personal data there. WebKitGTK+ 2.16 adds a new API to create ephemeral web views which never write any persistent data to disk. It’s possible to create ephemeral web views individually, or to create ephemeral web contexts where all web views associated with them are ephemeral automatically.

Website data

WebKitWebsiteDataManager was added in 2.10 to configure the default paths on which website data should be stored for a web context. In WebKitGTK+ 2.16 the API has been expanded to include methods to retrieve and remove the website data stored on the client side. Not only persistent data like HTTP disk cache, cookies or databases, but also non-persistent data like the memory cache and session cookies. This API is already used by Epiphany to implement the new personal data dialog.

Dynamically added forms

Web browsers normally implement the remember-passwords functionality by searching the DOM tree for authentication form fields when the document-loaded signal is emitted. However, some websites add the authentication form fields dynamically after the document has been loaded; in those cases web browsers couldn’t find any form fields to autocomplete. In WebKitGTK+ 2.16 the web extensions API includes a new signal to notify when new forms are added to the DOM. Applications can connect to it, instead of document-loaded, to start searching for authentication form fields.

Custom print settings

The GTK+ print dialog allows the user to add a new tab embedding a custom widget, so that applications can include their own print settings UI. Evolution used to do this, but the functionality was lost with the switch to WebKit2. In WebKitGTK+ 2.16 a similar API to the GTK+ one has been added to recover that functionality in evolution.

Notification improvements

Applications can now set the initial notification permissions on the web context to avoid having to ask the user every time. It’s also possible to get the tag identifier of a WebKitNotification.

Debugging tools

Two new debugging tools are now available in WebKitGTK+ 2.16: the memory sampler and the resource usage overlay.

Memory sampler

This tool allows monitoring the memory consumption of the WebKit processes. It can be enabled by defining the environment variable WEBKIT_SAMPLE_MEMORY. When enabled, the UI process and all web processes automatically take samples of memory usage every second. For every sample a detailed report of the memory used by the process is generated and written to a file in the temp directory.

$ WEBKIT_SAMPLE_MEMORY=1 MiniBrowser 
Started memory sampler for process MiniBrowser 32499; Sampler log file stored at: /tmp/MiniBrowser7ff2246e-406e-4798-bc83-6e525987aace
Started memory sampler for process WebKitWebProces 32512; Sampler log file stored at: /tmp/WebKitWebProces93a10a0f-84bb-4e3c-b257-44528eb8f036

The files contain a list of sample reports like this one:

Timestamp                          1490004807
Total Program Bytes                1960214528
Resident Set Bytes                 84127744
Resident Shared Bytes              68661248
Text Bytes                         4096
Library Bytes                      0
Data + Stack Bytes                 87068672
Dirty Bytes                        0
Fast Malloc In Use                 86466560
Fast Malloc Committed Memory       86466560
JavaScript Heap In Use             0
JavaScript Heap Committed Memory   49152
JavaScript Stack Bytes             2472
JavaScript JIT Bytes               8192
Total Memory In Use                86477224
Total Committed Memory             86526376
System Total Bytes                 16729788416
Available Bytes                    5788946432
Shared Bytes                       1037447168
Buffer Bytes                       844214272
Total Swap Bytes                   1996484608
Available Swap Bytes               1991532544

Resource usage overlay

The resource usage overlay is only available on Linux systems when WebKitGTK+ is built with ENABLE_DEVELOPER_MODE. It shows an overlay with information about the resources currently in use by the web process, like CPU usage, total memory consumption, JavaScript memory, and JavaScript garbage collector timers. The overlay can be shown/hidden by pressing Ctrl+Shift+G.

We plan to add more information to the overlay in the future like memory cache status.

“Hobby harder; it’ll stunt!” RC-car T-shirt design

Backstory

In 2016, RC-car company Arrma released the Outcast, calling it a stunt truck. That label led to some joking around in the UltimateRC forum. One member had trouble getting his Outcast to stunt. Utrak said “The stunt car didn’t stunt do hobby to it, it’ll stunt “. frystomer went: “If it still doesn’t stunt, hobby harder.” and finally stewwdog was like: “I now want a shirt that reads ‘Hobby harder, it’ll stunt’.” He wasn’t alone, so I created a first, very rough sketch.

Process

After a positive response, I decided to make it look like more of a stunt in another sketch:

Meanwhile, talk went to onesies and related practical considerations. Pink was also mentioned, thus I suddenly found myself confronted with a mental image that I just had to get out:

To find the right alignment and perspective, I created a Blender scene with just the text and boxes and cylinders to represent the car. The result served as a template for drawing the actual image in Krita, using my trusty Wacom Intuos tablet.

Result

[“Hobby harder; it’ll stunt!” design on white]

This design is now available for print on T-shirts, other apparel, stickers and a few other things, via Redbubble.


Filed under: Illustration, Planet Ubuntu Tagged: Apparel, Blender, Krita, RC, T-shirt

Revit and FreeCAD

The last couple of weeks I've had the opportunity to do some work in Revit. This was highly welcome, as my knowledge of that application was becoming rather obsolete. Having a fresh, new look at it brought me a lot of new ideas, reinforced some ideas I already had, and changed a couple of old perceptions...

March 16, 2017

FreeCAD Arch development news - March 2017

I'll start adding a date to these "Arch development news"; it will make it easy to look back and motivate me to write them more regularly. A monthly report seems good, I think, no? Hi all, long time I didn't write, but that doesn't mean things have been quiet down here; it's actually more the contrary. To begin...

March 13, 2017

Interview with Sonia Bennett

Could you tell us something about yourself?

Hi I’m Sonia Bennett! Born and brought up in India and now living in Nashville, Tennessee with my husband and 3 year old daughter.

Do you paint professionally, as a hobby artist, or both?

I always loved traditional drawing and painting since childhood but became a graphic designer after college. Right now I am trying to improve my painting skills again =) I’m always open to commissions!

What genre(s) do you work in?

There isn’t a specific genre that I see myself in because exploring and looking at different art styles keeps my mind open to seeing things from different perspectives.

Whose work inspires you most — who are your role models as an artist?

From Vermeer, Fragonard, American and French Impressionism to the artists in the Krita group on Facebook… there are a lot of artists both classical and contemporary that inspire me each day. I come from a very creative family, my father was the first person to inspire me to draw. He used to draw horses for me and I would try to copy them. He is a very talented violinist and can sing a wide vocal range. My mother loved dancing, acted and directed theater, can sew almost anything and is a wonderful cook! They have always inspired and encouraged me to be creative. If it wasn’t for their encouragement, and God opening doors for me to pursue art as a way to glorify Him… I would be stuck doing something painfully uncreative.

How and when did you get to try digital painting for the first time?

Digital painting was a mystery to me until the end of 2015 (yes, I may have lived under a rock until then!). I had seen digital artwork online but didn’t realise HOW exactly it was created. I asked many dumb questions, I’m sure, to finally figure out which graphics tablet I needed to get. When I finally got one as a Christmas present in 2015 and tried it out with Krita for the very first time, I was blown away at how easy it was to paint with Krita. After that, I couldn’t stop painting!

What makes you choose digital over traditional painting?

Honestly, if I had the space and could leave my messy paint equipment undisturbed (impossible with a toddler toddling at high speed) I would keep painting the traditional way, but I enjoy digital painting because I can paint without the mess and drying time for oil paint. And I can save my work and come back the next day and not see suspicious little hand prints on the canvas. =D

How did you find out about Krita?

I used Adobe software for a long time, but it was just for photo and vector layout and design and that’s about it. It wasn’t until I changed computers that I realized the older software didn’t work on my updated computer anymore. And I didn’t want to start ‘renting’ software that wasn’t available to own directly anymore. So I searched for free painting software and somehow landed on David Revoy’s Youtube channel, and it was the best introduction to Krita. I didn’t need to keep looking after that!

What was your first impression?

I think my family may have heard me express my excitement rather loudly several times throughout the first day! I still say “I love Krita!”

What do you love about Krita?

The stabilizer, the different assistants, the brush engine, so many blending modes, the color to alpha filter and the different color selectors are especially cool. Krita is much more advanced this way. I also love the fact that it is made available to everyone for free. Not everyone can afford hundreds of dollars to create art. Artists from all walks of life can build up their portfolios and have a great opportunity to showcase their talent thanks to the wonderful people behind Krita.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I struggle a bit with the lag when I paint large projects, and it would be nice to have a way to save for web versions and a better text tool, but I know with the tremendous advancements that Krita has already made in such a short time, that all these improvements and more will be made… one day Krita will be on every artist’s computer.

What sets Krita apart from the other tools that you use?

It has such a professional feel and look to it. It’s unlike any other free software that exists today. And it’s only getting better. It is extremely user friendly, the intelligent design of the interface makes it so easy to understand and get used to. Right away, when you download it and start painting, you know this has been designed by people who know what artists like.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Even though the original is not mine, the practice painting of Fragonard’s The Reader is my favorite, because it was the first real painting I made in Krita that showed me I could still paint, even after almost a decade of not picking up a brush. I had painted a couple of small paintings before this, but they didn’t really challenge me. Trying to replicate a master’s painting is really good training.

What techniques and brushes did you use in it?

I used David Revoy’s brushes and Ramon Miranda’s brushes and just tried to replicate the smoky, textured feel of the original work.

Where can people see more of your work?

I post my work on several social media sites. My main website is soniabennett.net and I am also on :
https://www.facebook.com/ArtfulButterfly
https://www.artstation.com/artist/soniabennett
https://www.instagram.com/theartfulbutterfly/
http://graphicker.me/application/works/index/5499
I’m also in a Krita group on Facebook that has more than 2000 members (and growing) and has wonderful artists that help and encourage each other (with the occasional joking around!)

Anything else you’d like to share?

I want to thank the people who worked so hard to create Krita and keep making it better and better. Thank you for this opportunity to show my work here and I appreciate all the encouragement and support I have received from my friends and family. I hope my art can encourage more people to paint with Krita and develop their talent and creativity. If there is any way I can contribute to making Krita better, I would be most happy to help!

 

March 10, 2017

At last! A roadrunner!

We live in what seems like wonderful roadrunner territory. For the three years we've lived here, we've hoped to see a roadrunner, and have seen them a few times at neighbors' places, but never in our own yard.

Until this morning. Dave happened to be looking out the window at just the right time, and spotted it in the garden. I grabbed the camera, and we watched it as it came out from behind a bush and went into stalk mode.

[Roadrunner stalking]

And it caught something!

[close-up, Roadrunner with fence lizard] We could see something large in its bill as it triumphantly perched on the edge of the garden wall, before hopping off and making a beeline for a nearby juniper thicket.

It wasn't until I uploaded the photo that I discovered what it had caught: a fence lizard. Our lizards only started to come out of hibernation about a week ago, so the roadrunner picked the perfect time to show up.

I hope our roadrunner decides this is a good place to hang around.

March 09, 2017

Time Claustrophobia

My friend and last-blogger-standing, Peter Rukavina, emailed me this week to ask about a concept he remembered from my blog, but couldn’t find.

He called the concept Time Claustrophobia. I immediately knew what he meant. We must have discussed it in person, because there doesn’t seem to be anything about it here on my blog. Let’s correct that.

Time Claustrophobia is the feeling of anxiety cast by an impending appointment over the free time that precedes it.

For example, when I was younger I sometimes had shift-work that started at 4pm. The entire day, thought free and open until 4pm, felt constrained by the looming commitment near the end of the day.

March 07, 2017

You Can Hear The Difference Between Hot and Cold Water

I’ve always thought I could hear the difference between hot and cold water pouring, but wasn’t sure if I’d be able to prove it in a blind test. Steve Mould confirms that You Can Hear The Difference Between Hot and Cold Water on the great Tom Scott YouTube channel.

March 05, 2017

The Curious Incident of the Junco in the Night-Time

Dave called from an upstairs bedroom. "You'll probably want to see this."

He had gone up after dinner to get something, turned the light on, and been surprised by an agitated junco, chirping and fluttering on the sill outside the window. It evidently was trying to fly through the window and into the room. Occasionally it would flutter backward to the balcony rail, but no further.

There's a piñon tree whose branches extend to within a few feet of the balcony, but the junco ignored the tree and seemed bent on getting inside the room.

As we watched, hoping the bird would calm down, it instead became increasingly desperate and stressed. I remembered how, a few months earlier, I opened the door to a deck at night and surprised a large bird, maybe a dove, that had been roosting there under the eaves. The bird startled and flew off in a panic toward the nearest tree. I had wondered what happened to it -- whether it had managed to find a perch in the thick of a tree in the dark of night. (Unlike San Jose, White Rock gets very dark at night.)

And that thought solved the problem of our agitated junco. "Turn the porch light on", I suggested. Dave flipped a switch, and the porch light over the deck illuminated not only the deck where the junco was, but the nearest branches of the nearby piñon.

Sure enough, now that it could see the branches of the tree, the junco immediately turned around and flew to a safe perch. We turned the porch light back off, and we heard no more from our nocturnal junco.

Using Krita Without a Keyboard

Recently we added a custom hotkey file to Krita to work with a hotkey application called Tablet Pro. Tablet Pro allows you to use your tablet without a keyboard by replacing the keyboard shortcuts with custom onscreen hotkeys. For our Krita users our goal has been to give digital artists the power to create at a professional level without a huge expense. Tablet Pro is working with us on that goal. We were happy to work together on this and are excited to share the results. The hotkeys they provide will give you a very similar experience to a Wacom Cintiq with expresskeys.

In order to try using your tablet with custom touch hotkeys they’ve created a custom “Artist Pad” built to work with Krita keyboard shortcuts. We have also added a profile and hotkey preset into Krita built to align the shortcut settings. You can download it here.

They give a 14 day free trial (to make sure it works on your tablet) and the Artist Pad is $9.99. I’ve talked with one of the owners named Justice and he is happy to aid in setup and help you should you have any questions. His email is justice@tabletpro.net.

Their website is www.tabletpro.net.

March 01, 2017

Tue 2017/Feb/28

  • Porting librsvg's tree of nodes to Rust

    Earlier I wrote about how librsvg exports reference-counted objects from Rust to C. That was the preamble to this post, in which I'll write about how I ported librsvg's tree of nodes from C to Rust.

    There is a lot of talk online about writing recursive data structures in Rust, as there are various approaches to representing the links between your structure's nodes. You can use shared references and lifetimes. You can use an arena allocator and indices or other kinds of identifiers into that arena. You can use reference-counting. Rust really doesn't like C's approach of pointers-to-everything-everywhere; you must be very explicit about who owns what and for how long things live.

    Librsvg keeps an N-ary tree of Node structures, each of which corresponds to an XML element in the SVG file. Nodes can be of various kinds: "normal" shapes that know how to draw themselves like Bézier paths and ellipses; "reference-only" objects like markers, which get referenced from paths to draw arrowheads and the like; "paint server" objects like gradients and fill patterns which also don't draw themselves, but only get referenced for a fill or stroke operation on other shapes; filter objects like Gaussian blurs which get referenced after rendering a sub-group of objects.

    Even though these objects are of different kinds, they have some things in common. They can be nested — a marker contains other shapes or sub-groups of shapes so you can draw multi-part arrowheads, for example; or a filter can be a Gaussian blur of the alpha channel of the referencing shape, followed by a light-source operation, to create an embossed effect on top of a shape. Objects can also be referenced by name in different places: you can declare a gradient and give it a name like #mygradient, and then use it for the fill or stroke of other shapes without having to declare it again.

    Also, all the objects maintain CSS state. This state gets inherited from an object's ancestors in the N-ary tree, and finally gets overridden with whatever specific CSS properties the object has for itself. You could declare a Group (a <g> element) with a black 4 pixel-wide outline, and then put into it a bunch of other shapes, each with a fill of a different color. Those shapes will inherit the black outline, but use their own fill.

    Librsvg's representation of tree nodes, in C

    The old C code had a simple representation of nodes. There is an RsvgNodeType enum which just identifies the type of each node, and an RsvgNode structure for each node in the tree. Each RsvgNode also keeps a small vtable of methods.

    typedef enum {
        RSVG_NODE_TYPE_INVALID = 0,
    
        RSVG_NODE_TYPE_CHARS,
        RSVG_NODE_TYPE_CIRCLE,
        RSVG_NODE_TYPE_CLIP_PATH,
        RSVG_NODE_TYPE_COMPONENT_TRANFER_FUNCTION,
        ...
    } RsvgNodeType;
    
    
    typedef struct {
        void (*free) (RsvgNode *self);
        void (*draw) (RsvgNode *self, RsvgDrawingCtx *draw_ctx, int dominate);
        void (*set_atts) (RsvgNode *self, RsvgHandle *handle, RsvgPropertyBag *pbag);
    } RsvgNodeVtable;
    
    typedef struct _RsvgNode RsvgNode;
    
    struct _RsvgNode {
        RsvgState      *state;
        RsvgNode       *parent;
        GPtrArray      *children;
        RsvgNodeType    type;
        RsvgNodeVtable *vtable;
    };
    	    

    What about memory management? A node keeps an array of pointers to its children, and also a pointer to its parent (which of course can be NULL for the root node). The master RsvgHandle object, which is what the caller gets from the public API, maintains a big array of pointers to all the nodes, in addition to a pointer to the root node of the tree. Nodes are created while reading an SVG file, and they don't get freed until the toplevel RsvgHandle gets freed. So, it is okay to keep shared references to nodes and not worry about memory management within the tree: the RsvgHandle will free all the nodes by itself when it is done.

    Librsvg's representation of tree nodes, in Rust

    In principle I could have done something similar in Rust: have the master handle object keep an array to all the nodes, and make it responsible for their memory management. However, when I started porting that code, I wasn't very familiar with how Rust handles lifetimes and shared references to objects in the heap. The syntax is logical once you understand things, but I didn't back then.

    So, I chose to use reference-counted structures instead. It gives me a little peace of mind that for some time I'll need to keep references from the C code to Rust objects, and I am already comfortable with writing C code that uses reference counting. Once everything is ported to Rust and C code no longer has references to Rust objects, I can probably move away from refcounts into something more efficient.

    I needed a way to hook the existing C implementations of nodes into Rust, so that I could port them gradually: nodes implemented in C and nodes implemented in Rust have to coexist while the porting happens one by one. We'll see how this is done.

    Here is the first approach to the C code above. We have an enum that matches RsvgNodeType from C, a trait that defines methods on nodes, and the Node struct itself.

    /* Keep this in sync with rsvg-private.h:RsvgNodeType */
    #[repr(C)]
    #[derive(Debug, Copy, Clone, PartialEq)]
    pub enum NodeType {
        Invalid = 0,
    
        Chars,
        Circle,
        ClipPath,
        ComponentTransferFunction,
        ...
    }
    
    /* A *const RsvgCNodeImpl is just an opaque pointer to the C code's
     * struct for a particular node type.
     */
    pub enum RsvgCNodeImpl {}
    
    pub type RsvgNode = Rc<Node>;
    
    pub trait NodeTrait: Downcast {
        fn set_atts   (&self, node: &RsvgNode, handle: *const RsvgHandle, pbag: *const RsvgPropertyBag);
        fn draw       (&self, node: &RsvgNode, draw_ctx: *const RsvgDrawingCtx, dominate: i32);
        fn get_c_impl (&self) -> *const RsvgCNodeImpl;
    }
    
    impl_downcast! (NodeTrait);
    
    pub struct Node {
        node_type: NodeType,
        parent:    Option<Weak<Node>>,       // optional; weak ref to parent
        children:  RefCell<Vec<Rc<Node>>>,   // strong references to children
        state:     *mut RsvgState,
        node_impl: Box<NodeTrait>
    }
    	    

    The Node struct is analogous to the old C structure above. The parent field holds an optional weak reference to another node: it's weak to avoid circular reference counts, and it's optional because not all nodes have a parent. The children field is a vector of strong references to nodes; it is wrapped in a RefCell so that I can add children (i.e. mutate the vector) while the rest of the node remains immutable.
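
    To make that ownership pattern concrete, here is a minimal, self-contained sketch of the same idea: a weak parent link, strong Rc references to children inside a RefCell. The names are hypothetical and simplified, and unlike librsvg's Node, the parent here lives in a RefCell so it can be set after construction.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    name: &'static str,
    parent: RefCell<Option<Weak<Node>>>,  // weak, to avoid refcount cycles
    children: RefCell<Vec<Rc<Node>>>,     // strong references to children
}

impl Node {
    fn new(name: &'static str) -> Rc<Node> {
        Rc::new(Node {
            name,
            parent: RefCell::new(None),
            children: RefCell::new(Vec::new()),
        })
    }

    fn add_child(parent: &Rc<Node>, child: &Rc<Node>) {
        // Strong reference from parent to child...
        parent.children.borrow_mut().push(child.clone());
        // ...and a weak one from child back to parent.
        *child.parent.borrow_mut() = Some(Rc::downgrade(parent));
    }

    fn get_parent(&self) -> Option<Rc<Node>> {
        // Upgrade the weak reference back into a strong one.
        self.parent.borrow().as_ref().and_then(|w| w.upgrade())
    }
}

fn main() {
    let root = Node::new("svg");
    let group = Node::new("g");
    Node::add_child(&root, &group);

    assert_eq!(group.get_parent().unwrap().name, "svg");
    assert!(root.get_parent().is_none());
    assert_eq!(Rc::strong_count(&group), 2); // local binding + root's children vec
}
```

    Because the parent link is weak, dropping the root drops the whole subtree; there is no cycle keeping the nodes alive.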

    RsvgState is a C struct that holds the CSS state for nodes. I haven't ported that code to Rust yet, so the state field is just a raw pointer to that C struct.

    Finally, there is a node_impl: Box<NodeTrait> field. This has a reference to a boxed object which implements NodeTrait. In effect, we are separating the "tree stuff" (the basic Node struct) from the "concrete node implementation stuff", and the Node struct has a reference inside it to the node type's concrete implementation.

    The vtable turns into a trait, more or less

    Let's look again at the old C code for a node's vtable:

    typedef struct {
        void (*free) (RsvgNode *self);
        void (*draw) (RsvgNode *self, RsvgDrawingCtx *draw_ctx, int dominate);
        void (*set_atts) (RsvgNode *self, RsvgHandle *handle, RsvgPropertyBag *pbag);
    } RsvgNodeVtable;
    	    

    The free method is responsible for freeing the node itself and all of its inner data; this is common practice in C vtables.

    The draw method gets called, well, to draw a node. It gets passed a drawing context, plus a magic dominate argument which we can ignore for now (it has to do with CSS cascading).

    Finally, the set_atts method is just a helper at construction time: after a node gets allocated, it is initialized from its XML attributes by the set_atts method. The pbag argument is just a dictionary of XML attributes for this node, represented as key-value pairs; the method pulls the key-value pairs out of the pbag and initializes its own fields from the values.

    The NodeTrait in Rust is similar, but has a few differences:

    pub trait NodeTrait: Downcast {
        fn set_atts (&self, node: &RsvgNode, handle: *const RsvgHandle, pbag: *const RsvgPropertyBag);
        fn draw (&self, node: &RsvgNode, draw_ctx: *const RsvgDrawingCtx, dominate: i32);
        fn get_c_impl (&self) -> *const RsvgCNodeImpl;
    }
    	    

    You'll note that there is no free method. Rust objects know how to free themselves and their fields automatically, so we don't need that method anymore. We will need a way to free the C data that corresponds to the C implementations of nodes — those are external resources not managed by Rust, so we need to tell it about them; see below.

    The set_atts and draw methods are similar to the C ones, but they also have an extra node argument. Read on.

    There is a new get_c_impl method. This is a temporary thing to accommodate C implementations of nodes; read on.

    Finally, what about the "NodeTrait: Downcast" in the first line? We'll get to it in the end.

    Separation of concerns?

    So, we have the basic Node struct, which forms the N-ary tree of SVG elements. We also have a NodeTrait with a few methods that nodes must implement. The Node struct has a node_impl field, which holds a reference to an object in the heap which implements NodeTrait.

    I think I saw this pattern in the source code for Servo; I was looking at its representation of the DOM to see how to do an N-ary tree in Rust. I *think* it splits things in the same way; or maybe I'm misremembering and using the pattern from another tree-of-multiple-node-types implementation in Rust.

    How should things look from C?

    This is easy to answer: they should look exactly as they were before the conversion to Rust, or as close to that as possible, since I don't want to change all the C code at once!

    In the post about exposing reference-counted objects from Rust to C, we already saw the new rsvg_node_ref() and rsvg_node_unref() functions, which hand out pointers to boxed Rc<Node>.
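
    As a refresher, the idea behind those functions can be sketched like this. This is a simplified stand-in with hypothetical names, not librsvg's actual code; the real functions operate on the full Node type.

```rust
use std::rc::Rc;

struct Node { /* fields omitted */ }

// Hand a strong reference to C as an opaque pointer: box the Rc itself
// and leak the Box.
fn box_node(node: Rc<Node>) -> *mut Rc<Node> {
    Box::into_raw(Box::new(node))
}

// Mint a new strong reference from a pointer that C already holds,
// in the spirit of rsvg_node_ref().
unsafe fn node_ref(raw: *const Rc<Node>) -> *mut Rc<Node> {
    box_node((*raw).clone())
}

// Drop one strong reference, in the spirit of rsvg_node_unref().
// When the count reaches zero, the Node itself is freed.
unsafe fn node_unref(raw: *mut Rc<Node>) {
    let _ = Box::from_raw(raw); // the boxed Rc is dropped here
}

fn main() {
    let raw1 = box_node(Rc::new(Node {}));
    unsafe {
        let raw2 = node_ref(raw1);
        assert_eq!(Rc::strong_count(&*raw1), 2);
        node_unref(raw2);
        assert_eq!(Rc::strong_count(&*raw1), 1);
        node_unref(raw1); // last reference; the Node is freed
    }
}
```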

    Previously I had made accessor functions for all of RsvgNode's fields, so the C code doesn't touch them directly. There are functions like rsvg_node_get_type(), rsvg_node_get_parent(), rsvg_node_foreach_child(), that the C code already uses. I want to keep them exactly the same, with only the necessary changes. For example, when the C code did not reference-count nodes, the implementation of rsvg_node_get_parent() simply returned the value of the node->parent field. The new implementation returns a strong reference to the parent (upgraded from the node's weak reference to its parent), and the caller is responsible for unref()ing it.

    Rust implementation of Node

    Let's look at two "trivial" methods of Node:

    impl Node {
        ...
    
        pub fn get_type (&self) -> NodeType {
            self.node_type
        }
    
        pub fn get_state (&self) -> *mut RsvgState {
            self.state
        }
    
        ...
    }
    	    

    Nothing special there; just accessor functions for the node's fields. Given that Rust makes those fields immutable in the presence of shared references, I'm not 100% sure that I actually need those accessors. If it turns out that I don't, I'll remove them and let the code access the fields directly.

    Now, the method that adds a child to a Node:

        pub fn add_child (&self, child: &Rc<Node>) {
            self.children.borrow_mut ().push (child.clone ());
        }
    	    

    The children field is a RefCell<Vec<Rc<Node>>>. We ask to borrow it mutably with borrow_mut(), and then push a new item into the array. What we push is a new strong reference to the child, which we get with child.clone(). Think of this as "g_ptr_array_add (self->children, g_object_ref (child))".

    And now, two quirky methods that call into the node_impl:

    impl Node {
        ...
    
        pub fn set_atts (&self, node: &RsvgNode, handle: *const RsvgHandle, pbag: *const RsvgPropertyBag) {
            self.node_impl.set_atts (node, handle, pbag);
        }
    
        pub fn draw (&self, node: &RsvgNode, draw_ctx: *const RsvgDrawingCtx, dominate: i32) {
            self.node_impl.draw (node, draw_ctx, dominate);
        }
    
        ...
    }
    	    

    The &self argument is implicitly a &Node. But we also pass a node: &RsvgNode argument! Remember the type declaration for RsvgNode; it is just "pub type RsvgNode = Rc<Node>". What these prototypes mean is:

        pub fn set_atts (&self: reference to the Node, node: refcounter for the Node, ...) {
            ... call the actual implementation in self.node_impl ...
        }
    	    

    This is because of the following. In objects that implement NodeTrait, the actual implementations of set_atts() and draw() still need to call into C code for a few things. And the only view that the C code has into the Rust world is through pointers to RsvgNode, that is, pointers to the Rc<Node> — the refcounting wrapper for nodes. We need to be able to pass this refcounting wrapper to C from somewhere, but once we are down in the concrete implementations of the trait, we don't have the refcounts anymore. So, we pass them as arguments to the trait's methods.

    This may look strange; at first sight it may look as if you are passing self twice to a method call, but not really! The self argument is implicit in the method call, and the node argument is something rather different: it is the refcounting wrapper for the node, not the node itself. I may be able to remove this strange argument once all the nodes are implemented in Rust and there is no interfacing to C code anymore.
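
    A stripped-down illustration of this calling convention, with hypothetical minimal types (not librsvg's actual code):

```rust
use std::rc::Rc;

trait NodeTrait {
    // The concrete implementation receives both &self and the node's
    // refcounting wrapper.
    fn draw(&self, node: &Rc<Node>);
}

struct Node {
    node_impl: Box<dyn NodeTrait>,
}

impl Node {
    fn draw(&self, node: &Rc<Node>) {
        self.node_impl.draw(node);
    }
}

struct Line;

impl NodeTrait for Line {
    fn draw(&self, node: &Rc<Node>) {
        // With the wrapper in hand we can mint a new strong reference,
        // e.g. to pass across the FFI boundary; &self alone could not.
        let strong = Rc::clone(node);
        assert!(Rc::strong_count(&strong) >= 2);
    }
}

fn main() {
    let node = Rc::new(Node { node_impl: Box::new(Line) });
    // The caller passes the node "twice": once implicitly as &self
    // (auto-deref through the Rc), once explicitly as the Rc wrapper.
    node.draw(&node);
}
```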

    Accommodating C implementations of nodes

    Now we get to the part where the Node and NodeTrait, implemented in Rust, both need to accommodate the existing C implementations of node types.

    Instead of implementing a node type directly in Rust (i.e. implementing NodeTrait for some struct), we will write a Rust wrapper around the existing C node implementations; that wrapper is what implements NodeTrait. Here is the declaration of CNode:

    /* A *const RsvgCNodeImpl is just an opaque pointer to the C code's
     * struct for a particular node type.
     */
    pub enum RsvgCNodeImpl {}
    
    type CNodeSetAtts = unsafe extern "C" fn (node: *const RsvgNode, node_impl: *const RsvgCNodeImpl, handle: *const RsvgHandle, pbag: *const RsvgPropertyBag);
    type CNodeDraw = unsafe extern "C" fn (node: *const RsvgNode, node_impl: *const RsvgCNodeImpl, draw_ctx: *const RsvgDrawingCtx, dominate: i32);
    type CNodeFree = unsafe extern "C" fn (node_impl: *const RsvgCNodeImpl);
    
    struct CNode {
        c_node_impl: *const RsvgCNodeImpl,
    
        set_atts_fn: CNodeSetAtts,
        draw_fn:     CNodeDraw,
        free_fn:     CNodeFree,
    }
    	    

    This struct CNode has essentially a void* to the C struct that will hold a node's data, and three function pointers. These function pointers (set_atts_fn, draw_fn, free_fn) are very similar to the original vtable we had, and that we turned into a trait.

    We implement NodeTrait for this CNode wrapper as follows, by just calling the function pointers to the C functions:

    impl NodeTrait for CNode {
        fn set_atts (&self, node: &RsvgNode, handle: *const RsvgHandle, pbag: *const RsvgPropertyBag) {
            unsafe { (self.set_atts_fn) (node as *const RsvgNode, self.c_node_impl, handle, pbag); }
        }
    
        fn draw (&self, node: &RsvgNode, draw_ctx: *const RsvgDrawingCtx, dominate: i32) {
            unsafe { (self.draw_fn) (node as *const RsvgNode, self.c_node_impl, draw_ctx, dominate); }
        }
    
        ...
    }
    	    

    Maybe this will make it easier to understand why we need that "extra" node argument with the refcount: it is the actual first argument to the C functions, which don't get the luxury of a self parameter.

    And the free_fn()? Who frees the C implementation data? Rust's Drop trait, of course! When Rust decides to free CNode, it will see if it implements Drop. Our implementation thereof calls into the C code to free its own data:

    impl Drop for CNode {
        fn drop (&mut self) {
            unsafe { (self.free_fn) (self.c_node_impl); }
        }
    }
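
    Here is a runnable toy version of that idea, with hypothetical names (not librsvg's CNode): a wrapper whose Drop invokes a C-style free callback, which we can observe through a flag.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static FREED: AtomicBool = AtomicBool::new(false);

// Stand-in for the C side's free function.
extern "C" fn fake_free() {
    FREED.store(true, Ordering::SeqCst);
}

// Wrapper that owns an external resource; Drop runs the C-side destructor,
// the way CNode's drop() calls free_fn on its c_node_impl.
struct CWrapper {
    free_fn: extern "C" fn (),
}

impl Drop for CWrapper {
    fn drop(&mut self) {
        (self.free_fn)();
    }
}

fn main() {
    {
        let _w = CWrapper { free_fn: fake_free };
        assert!(!FREED.load(Ordering::SeqCst)); // not freed yet
    } // _w goes out of scope; drop() runs and calls into "C"
    assert!(FREED.load(Ordering::SeqCst));
}
```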
    	    

    What does it look like in memory?

    Node, CNode, RsvgNodeEllipse

    This is the basic layout. A Node gets created and put on the heap. Its node_impl points to a CNode in the heap (or to any other thing which implements NodeTrait, really). In turn, CNode is our wrapper for C implementations of SVG nodes; its c_node_impl field is a raw pointer to data on the C side — in this example, an RsvgNodeEllipse. We'll see what that one looks like shortly.

    So how does C create a node?

    I'm glad you asked! This is the rsvg_rust_cnode_new() function, which is implemented in Rust but exported to C. The C code uses it when it needs to create a new node.

    #[no_mangle]
    pub extern fn rsvg_rust_cnode_new (node_type:   NodeType,
                                       raw_parent:  *const RsvgNode,
                                       state:       *mut RsvgState,
                                       c_node_impl: *const RsvgCNodeImpl,
                                       set_atts_fn: CNodeSetAtts,
                                       draw_fn:     CNodeDraw,
                                       free_fn:     CNodeFree) -> *const RsvgNode {
        assert! (!state.is_null ());
        assert! (!c_node_impl.is_null ());
    
        let cnode = CNode {                                             // 1
            c_node_impl: c_node_impl,
            set_atts_fn: set_atts_fn,
            draw_fn:     draw_fn,
            free_fn:     free_fn
        };
    
        box_node (Rc::new (Node::new (node_type,                        // 2
                                      node_ptr_to_weak (raw_parent),    // 3
                                      state,
                                      Box::new (cnode))))               // put the CNode in the heap; pass it to the Node
    }
    	    

    1. We create a CNode structure and fill it in from the parameters that got passed to rsvg_rust_cnode_new().

    2. We create a new Node with Node::new(), wrap it with a reference counter with Rc::new(), box that ("put it in the heap") and return a pointer to the box's contents. The boxificator is just this; it's similar to what we used before:

    pub fn box_node (node: RsvgNode) -> *mut RsvgNode {
        Box::into_raw (Box::new (node))
    }
    	    

    3. We create a weak reference to the parent node. Here, raw_parent comes in as a pointer to a strong reference. To obtain a weak reference, we do this:

    pub fn node_ptr_to_weak (raw_parent: *const RsvgNode) -> Option<Weak<Node>> {
        if raw_parent.is_null () {
            None
        } else {
            let p: &RsvgNode = unsafe { & *raw_parent };
            Some (Rc::downgrade (&p.clone ()))               // 5
        }
    }
    	    

    5. Here, we take a strong reference to the parent with p.clone(). Then we turn it into a weak reference with Rc::downgrade(). We stuff that in an Option with the Some() — remember that not all nodes have a parent, and we represent this with an Option<Weak<Node>>.

    Creating a C implementation of a node

    This is the C code for rsvg_new_group(), the function that creates nodes for SVG's <g> element.

    RsvgNode *
    rsvg_new_group (const char *element_name, RsvgNode *parent)
    {
        RsvgNodeGroup *group;
    
        group = g_new0 (RsvgNodeGroup, 1);
    
        /* ... fill in the group struct ... */
    
        return rsvg_rust_cnode_new (RSVG_NODE_TYPE_GROUP,
                                    parent,
                                    rsvg_state_new (),
                                    group,
                                    rsvg_group_set_atts,
                                    rsvg_group_draw,
                                    rsvg_group_free);
    }
    	    

    The resulting RsvgNode*, which from C's viewpoint is an opaque pointer to something on the Rust side — the boxed Rc<Node> — gets stored in a couple of places. It gets put in the toplevel RsvgHandle's array of all-the-nodes. It gets hooked, as a child, to its parent node. It may get referenced in other places as well, for example, in the dictionary of string-to-node for nodes that have an id="name" attribute. All those are strong references created with rsvg_node_ref().

    Getting the implementation structs from C

    Let's look again at the implementation of NodeTrait for CNode. This is one of the methods:

    impl NodeTrait for CNode {
        ...
    
        fn draw (&self, node: &RsvgNode, draw_ctx: *const RsvgDrawingCtx, dominate: i32) {
            unsafe { (self.draw_fn) (node as *const RsvgNode, self.c_node_impl, draw_ctx, dominate); }
        }
    
        ...
    }
    	    

    In self.draw_fn we have a function pointer to a C function. We call it, and we pass the self.c_node_impl. This gives the function access to its own implementation data.

    But what about cases where we need to access an object's data outside of the methods, and so we don't have that c_node_impl argument? If you are cringing because This Is Not Something That Is Done in OOP, well, you are right, but this is old code with impurities. Maybe once it is Rustified thoroughly, I'll have a chance to clean up those inconsistencies. Anyway, here is a helper function that the C code can call to get ahold of its c_node_impl:

    #[no_mangle]
    pub extern fn rsvg_rust_cnode_get_impl (raw_node: *const RsvgNode) -> *const RsvgCNodeImpl {
        assert! (!raw_node.is_null ());
        let node: &RsvgNode = unsafe { & *raw_node };
    
        node.get_c_impl ()
    }
    	    

    That get_c_impl() method is the temporary thing I mentioned above. It's one of the methods in NodeTrait, and of course CNode implements it like this:

    impl NodeTrait for CNode {
        ...
    
        fn get_c_impl (&self) -> *const RsvgCNodeImpl {
            self.c_node_impl
        }
    }
    	    

    You may be thinking that this is an epic hack: not only do we provide a method in the base trait to pull an obscure field from a particular implementation; we also return a raw pointer from it! And a) you are absolutely right, but b) it's a temporary hack, and it is about the easiest way I found to shove the goddamned C implementation around. It will be gone once all the node types are implemented in Rust.

    Now, from C's viewpoint, the return value of that rsvg_rust_cnode_get_impl() is just a void*, which needs to be cast to the actual struct we want. So, the proper way to use this function is first to assert that we have the right type:

    g_assert (rsvg_node_get_type (handle->priv->treebase) == RSVG_NODE_TYPE_SVG);
    RsvgNodeSvg *root = rsvg_rust_cnode_get_impl (handle->priv->treebase);
    	    

    This is no different or more perilous than the usual downcasting one does when faking OOP with C. It's dangerous, sure, but we know how to deal with it.

    Creating a Rust implementation of a node

    Aaaah, the fun part! I am porting the SVG node types one by one to Rust. I'll show you the simple implementation of NodeLine, which is for SVG's <line> element.

    struct NodeLine {
        x1: Cell<RsvgLength>,
        y1: Cell<RsvgLength>,
        x2: Cell<RsvgLength>,
        y2: Cell<RsvgLength>
    }
    	    

    Just x1/y1/x2/y2 fields with our old friend RsvgLength, no problem. They are in Cells to make them mutable after the NodeLine is constructed and referenced all over the place.
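
    The reason for the Cells, in miniature (RsvgLength simplified to a plain f64, and hypothetical helper names): set_atts() only receives a shared &self, so the fields need interior mutability to be set at all.

```rust
use std::cell::Cell;

struct NodeLine {
    x1: Cell<f64>,
    y1: Cell<f64>,
}

// Note the shared reference: without the Cell wrappers this function
// could not mutate the fields.
fn set_atts(line: &NodeLine) {
    line.x1.set(1.0);
    line.y1.set(2.0);
}

fn main() {
    let line = NodeLine { x1: Cell::new(0.0), y1: Cell::new(0.0) };
    set_atts(&line);
    assert_eq!(line.x1.get(), 1.0);
    assert_eq!(line.y1.get(), 2.0);
}
```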

    Now let's look at its three NodeTrait methods.

    impl NodeTrait for NodeLine {
    
        fn set_atts (&self, _: &RsvgNode, _: *const RsvgHandle, pbag: *const RsvgPropertyBag) {
            self.x1.set (property_bag::lookup_length (pbag, "x1", LengthDir::Horizontal));
            self.y1.set (property_bag::lookup_length (pbag, "y1", LengthDir::Vertical));
            self.x2.set (property_bag::lookup_length (pbag, "x2", LengthDir::Horizontal));
            self.y2.set (property_bag::lookup_length (pbag, "y2", LengthDir::Vertical));
        }
    }
    	    

    The set_atts() method just sets the fields of the NodeLine to the corresponding values that it gets in the property bag. This pbag is a dictionary of key-value pairs taken from the XML attributes. That lookup_length() function looks for a specific key, and parses it into an RsvgLength. If the key is not available, or if parsing yields an error, the function returns RsvgLength::default() — the default value for lengths, which is zero pixels. This is where we "bump" into the C code that doesn't know how to propagate parsing errors yet. Internally, the length parser in Rust yields a proper Result value, with error information if parsing fails. Once all the code is on the Rust side of things, I'll start thinking about propagating errors to the toplevel RsvgHandle. For now, librsvg is a very permissive parser/renderer indeed.
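
    The permissive behavior can be sketched like this: a stand-in lookup_length with plain f64 lengths and a slice of key/value pairs as the pbag (the real function involves RsvgLength and LengthDir).

```rust
// Parse an attribute value; fall back to the default when the key is
// missing or the value is malformed, instead of propagating an error.
fn lookup_length(pbag: &[(&str, &str)], key: &str) -> f64 {
    pbag.iter()
        .find(|&&(k, _)| k == key)
        .and_then(|&(_, v)| v.parse::<f64>().ok()) // parse error -> None
        .unwrap_or(0.0) // default: zero pixels
}

fn main() {
    let pbag = [("x1", "10.5"), ("y1", "oops")];
    assert_eq!(lookup_length(&pbag, "x1"), 10.5);
    assert_eq!(lookup_length(&pbag, "y1"), 0.0); // malformed -> default
    assert_eq!(lookup_length(&pbag, "x2"), 0.0); // missing -> default
}
```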

    I'm starting to realize that set_atts() only gets called immediately after a new node is allocated. After that, I *think* that nodes don't mutate their internal fields. So, it may be possible to move the "pull stuff out of the pbag and set the fields" code from set_atts() to the actual constructors, remove the Cell wrappers around all the fields, and thus get immutable objects.

    Maybe it is that I'm actually going through all of librsvg's code, so I get to know it better; or maybe it is that porting it to Rust is actually clarifying my thinking on how the code works and how it ought to work. If all nodes are immutable after creation... and they are a recursive tree... which already does Porter-Duff compositing... which is associative... that sounds very parallelizable, doesn't it? (I would never dare to do it in C. Rust is making it feel quite feasible, without data races.)

    impl NodeTrait for NodeLine {
    
        fn draw (&self, node: &RsvgNode, draw_ctx: *const RsvgDrawingCtx, dominate: i32) {
            let x1 = self.x1.get ().normalize (draw_ctx);
            let y1 = self.y1.get ().normalize (draw_ctx);
            let x2 = self.x2.get ().normalize (draw_ctx);
            let y2 = self.y2.get ().normalize (draw_ctx);
    
            let mut builder = RsvgPathBuilder::new ();
            builder.move_to (x1, y1);
            builder.line_to (x2, y2);
    
            render_path_builder (&builder, draw_ctx, node.get_state (), dominate, true);
        }
    
    }
    	    

    The draw() method normalizes the x1/y1/x2/y2 values to the current viewport from the drawing context. Then it creates a path builder and feeds it commands to draw a line. It calls a helper function, render_path_builder(), which renders it using the appropriate CSS cascaded values, and which adds SVG markers like arrowheads if they were specified in the SVG file.

    Finally, our little hack for the benefit of the C code:

    impl NodeTrait for NodeLine {
    
        fn get_c_impl (&self) -> *const RsvgCNodeImpl {
            unreachable! ();
        }
    
    }
    	    

    In this purely-Rust implementation of NodeLine, there is no c_impl. So, this method asserts that nobody ever calls it. This is to guard myself against C code which may be trying to peek at an impl when there is no longer one.

    Downcasting to concrete types

    Remember rsvg_rust_cnode_get_impl(), which the C code can use to get its implementation data outside of the normal NodeTrait methods?

    Well, since I am doing a mostly straight port from C to Rust, i.e. I am not changing the code's structure yet, just changing languages — it turns out that sometimes the Rust code needs access to the Rust structs outside of the NodeTrait methods as well.

    I am using downcast-rs, a tiny crate that lets one do exactly this: go from a boxed trait object to a concrete type. Librsvg uses it like this:

    use downcast_rs::*;
    
    pub trait NodeTrait: Downcast {
        fn set_atts (...);
        ...
    }
    
    impl_downcast! (NodeTrait);
    
    impl Node {
        ...
    
        pub fn with_impl<T: NodeTrait, F: FnOnce (&T)> (&self, f: F) {
            if let Some (t) = (&self.node_impl).downcast_ref::<T> () {
                f (t);
            } else {
                panic! ("could not downcast");
            }
        }
    
        ...
    }
    	    

    The basic Node has a with_impl() method, which takes a lambda that accepts a concrete type that implements NodeTrait. The lambda gets called with your Rust-side impl, with the right type.

    Where the bloody hell is this used? Let's take markers as an example. Even though SVG markers are implemented as a node, they don't draw themselves: they can get referenced from paths/lines/etc. and there is special machinery to decide just where to draw them.

    So, in the implementation for markers, the draw() method is empty. The code that actually computes a marker's position uses this to render the marker:

    node.with_impl (|marker: &NodeMarker| marker.render (c_node, draw_ctx, xpos, ypos, computed_angle, line_width));
    	    

    That is, it calls node.with_impl() and passes it an anonymous function which calls marker.render() — a special function just for rendering markers.
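
    The downcast pattern itself can be demonstrated without the crate, using std::any::Any, which is what downcast-rs builds on. Hypothetical simplified types, for illustration only:

```rust
use std::any::Any;

trait NodeTrait: Any {
    // Escape hatch from the trait object to a concrete type.
    fn as_any(&self) -> &dyn Any;
}

struct NodeMarker {
    orient: f64,
}

impl NodeTrait for NodeMarker {
    fn as_any(&self) -> &dyn Any { self }
}

struct Node {
    node_impl: Box<dyn NodeTrait>,
}

impl Node {
    fn with_impl<T: NodeTrait, F: FnOnce(&T)>(&self, f: F) {
        if let Some(t) = self.node_impl.as_any().downcast_ref::<T>() {
            f(t);
        } else {
            panic!("could not downcast");
        }
    }
}

fn main() {
    let node = Node { node_impl: Box::new(NodeMarker { orient: 45.0 }) };
    node.with_impl(|marker: &NodeMarker| assert_eq!(marker.orient, 45.0));
}
```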

    This is kind of double plus unclean, yes. I can think of a few solutions:

    • Instead of assuming that every node type is really a NodeTrait, make Node know about element implementations that can draw themselves normally and those that can't. Maybe Node can have an enum to discriminate that, and NodeTrait can lose the draw() method. Maybe there needs to be a Drawable trait which "normal" elements implement, and something else for markers and other elements which are only used by reference?

    • Instead of having a single dictionary of named nodes — those that have an id="name" attribute, which is later used to reference markers, filters, etc. — have separate dictionaries for every reference-able element type. One dict for markers, one dict for patterns, one dict for filters, and so on. The SVG spec already says, for example, that it is an error to reference, as a marker, an element whose id actually belongs to something that is not a marker — and so on for the other reference-able types. After doing a lookup by id in the dictionary-of-all-named-nodes, librsvg checks the type of the found Node. It could avoid doing this check if the dictionaries were separate, since each dictionary would only contain objects of a known type.

    Conclusion

    Apologies for the long post! Here's a summary:

    • Node objects are reference-counted. Librsvg uses them to build up the tree of nodes.

    • There is a NodeTrait for SVG element implementations. It has methods to set an element's attributes, and to draw the element.

    • SVG elements already implemented in Rust have no problems; they just implement NodeTrait and that's it.

    • For the rest of the SVG elements, which are still implemented in C, I have a CNode which implements NodeTrait. The CNode holds an opaque pointer to the implementation data on the C side, and function pointers to the C implementations of NodeTrait.

    • There is some temporary-but-clean hackery to let C code obtain its implementation pointers outside of the normal method implementations.

    • Downcasting to concrete types is a necessary evil, but it is the easiest way to transform this C code into Rust. It may be possible to eliminate it with some refactoring.

    The git history for the conversion of nodes into Rust is interesting, I hope; I've tried to write explanatory comments in the commits. If you want to read this history, start at this commit. Before it there is a lot of C-side refactoring to turn field accesses into accessor functions, for example. In that commit and a few that follow, there are some false starts as I learned how to represent recursive data structures in Rust. Then there is a lot of refactoring to make the existing C implementations all use rsvg_rust_cnode_new(), and to hook them to the Rust-side machinery. Finally, there are commits to actually port complete elements into Rust and eliminate the C implementations. Enjoy!

  • Addendum: how did I do the refactoring above?

    To move the librsvg code towards a state where I could do mostly mechanical transformations, from the old C-based RsvgNode structures into Rust-based Node and the CNode shim, I had to do a lot of refactoring.

    The first thing was to add accessor functions for all of a node's fields: create rsvg_node_get_type(), rsvg_node_get_parent(), etc., instead of accessing node->type and node->parent directly. This is so that, once the basic Node structure was moved to Rust, the C code would have a way to access its fields. I do not want to have parallel structs in C and in Rust with the same memory layouts, although Rust makes it possible to do things that way if you wish.

    I had to change the way in which new nodes are created, when they are parsed out of the SVG file. Instead of a big switch() statement, I parametrized the node creator functions based on the name of the SVG element being created.

    I moved the node destructors around, so that they really were only callable in a single way, instead of differently all over the place.

    Some of the changes caused ripples throughout the code, and I had to review things carefully. For this, I kept a to-do list in a text file. Here is part of that to-do list, with some explanations.

    * TODO Rust node:
    
    ** DONE RsvgNode - remove struct declaration from the C code; leave as
       an opaque type.
    
    ** DONE Audit calls to rsvg_drawing_ctx_acquire_node_of_type() -
       returns opaque RsvgNode*, not a castable thing.
    
    ** DONE Audit all casts; grep for "*)".  It turns out that once
       rsvg_rust_cnode_get_impl() was in place, all the casts disappeared
       from the code!
    
    ** DONE Look for casts to (RsvgNode *) - those are most likely wrong
       now.
    
    ** DONE rsvg_node_free() - remove; replace with unref().
    
    ** TODO pattern_new() - implemented in Rust, takes a RsvgNode argument
       from C - what to do with it?
    
    ** TODO rsvg_pattern_node_has_children() - move to Rust?
    
    ** DONE Audit calls to rsvg_node_get_parent() - unref when done.
    
    ** DONE rsvg_drawing_ctx_free() - unref the nodes in drawsub_stack.
    
    ** DONE rsvg_node_add_child() - use it when creating a new node.
    
    ** DONE Audit "node_a == node_b" as node references can't be compared
       anymore; use rsvg_node_is_same().
    	    

    (You may recognize this format as Emacs org-mode - it's awesome.)

    That to-do list was especially useful when the code was in the middle of being refactored: maybe it would compile, but it would definitely not be correct. I suggest that if you try to do a similar port/refactor, you keep a detailed to-do list of sweeping changes.

February 28, 2017

security things in Linux v4.10

Previously: v4.9.

Here’s a quick summary of some of the interesting security things in last week’s v4.10 release of the Linux kernel:

PAN emulation on arm64

Catalin Marinas introduced ARM64_SW_TTBR0_PAN, which is functionally the arm64 equivalent of arm’s CONFIG_CPU_SW_DOMAIN_PAN. While Privileged eXecute Never (PXN) has been available in ARM hardware for a while now, Privileged Access Never (PAN) will only be available in hardware once vendors start manufacturing ARMv8.1 or later CPUs. Right now, everything is still ARMv8.0, which left a bit of a gap in security flaw mitigations on ARM since CONFIG_CPU_SW_DOMAIN_PAN can only provide PAN coverage on ARMv7 systems, but nothing existed on ARMv8.0. This solves that problem and closes a common exploitation method for arm64 systems.

thread_info relocation on arm64

As done earlier for x86, Mark Rutland has moved thread_info off the kernel stack on arm64. With thread_info no longer on the stack, it’s more difficult for attackers to find it, which makes it harder to subvert the very sensitive addr_limit field.

linked list hardening

I added CONFIG_BUG_ON_DATA_CORRUPTION to restore the original CONFIG_DEBUG_LIST behavior that existed prior to v2.6.27 (9 years ago): if list metadata corruption is detected, the kernel refuses to perform the operation, rather than just WARNing and continuing with the corrupted operation anyway. Since linked list corruption (usually via heap overflows) is a common method for attackers to gain a write-what-where primitive, it’s important to stop the list add/del operation if the metadata is obviously corrupted.
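The shape of such a check can be sketched in plain C. This standalone illustration mirrors the spirit of the kernel's list debugging (validate the neighbors' links before touching anything, and refuse the operation if they are inconsistent), but the names and harness here are illustrative, not kernel code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

/* Validate the neighbors before linking a new entry between prev and next. */
static bool list_add_valid(struct list_head *entry,
                           struct list_head *prev, struct list_head *next)
{
    return next->prev == prev && prev->next == next &&
           entry != prev && entry != next;
}

/* Insert only if the metadata is consistent; otherwise do nothing. */
static bool list_add_checked(struct list_head *entry,
                             struct list_head *prev, struct list_head *next)
{
    if (!list_add_valid(entry, prev, next))
        return false;           /* corrupted metadata: refuse the operation */
    next->prev = entry;
    entry->next = next;
    entry->prev = prev;
    prev->next = entry;
    return true;
}
```

The kernel's version additionally reports the corruption; the point is that with CONFIG_BUG_ON_DATA_CORRUPTION the bad add/del is stopped instead of performed.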

seeding kernel RNG from UEFI

A problem for many architectures is finding a viable source of early boot entropy to initialize the kernel Random Number Generator. For x86, this is mainly solved with the RDRAND instruction. On ARM, however, the solutions continue to be very vendor-specific. As it turns out, UEFI is supposed to hide various vendor-specific things behind a common set of APIs. The EFI_RNG_PROTOCOL call is designed to provide entropy, but it can’t be called when the kernel is running. To get entropy into the kernel, Ard Biesheuvel created a UEFI config table (LINUX_EFI_RANDOM_SEED_TABLE_GUID) that is populated during the UEFI boot stub and fed into the kernel entropy pool during early boot.

arm64 W^X detection

As done earlier for x86, Laura Abbott implemented CONFIG_DEBUG_WX on arm64. Now any dangerous arm64 kernel memory protections will be loudly reported at boot time.

64-bit get_user() zeroing fix on arm

While the fix itself is pretty minor, I like that this bug was found through a combined improvement to the usercopy test code in lib/test_user_copy.c. Hoeun Ryu added zeroing-on-failure testing, and I expanded the get_user()/put_user() tests to include all sizes. Neither improvement alone would have found the ARM bug, but together they uncovered a typo in a corner case.

no-new-privs visible in /proc/$pid/status

This is a tiny change, but I like being able to introspect processes externally. Prior to this, I wasn’t able to trivially answer the question “is that process setting the no-new-privs flag?” To address this, I exposed the flag in /proc/$pid/status, as NoNewPrivs.
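For example, answering that question from the shell is now a one-liner (on v4.10 or later):

```shell
# Is that process setting the no-new-privs flag? Check its status file;
# "self" here means the current shell -- substitute any pid of interest.
grep NoNewPrivs /proc/self/status
```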

That’s all for now! Please let me know if you saw anything else you think needs to be called out. :) I’m already excited about the v4.11 merge window opening…

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

February 27, 2017

Interview with Guilherme Silveira Dias

Could you tell us something about yourself?

I flow between the frugal pal who can still find a creative way of making art even when the limitations get worse; And the bon vivant of me and my father at art supplies stores like they were literally our home sweet home. My current notebook is a shabby 32GB onboard flash-disk that barely allows me to draw in a screen sized canvas with Krita Gemini. Yes, I’m using a touch screen pen because my notebook ignores and treats my Intuos 2 as a mouse. So I’m having a much better experience with my tiny android phone with its drawing apps. One of those apps, Autodesk Sketchbook, under my circumstances now, is my only choice for 2D Digital Painting. Good stuff, bought it and am making comics with it!!! Hahahaha.

Do you paint professionally, as a hobby artist, or both?

I’m an artist drawing professionally since 1994, had my first watercolors exposition in 2001, india ink illustrations for an Irish literature book in 2007, and acrylics paintings expositions between 2008 and 2016.

What genre(s) do you work in?

Among other genres, as in visual storytelling, storyboarding and zines, I work in comedy (though making someone laugh is so hard they could totally mistake it for serious/not kidding.)

Whose work inspires you most — who are your role models as an artist?

The hard work of Peleng, and the so-natural style of Kim Jung Gi, inspire me the most, though I wish Gi was ambidextrous — but the Italian designer Bruno Munari is my role model as an artist. Among many of Munari’s professional values, I’m talking about his separation between purely commercial artifacts and artifacts of complete accessibility.

How and when did you get to try digital painting for the first time?

Worst experience of my entire teenager all-nighters! Heheh. I didn’t know how to ask the right questions of the right people, and because my example of digital art came from a mag filled with some nice and detailed 2D Digital Illustrations, instead of understanding that all those cool DOOM game sprites were what I was supposed to draw, I tried to do the big illustrations, very similar to traditional, that I saw in the digital art magazine I had.

What makes you choose digital over traditional painting?

I don’t, it is binary, I can choose to paint either traditional or digital.

I’ve been working on my art in three very different ways: the first one aims for beauty, perfection, even if impossible to reach. The closest I’ve got was on two pieces, Find the Sacred in Humanity (the featured image) – traditional painting of singer Julie Wein, and a 2D Digital Portrait of a Civilian Baby. The second way is pure entertainment, that’s the one all done on Krita, where I give life to Character Design Model Sheets producing industry material, which then can be adapted and shared in games and movies as long as for non-commercial uses. And the third is Underground Zine Publishing (this one is under the WTFPL, a very permissive license for artistic works that offers a great degree of freedom).

How did you find out about Krita?

Fernando Michelotti told me I would work better on something totally developed artist-centered (instead of any open source image manipulation program or proprietary raster graphics editor).

What was your first impression?

“I know Kung-Fu”

What do you love about Krita?

My first impression only gets stronger.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Improvement: ideally what for more than a decade only IllustStudio (a software that you need to have an address in Japan to be eligible to pay monthly subscription) has, the surface coating feature: http://herionz.deviantart.com/art/Illuststudio-Surface-Coating-tool-tips-290615045, but at least the al.chemy.org Create > Pull Shapes. Annoys me: It’s ok that it isn’t the de facto standard for 2D digital painting on all Linux distros, but that fact annoys me. It is like a childhood without Metroid, which is my case.

I bought other softwares, one for comics and another for vector art. But for concept art I’ve chosen Krita, beyond any other competitor. And if Krita had all the features of http://al.chemy.org/ I wouldn’t need to work on my silhouettes with al.chemy. And if Krita Gemini was an app for android I’d pay any fair price, hell, I’d pay a monthly subscription. I love mobility and tiny A-6 paper books and sketchbooks, so an android phone with krita app would be one of my biggest dreams come true, especially when I get an iPad Pro I’ll probably receive mid 2017.

What sets Krita apart from the other tools that you use?

Let me stretch this a little bit. Imagine a world where almost more than half of the population is way too poor to pay not only for the de facto industry standard in raster graphics editing but for anything. Then picture that somewhere nearby each group of people there is a very slow computer, and in one of the seldom hours the very slow Internet works they download Krita. Okay, we don’t need to go to that level of misery to understand Krita: see the possibility of way more artists, even paid professionals. Simply imagine what happens when people whose only property is lifetime finally control the means of production. But that’s just text, very fantastic or a fantasy, because I don’t know if what I just wrote has any example in our societies. But anything close to what I wrote definitely sets Krita apart from other tools that I use.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?


“Catman” (face closeup). Because I did it all with only one brush, and only kept using two features: the X key to switch foreground and background brush colors, and holding Ctrl + click to sample colors, which makes the Block_brush preset kick ass.

Where can people see more of your work?

http://www.edli.work

Anything else you’d like to share?

I’d like to acknowledge my friend Fernando Michelotti, without whom I might never have heard of Krita, again, like my childhood without Metroid — thx NES emulators for fixing that childhood bug — and acknowledge Fernando for knowing so much about the Krita staff and always saying cool stuff about you guys. The digital world is very warm only when you’re not simply in front of a painting software screen but also feeling connected to a whole community that makes digital painting accessible almost to everyone.


Find us at SCaLE 15x



The Southern California Linux Expo (SCaLE) 15x is returning to the Pasadena Convention Center on March 2-5, 2017. SCaLE is one of the largest community-organized conferences in North America, with some 3,500 attendees last year.

SCaLE Logo

If you’re attending the conference this year, find me, @paperdigits, and let’s talk shop or grab a meal!

You can ping me on the forum, on twitter, or on Matrix/riot.im at @paperdigits:matrix.org. If meeting isn’t enough for you, I’ll have stickers!
Get yourself some stickers!

February 24, 2017

Coder Dojo: Kids Teaching Themselves Programming

We have a terrific new program going on at Los Alamos Makers: a weekly Coder Dojo for kids, 6-7 on Tuesday nights.

Coder Dojo is a worldwide movement, and our local dojo is based on their ideas. Kids work on programming projects to earn colored USB wristbelts, with the requirements for belts getting progressively harder. Volunteer mentors are on hand to help, but we're not lecturing or teaching, just coaching.

Despite not much advertising, word has gotten around and we typically have 5-7 kids on Dojo nights, enough that all the makerspace's Raspberry Pi workstations are filled and we sometimes have to scrounge for more machines for the kids who don't bring their own laptops.

A fun moment early on came when we had a mentor meeting, and Neil, our head organizer (who deserves most of the credit for making this program work so well), looked around and said "One thing that might be good at some point is to get more men involved." Sure enough -- he was the only man in the room! For whatever reason, most of the programmers who have gotten involved have been women. A refreshing change from the usual programming group. (Come to think of it, the PEEC web development team is three women. A girl could get a skewed idea of gender demographics, living here.) The kids who come to program are about 40% girls.

I wondered at the beginning how it would work, with no lectures or formal programs. Would the kids just sit passively, waiting to be spoon fed? How would they get concepts like loops and conditionals and functions without someone actively teaching them?

It wasn't a problem. A few kids have some prior programming practice, and they help the others. Kids as young as 9 with no previous programming experience walk in, sit down at a Raspberry Pi station, and after five minutes of being shown how to bring up a Python console and use Python's turtle graphics module to draw a line and turn a corner, they're happily typing away, experimenting and making Python draw great colorful shapes.

Python-turtle turns out to be a wonderful way for beginners to learn. It's easy to get started, it makes pretty pictures, and yet, since it's Python, it's not just training wheels: kids are using a real programming language from the start, and they can search the web and find lots of helpful examples when they're trying to figure out how to do something new (just like professional programmers do. :-)

Initially we set easy requirements for the first (white) belt: attend for three weeks, learn the names of other Dojo members. We didn't require any actual programming until the second (yellow) belt, which required writing a program with two of three elements: a conditional, a loop, a function.

That plan went out the window at the end of the first evening, when two kids had already fulfilled the yellow belt requirements ... even though they were still two weeks away from the attendance requirement for the white belt. One of them had never programmed before. We've since scrapped the attendance belt, and now the white belt has the conditional/loop/function requirement that used to be the yellow belt.

The program has been going for a bit over three months now. We've awarded lots of white belts and a handful of yellows (three new ones just this week). Although most of the kids are working in Python, there are also several playing music or running LED strips using Arduino/C++, writing games and web pages in Javascript, writing adventure games in Scratch, or just working through Khan Academy lectures.

When someone is ready for a belt, they present their program to everyone in the room and people ask questions about it: what does that line do? Which part of the program does that? How did you figure out that part? Then the mentors review the code over the next week, and they get the belt the following week.

For all but the first belt, helping newer members is a requirement, though I suspect even without that they'd be helping each other. Sit a first-timer next to someone who's typing away at a Python program and watch the magic happen. Sometimes it feels almost superfluous being a mentor. We chat with the kids and each other, work on our own projects, shoulder-surf, and wait for someone to ask for help with harder problems.

Overall, a terrific program, and our only problems now are getting funding for more belts and more workstations as the word spreads and our Dojo nights get more crowded. I've had several adults ask me if there was a comparable program for adults. Maybe some day (I hope).

Fri 2017/Feb/24

  • Griping about parsers and shitty specifications

    The last time, I wrote about converting librsvg's tree of SVG element nodes from C to Rust — basically, the N-ary tree that makes up librsvg's view of an SVG file's structure. Over the past week I've been porting the code that actually implements specific shapes. I ran into the problem of needing to port the little parser that librsvg uses for SVG's list-of-points data type; this is what SVG uses for the points in a polygon or a polyline. In this post, I want to tell you my story of writing Rust parsers, and I also want to vent a bit.

    My history of parsing in Rust

    I've been using hand-written parsers in librsvg, basically learning how to write parsers in Rust as I go. In a rough timeline, this is what I've done:

    1. First was the big-ass parser for Bézier path data, an awkward recursive-descent monster that looks very much like what I would have written in C, and which is in dire need of refactoring (the tests are awesome, though, if you (hint, hint) want to lend a hand).

    2. Then was the simple parser for RsvgLength values, in which the most expedient thing, if not the prettiest, was to reimplement strtod() in Rust. You know how C and/or glib have a wealth of functions to write half-assed parsers quickly? Yeah, that's what I went for here. However, with this I learned that strtod()'s behavior of returning a pointer to the thing-after-the-number can be readily implemented with Rust's string slices.

    3. Then was the little parser for preserveAspectRatio values, which actually uses Rust idioms like using a split_whitespace() iterator and returing a custom error inside a Result.

    4. Lastly, I've implemented a parser for SVG's list-of-points data type. Shapes like rect and circle simply use RsvgLength values, which can have units like percentages or points:

      <rect x="5pt" y="6pt" width="7.5pt" height="80%"/>
      
      <circle cx="25.0" cy="30.0" r="10.0"/>
      		

      In contrast, polygon and polyline elements let you specify lists of points using a simple but quirky grammar that does not use units; the numbers are relative to the object's current coordinate system:

      <polygon points="100,60 120,140 140,60 160,140 180,60 180,100 100,100"/>
      		

    Quirky grammars and shitty specs

    The first inconsistency in the above is that some object types let you specify their positions and sizes with actual units (points, centimeters, ems), while others don't — for these last ones, the coordinates you specify are relative to the object's current transformation. This is not a big deal; I suspect it is either an oversight in the spec, or they didn't want to think of a grammar that would accommodate units like that.

    The second inconsistency is in SVG's quirky grammars. These are equivalent point lists:

    points="100,-60 120,-140 140,60 160,140"
    
    points="100 -60 120 -140 140 60 160 140"
    
    points="100-60,120-140,140,60 160,140"
    
    points="100-60,120, -140,140,60 160,140"
    	    

    Within an (x, y) coordinate pair, the space is optional if the y coordinate is negative. Or you can separate the x and y with a space, with an optional comma.

    But between coordinate pairs, the separator is not optional. You must have any of whitespace or a comma, or both. Even if an x coordinate is negative, it must be separated from its preceding y coordinate.

    But this is not true for path specifications! The following are paths that start with a Move_to command and follow with three Line_to commands, and they are equivalent:

    M 1 2 L -3 -4 5 -6 7 -8
    
    M1,2 L-3 -4, 5,-6,7 -8
    
    M1,2L-3-4, 5-6,7-8
    
    M1 2-3,-4,5-6 7,-8
    
    M1,2-3-4 5-6 7-8
    	    

    Inside (x, y) pairs, you can have whitespace with an optional comma, or nothing if the y is negative. Between coordinate pairs you can have whitespace with an optional comma, or nothing at all! As you wish! And also, since all subpaths start with a single move_to anyway, you can compress a "M point L point point" to just "M point point point" — the line_to is implied.

    I think this is a misguided attempt by the SVG spec to allow compressibility of the path data. But they might have gotten more interesting gains, oh, I don't know, by not using XML and just nesting things with parentheses.


    Also, while the grammar for the point lists in polygon and polyline clearly implies that you must have an even number of floats in your list (for the x/y coordinate pairs), the description of the polygon element says that if you have an odd number of coordinates, the element is "in error" and should be treated the same as an erroneous path description. But the implementation notes for paths say that you should "render a path element up to (but not including) the path command containing the first error in the path data specification", i.e. just ignore the orphan coordinate and somehow indicate an error.

    So which one is it? Should we a) count the floats and ignore the odd one out? Or b) should we follow the grammar and ditch the whole string if it isn't matched by the grammar? Or c) put special handling in the grammar for the last odd coordinate?

    I mean, it's no big deal to actually handle this in the code, but it's annoying to have a spec that can't make up its mind.

    The expediency of half-assed parsers

    How would you parse a string that has an array of floating-point numbers separated by whitespace? In Python you could split() the string and then apply float() to each item, and catch an exception if the value does not parse to float. Or something like that.
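    That quick-and-dirty approach looks something like this sketch (the function name is mine, just for illustration):

```python
# Sketch of the split()-and-float() approach described above: parse a
# whitespace-separated list of floats, or fail on the first bad token.
def parse_number_list(s):
    try:
        return [float(tok) for tok in s.split()]
    except ValueError:
        return None  # any malformed token invalidates the whole list
```

    Of course, SVG's optional commas and "negative sign as separator" quirks are exactly the parts this expedient version ignores.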

    The C code in librsvg has a rather horrible rsvg_css_parse_number_list() function, complete with a "/* TODO: some error checking */" comment, that is implemented in terms of an equally horrible rsvg_css_parse_list() function. This last one is the one that handles the "optional whitespace and comma" shenanigans... with strtok_r(). Between both functions there are like a million temporary copies of the whole string and the little substrings. Or something like that.

    Neither version is fully conformant to the quirks in the SVG spec. I'm replacing all that shit with "proper" parsers written in Rust. And what could be more proper than to actually use a parser-generating library instead of writing them from scratch?

    The score is Nom: 2 - Federico: 1

    This is the second time I have tried to use nom, a library for parser combinators. I like the idea of parser combinators instead of writing a grammar and then generating a parser out of that: supposedly your specification of a parser combinator closely resembles your grammar, but the whole thing is written and embedded in the same language as the rest of your code.

    There is a good number of parsers already written with nom, so clearly it works.

    But nom has been... hard. A few months ago I tried rewriting the path data parser for librsvg, from C to Rust with nom, and it was a total failure. I couldn't even parse a floating-point number out of a string. I ditched it after a week of frustration, and wrote my own parser by hand.

    Last week I tried nom again. Sebastian Dröge was kind enough to jumpstart me with some nom-based code to parse floating-point numbers. I then discovered that the latest nom actually has that out of the box, but it is buggy, and after some head-scratching I was able to propose a fix. Apparently my fix won't work if nom is being used for a streaming parser, but I haven't gotten around to learning those yet.

    I've found nom code to be very hard to write. Nom works by composing macros which then call recognizer functions and other macros. To be honest, I find the compiler errors for that kind of macros quite intractable. And I'm left with not knowing if the problem is in my use of the macros, or in the macro definitions themselves.

    After much, much trial-and-error, and with the vital help of Sebastian Köln from the #nom IRC channel, I have a nom function that can match SVG's comma-whitespace construct and two others that can match coordinate pairs and lists of points for the polygon and polyline elements. This last one is not even complete: it still doesn't handle a list of points that has whitespace around the list itself (SVG allows that, as in "   1 2, 3 4   ").

    While I can start using those little nom-based parsers to actually build implementations of librsvg's objects, I don't want to battle nom anymore. The documentation is very sketchy. The error messages from macros don't make sense to me. I really like the way nom wants to handle things, with an IResult type that holds the parsing state, but it's just too hard for me to even get things to compile with it.

    So what are you gonna use?

    I don't know yet. There are other parser libraries for Rust; pom, lalrpop, and pest among them. I hope they are easier to try out than nom. If not, "when in doubt, use brute force" may apply well here — hand-written parsers in Rust are as cumbersome as in any other language, but at least they are memory-safe, and writing tests for them is a joy in Rust.
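    As a taste of the brute-force option, here is a minimal hand-written sketch for the list-of-points case (illustrative only, not librsvg's actual code; it deliberately ignores the "negative sign as separator" quirk described above, and treats an odd coordinate as an error for the whole list):

```rust
// Parse "100,-60 120,-140 ..." into coordinate pairs. Tokens may be
// separated by whitespace and/or commas; an odd number of floats is an error.
fn parse_points(s: &str) -> Result<Vec<(f64, f64)>, ()> {
    let nums = s
        .split(|c: char| c.is_whitespace() || c == ',')
        .filter(|t| !t.is_empty())
        .map(|t| t.parse::<f64>().map_err(|_| ()))
        .collect::<Result<Vec<f64>, ()>>()?;

    if nums.len() % 2 != 0 {
        return Err(()); // orphan coordinate: ditch the whole list
    }
    Ok(nums.chunks(2).map(|c| (c[0], c[1])).collect())
}
```

    Writing tests for something like this really is pleasant in Rust, which softens the pain of not using a parser library.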

February 22, 2017

A logo for cri-o

Dan Walsh recently asked me if I could come up with a logo for a project he is involved with – cri-o.

The “cri” of cri-o stands for Container Runtime Interface. The CRI is a different project – the CRI is an API between Kubernetes (container orchestration) and various container runtimes. cri-o is a runtime – like rkt or Docker – that can run containers compliant with the OCI (Open Containers Initiative) specification. (Some more info on this is here.)

Dan and Antonio suggested a couple of ideas at the outset:

  • Since the project means to connect to Kubernetes via the CRI, it might be neat to have some kind of nod to Kubernetes. Kubernetes’ logo is a nautical one (the wheel of a ship, with 7 spokes.)
  • If you say cri-o out loud, it kind of sounds like cryo, e.g., icy-cool like Mr. Freeze from Batman!
  • If we want to go for a mascot, a mammoth might be a neat one (from an icy time.)

So I had two initial ideas, riffing off of those:

  1. I tried to think of something nautical and frozen that might relate to Kubernetes in a reasonable way given what cri-o actually does. I kept coming back to icebergs, but they don’t relate to ships’ steering in the same way, and worse, I think they could have a bad connotation whether it’s around stranding polar bears, melting and drowning us all, or the Titanic.
  2. Better idea – not nautical, yet it related to the Kubernetes logo in a way. I was thinking a snowflake might be an interesting representation – it could have 7 spokes like the Kubernetes wheel. It relates a bit in that snowflakes are a composition of a lot of little ice crystals (containers), and kubernetes would place them on the runtime (cri-o) in a formation that made the most sense, forming something beautiful 🙂 (the snowflake.)

I abandoned the iceberg idea and went through a lot of iterations of different snowflake shapes – there are so many ways to make a snowflake! I used the cloning feature in Inkscape to set up the first spoke, then cloned it to the other 6 spokes. I was able to tweak the shapes in the first spoke and have it affect all spokes simultaneously. (It’s a neat trick I should video blog one of these days.)

This is what I came up with:

3 versions of the crio logo with different color treatments - one on a white background, one on a flat blue background, one on a blue gradient background. on the left is a 7-spoke snowflake constructed from thin lines and surrounded by a 7-sided polygon, on the right is the logotype 'cri-o'

I ended up on a pretty simple snowflake – I think it needs to be readable at small sizes, and while you can come up with some beautiful snowflake compositions in Inkscape, it’s easy to make snowflakes that are too elaborate and detailed to work well at a small size. The challenge was clarity at a small size as well as readability as a snowflake. The narrow-line drawing style seems to be pretty popular these days too.

The snowflake shape is encased in a 7-sided polygon (similar to the Kubernetes logo) – my thinking being that the shape and narrowness of the line kind of make it look like the snowflake is encased in ice (along the lines of the initial cryo idea.)

The dark blue color is a nod to the nautical theme; the bright blue highlight color is a nod to the cryo idea.

Completely symbolic, and maybe not in a clear / rational way, but I colored a little piece of each snowflake spoke using a blue highlight color, trying to make it look like those are individual pieces of the snowflake structure (e.g. the crystals == containers idea) getting deployed to create the larger snowflake.

Anyway! That is an idea for the cri-o logo. What do you think? Does it work for you? Do you have other, better ideas?

February 21, 2017

Help Fedora Hubs by taking this survey

Here’s a quick and easy way to help Fedora Hubs!

Our Outreachy intern, Suzanne Hillman, has put together a survey about Fedora contributors’ usage of social media, to help us prioritize potential future integration of various social media platforms with Fedora Hubs. If you’d like your social media hangouts of choice to be considered for integration, please take the survey!

Take the survey now!

February 19, 2017

Stellarium 0.12.8

Stellarium 0.12.8 has been released today!

The 0.12 series is an LTS branch for owners of old computers (old, with weak graphics cards), and this is a bugfix release:
- Added textures for deep-sky objects (ported from the 1.x/0.15 series)
- Fixed sizes of a few DSO textures (LP: #1641773)
- Fixed a problem with defaultStarsConfig data and loading new star catalogs (LP: #1641803)

February 18, 2017

Highlight and remove extraneous whitespace in emacs

I recently got annoyed with all the trailing whitespace I saw in files edited by Windows and Mac users, and in code snippets pasted from sites like StackOverflow. I already had my emacs set up to indent with only spaces:

(setq-default indent-tabs-mode nil)
(setq tabify nil)
and I knew about M-x delete-trailing-whitespace ... but after seeing someone else who had an editor set up to show trailing spaces, and tabs that ought to be spaces, I wanted that too.

To show trailing spaces is easy, but it took me some digging to find a way to control the color emacs used:

;; Highlight trailing whitespace.
(setq-default show-trailing-whitespace t)
(set-face-background 'trailing-whitespace "yellow")

I also wanted to show tabs, since code indented with a mixture of tabs and spaces, especially if it's Python, can cause problems. That was a little harder, but I eventually found it on the EmacsWiki: Show whitespace:

;; Also show tabs.
(defface extra-whitespace-face
  '((t (:background "pale green")))
  "Color for tabs and such.")

(defvar bad-whitespace
  '(("\t" . 'extra-whitespace-face)))

;; Activate the highlighting in the modes you care about, e.g.:
(font-lock-add-keywords 'python-mode bad-whitespace)

While I was figuring this out, I got some useful advice related to emacs faces on the #emacs IRC channel: if you want to know why something is displayed in a particular color, put the cursor on it and type C-u C-x = (the command what-cursor-position with a prefix argument), which displays lots of information about whatever's under the cursor, including its current face.

Once I had my colors set up, I found that a surprising number of files I'd edited with vim had trailing whitespace. I would have expected vim to be better behaved than that! But it turns out that to eliminate trailing whitespace, you have to program it yourself. For instance, here are some recipes to Remove unwanted spaces automatically with vim.

February 17, 2017

Fri 2017/Feb/17

  • How librsvg exports reference-counted objects from Rust to C

    Librsvg maintains a tree of RsvgNode objects; each of these corresponds to an SVG element. An RsvgNode is a node in an n-ary tree; for example, a node for an SVG "group" can have any number of children that represent various shapes. The toplevel element is the root of the tree, and it is the "svg" XML element at the beginning of an SVG file.

    Last December I started to sketch out the Rust code to replace the C implementation of RsvgNode. Today I was able to push a working version to the librsvg repository. This is a major milestone for myself, and this post is a description of that journey.

    Nodes in librsvg in C

    Librsvg used to have a very simple scheme for memory management and its representation of SVG objects. There was a basic RsvgNode structure:

    typedef enum {
        RSVG_NODE_TYPE_INVALID,
        RSVG_NODE_TYPE_CHARS,
        RSVG_NODE_TYPE_CIRCLE,
        RSVG_NODE_TYPE_CLIP_PATH,
        /* ... a bunch of other node types */
    } RsvgNodeType;
    	      
    typedef struct RsvgNode RsvgNode;

    struct RsvgNode {
        RsvgState    *state;
        RsvgNode     *parent;
        GPtrArray    *children;
        RsvgNodeType  type;

        void (*free)     (RsvgNode *self);
        void (*draw)     (RsvgNode *self, RsvgDrawingCtx *ctx, int dominate);
        void (*set_atts) (RsvgNode *self, RsvgHandle *handle, RsvgPropertyBag *pbag);
    };
    	    

    This is a no-frills base struct for SVG objects; it just has the node's parent, its children, its type, the CSS state for the node, and a virtual function table with just three methods. In typical C fashion for derived objects, each concrete object type is declared similar to the following one:

    typedef struct {
        RsvgNode super;
        RsvgLength cx, cy, r;
    } RsvgNodeCircle;

    The user-facing object in librsvg is an RsvgHandle: that is what you get out of the API when you load an SVG file. Internally, the RsvgHandle has a tree of RsvgNode objects — actually, a tree of concrete implementations like the RsvgNodeCircle above or others like RsvgNodeGroup (for groups of objects) or RsvgNodePath (for Bézier paths).

    Also, the RsvgHandle has an all_nodes array, which is a big list of all the RsvgNode objects that it is handling, regardless of their position in the tree. It also has a hash table that maps string IDs to nodes, for when the XML elements in the SVG have an "id" attribute to name them. At various times, the RsvgHandle or the drawing-time machinery may have extra references to nodes within the tree.

    Memory management is simple. Nodes get allocated at loading time, and they never get freed or moved around until the RsvgHandle is destroyed. To free the nodes, the RsvgHandle code just goes through its all_nodes array and calls the node->free() method on each of them. Any references to the nodes that remain in other places will dangle, but since everything is being freed anyway, things are fine. Before the RsvgHandle is freed, the code can copy pointers around with impunity, as it knows that the all_nodes array basically stores the "master" pointers that will need to be freed in the end.

    But Rust doesn't work that way

    Not so, indeed! C lets you copy pointers around all you wish; it lets you modify all the data at any time; and it forces you to do all the memory-management bookkeeping yourself. Rust has simple and strict rules for data access, with profound implications. You may want to read up on ownership (where variable bindings own the value they refer to, there is one and only one binding to a value at any time, and values are deleted when their variable bindings go out of scope), and on references and borrowing (references can't outlive their parent scope; and you can either have one or more immutable references to a resource, or exactly one mutable reference, but not both at the same time). Together, these rules avoid dangling pointers, data races, and the other memory-management problems common in C.
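    Those rules are easier to see in a tiny standalone snippet (generic Rust, not librsvg code):

```rust
fn main() {
    let a = String::from("hello");
    let b = a;                  // ownership moves from `a` to `b`;
                                // using `a` after this line would not compile

    let v = vec![1, 2, 3];
    let r1 = &v;                // any number of immutable borrows may coexist...
    let r2 = &v;
    assert_eq!(r1.len() + r2.len(), 6);

    let mut w = vec![1, 2, 3];
    {
        let m = &mut w;         // ...but only one mutable borrow at a time
        m.push(4);
    }                           // the mutable borrow ends here

    assert_eq!(w.len(), 4);     // so `w` is usable again
    assert_eq!(b.len(), 5);     // `b` now owns the string
}
```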

    So while the C version of librsvg had a sea of carefully-managed shared pointers, the Rust version needs something different.

    And it all started with how to represent the tree of nodes.

    How to represent a tree

    Let's narrow down our view of RsvgNode from C above:

    typedef struct {
        ...
        RsvgNode  *parent;
        GPtrArray *children;
        ...
    } RsvgNode;

    A node has pointers to all its children, and each child has a pointer back to the parent node. This creates a bunch of circular references! We would need a real garbage collector to deal with this, or an ad-hoc manual memory management scheme like librsvg's and its all_nodes array.

    Rust is not garbage-collected and it doesn't let you have shared pointers easily or without unsafe code. Instead, we'll use reference counting. To avoid circular references, which a reference-counting scheme cannot handle, we use strong references from parents to children, and weak references from the children to point back to parent nodes.

    In Rust, you can add reference-counting to your type Foo by using Rc<Foo> (if you need atomic reference-counting for multi-threading, you can use an Arc<Foo>). An Rc represents a strong reference count; conversely, a Weak<Foo> is a weak reference. So, the Rust Node looks like this:

    pub struct Node {
        ...
        parent:    Option<Weak<Node>>,
        children:  RefCell<Vec<Rc<Node>>>,
        ...
    }

    Let's unpack that bit by bit.

    "parent: Option<Weak<Node>>". The Weak<Node> is a weak reference to the parent Node, since a strong reference would create a circular refcount, which is wrong. Also, not all nodes have a parent (i.e. the root node doesn't have a parent), so put the Weak reference inside an Option. In C you would put a NULL pointer in the parent field; Rust doesn't have null references, and instead represents lack-of-something via an Option set to None.

    "children: RefCell<Vec<Rc<Node>>>". The Vec<Rc<Node>> is an array (vector) of strong references to child nodes. Since we want to be able to add children to that array while the rest of the Node structure remains immutable, we wrap the array in a RefCell. This is an object that can hand out a mutable reference to the vector, but only if there is no other mutable reference at the same time (so two places in the code don't have different views of what the vector contains). You may want to read up on interior mutability.
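    That interior mutability can be illustrated with a stripped-down stand-in for Node (not the actual librsvg struct, just the same shape):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A stripped-down stand-in for librsvg's Node: just a name and children.
struct Node {
    name: String,
    children: RefCell<Vec<Rc<Node>>>,
}

fn main() {
    // Note that `group` is not declared `mut`...
    let group = Rc::new(Node {
        name: String::from("g"),
        children: RefCell::new(Vec::new()),
    });

    let circle = Rc::new(Node {
        name: String::from("circle"),
        children: RefCell::new(Vec::new()),
    });

    // ...yet we can still add a child, because the RefCell hands out a
    // mutable borrow of the inner vector at runtime.
    group.children.borrow_mut().push(circle.clone());

    assert_eq!(group.children.borrow().len(), 1);
    assert_eq!(group.children.borrow()[0].name, "circle");
}
```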

    Strong Rc references and Weak refs behave as expected. If you have an Rc<Foo>, you can ask it to downgrade() to a Weak reference. And if you have a Weak<Foo>, you can ask it to upgrade() to a strong Rc, but since this may fail if the Foo has already been freed, that upgrade() returns an Option<Rc<Foo>> — if it is None, then the Foo was freed and you don't get a strong Rc; if it is Some(x), then x is an Rc<Foo>, which is your new strong reference.
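    That round trip works for any payload type; here is a minimal sketch with a String instead of a Node:

```rust
use std::rc::{Rc, Weak};

fn main() {
    let strong: Rc<String> = Rc::new(String::from("hello"));
    let weak: Weak<String> = Rc::downgrade(&strong);

    // While a strong reference is alive, upgrade() yields Some(Rc):
    assert!(weak.upgrade().is_some());

    drop(strong); // the last strong reference goes away, freeing the String

    // Now upgrade() returns None: the value is gone.
    assert!(weak.upgrade().is_none());
}
```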

    Handing out Rust reference-counted objects to C

    In the post about Rust constructors exposed to C, we talked about how a Box is Rust's primitive to put objects in the heap. You can then ask the Box for the pointer to its heap object, and hand out that pointer to C.

    If we want to hand out an Rc to the C code, we therefore need to put our Rc in the heap by boxing it. And going back, we can unbox an Rc and let it fall out of scope in order to free the memory from that box and decrease the reference count on the underlying object.

    First we will define a type alias, so we can write RsvgNode instead of Rc<Node> and make function prototypes closer to the ones in the C code:

    pub type RsvgNode = Rc<Node>;

    Then, a convenience function to box a refcounted Node and extract a pointer to the Rc, which is now in the heap:

    pub fn box_node (node: RsvgNode) -> *mut RsvgNode {
        Box::into_raw (Box::new (node))
    }

    Now we can use that function to implement ref():

    #[no_mangle]
    pub extern fn rsvg_node_ref (raw_node: *mut RsvgNode) -> *mut RsvgNode {
        assert! (!raw_node.is_null ());
        let node: &RsvgNode = unsafe { & *raw_node };
    
        box_node (node.clone ())
    }

    Here, the node.clone () is what increases the reference count. Since that gives us a new Rc, we want to box it again and hand out a new pointer to the C code.

    You may want to read that twice: when we increment the refcount, the C code gets a new pointer! This is like creating a hardlink to a Unix file — it has two different names that point to the same inode. Similarly, our boxed, cloned Rc will have a different heap address than the original one, but both will refer to the same Node in the end.
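    Here is a minimal illustration of that "hardlink" behavior, using an Rc<i32> as a stand-in for a real Node: two boxed clones live at distinct heap addresses, yet refer to the same value.

```rust
use std::rc::Rc;

fn main() {
    let node = Rc::new(42);

    // Boxing two clones of the same Rc yields two distinct heap pointers...
    let p1: *mut Rc<i32> = Box::into_raw(Box::new(node.clone()));
    let p2: *mut Rc<i32> = Box::into_raw(Box::new(node.clone()));
    assert!(p1 != p2);

    unsafe {
        // ...but both Rcs point at the same underlying value, like two
        // hardlinks to one inode.
        assert!(Rc::ptr_eq(&*p1, &*p2));

        // Clean up: re-box and drop, which decrements the refcount.
        drop(Box::from_raw(p1));
        drop(Box::from_raw(p2));
    }
}
```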

    This is the implementation for unref():

    #[no_mangle]
    pub extern fn rsvg_node_unref (raw_node: *mut RsvgNode) -> *mut RsvgNode {
        if !raw_node.is_null () {
            let _ = unsafe { Box::from_raw (raw_node) };
        }
    
        ptr::null_mut () // so the caller can do "node = rsvg_node_unref (node);" and lose access to the node
    }

    This is very similar to the destructor from a few blog posts ago. Since the Box owns the Rc it contains, letting the Box go out of scope frees it, which in turn decreases the refcount in the Rc. However, note that this rsvg_node_unref(), intended to be called from C, always returns a NULL pointer. Together, both functions are to be used like this:

    RsvgNode *node = ...; /* acquire a node from Rust */
    
    RsvgNode *extra_ref = rsvg_node_ref (node);
    
    /* ... operate on the extra ref; now discard it ... */
    
    extra_ref = rsvg_node_unref (extra_ref);
    
    /* Here extra_ref == NULL and therefore you can't use it anymore! */

    This is a bit different from g_object_ref(), which returns the same pointer value as what you feed it. Also, the pointer that you would pass to g_object_unref() remains usable if you didn't take away the last reference... although of course, using it directly after unreffing it is perilous as hell and probably a bug.

    In these functions that you call from C but are implemented in Rust, ref() gives you a different pointer than what you feed it, and unref() gives you back NULL, so you can't use that pointer anymore.

    To ensure that I actually used the values as intended and didn't fuck up the remaining C code, I marked the function prototypes with the G_GNUC_WARN_UNUSED_RESULT attribute. This way gcc will complain if I just call rsvg_node_ref() or rsvg_node_unref() without actually using the return value:

    RsvgNode *rsvg_node_ref (RsvgNode *node) G_GNUC_WARN_UNUSED_RESULT;
    
    RsvgNode *rsvg_node_unref (RsvgNode *node) G_GNUC_WARN_UNUSED_RESULT;

    And this actually saved my butt in three places in the code when I was converting it to reference counting. Twice when I forgot to just use the return values as intended; once when the old code was such that trivially adding refcounting made it use a pointer after unreffing it. Make the compiler watch your back, kids!

    Testing

    One of the things that makes me giddy with joy is how easy it is to write unit tests in Rust. I can write a test for the refcounting machinery above directly in my node.rs file, without needing to use C.

    This is the test for ref and unref:

    #[test]
    fn node_refs_and_unrefs () {
        let node = Rc::new (Node::new (...));
    
        let mut ref1 = box_node (node);                            // "hand out a pointer to C"
    
        let new_node: &mut RsvgNode = unsafe { &mut *ref1 };       // "bring back a pointer from C"
        let weak = Rc::downgrade (new_node);                       // take a weak ref so we can know when the node is freed
    
        let mut ref2 = unsafe { rsvg_node_ref (new_node) };        // first extra reference
        assert! (weak.upgrade ().is_some ());                      // "you still there?"
    
        ref2 = unsafe { rsvg_node_unref (ref2) };                  // drop the extra reference
        assert! (weak.upgrade ().is_some ());                      // "you still have life left in you, right?"
    
        ref1 = unsafe { rsvg_node_unref (ref1) };                  // drop the last reference
        assert! (weak.upgrade ().is_none ());                      // "you are dead, aren't you?"
    }

    And this is the test for two refcounts indeed pointing to the same Node:

    #[test]
    fn reffed_node_is_same_as_original_node () {
        let node = Rc::new (Node::new (...));
    
        let mut ref1 = box_node (node);                         // "hand out a pointer to C"
    
        let mut ref2 = unsafe { rsvg_node_ref (ref1) };         // "C takes an extra reference and gets a new pointer"
    
        unsafe { assert! (rsvg_node_is_same (ref1, ref2)); }    // but they refer to the same thing, correct?
    
        ref1 = rsvg_node_unref (ref1);
        ref2 = rsvg_node_unref (ref2);
    }

    Hold on! Where did that rsvg_node_is_same() come from? Since calling rsvg_node_ref() now gives a different pointer to the original ref, we can no longer just do "some_node == tree_root" to check for equality and implement a special case. We need to do "rsvg_node_is_same (some_node, tree_root)" instead. I'll just point you to the source for this function.
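    For reference, this is a sketch of what such a pointer-identity check could look like, built on Rc::ptr_eq and using an Rc<i32> stand-in for the real Node type; the actual rsvg_node_is_same() in librsvg may differ in its details.

```rust
use std::rc::Rc;

type RsvgNode = Rc<i32>; // stand-in payload; librsvg uses Rc<Node>

// Hypothetical sketch: do two boxed Rcs handed out to C refer to the
// same underlying node?
#[no_mangle]
pub extern "C" fn rsvg_node_is_same(raw_a: *const RsvgNode,
                                    raw_b: *const RsvgNode) -> bool {
    if raw_a.is_null() || raw_b.is_null() {
        return raw_a == raw_b; // two NULLs count as "the same"
    }

    let a: &RsvgNode = unsafe { &*raw_a };
    let b: &RsvgNode = unsafe { &*raw_b };

    Rc::ptr_eq(a, b) // compare heap addresses of the pointed-to value
}

fn main() {
    let node = Rc::new(1);
    let ref1 = Box::into_raw(Box::new(node.clone()));
    let ref2 = Box::into_raw(Box::new(node.clone()));
    let other = Box::into_raw(Box::new(Rc::new(2)));

    assert!(rsvg_node_is_same(ref1, ref2));   // same node, different pointers
    assert!(!rsvg_node_is_same(ref1, other)); // genuinely different nodes

    unsafe {
        drop(Box::from_raw(ref1));
        drop(Box::from_raw(ref2));
        drop(Box::from_raw(other));
    }
}
```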

Today, I:

I’ve gazed enviously at many a productivity scheme. Getting Things Done™, do one thing at a time, use a swimming desk, only use hand-hewn pencils on organic hemp paper, and so on.

I assume most of these techniques and schemes are like diets or exercise routines. There are no silver bullets, but there may be an occasional nugget of truth among the gimmicks and marketing.

Inspired by a post about daily work journals, I have found one tiny little trick that has actually worked for me. It hasn’t transformed my life or quadrupled my productivity. It has made me a touch more aware of how I spend my time.

Every weekday at 4:45pm, I get a gentle reminder from Slack, the chat system we use at work. It looks like this:

The #retrospectives text is a link to a channel in Slack that is available to others to read, but where they won’t be bothered by my updates (unless they opt-in). I click the link and write a quick bullet-list summary of what I have done that day, starting with “Today, I:”. It usually looks something like this:

Screenshot of a daily work log

My first such post was on August 16, 2016. To my surprise, I have stuck with it. As of mid-February, about seven months later, I have posted 134 entries – one for every day I have worked.

What’s the point of writing about what you’ve already done each day? It serves several purposes for me. Most importantly, the ritual reminds me to pause and reflect (very briefly) on what I accomplished that day. This simple act makes me a bit more mindful of how I spend my time and energy. The log also proves useful for any kind of retroactive reporting (When did I start working on project X? How many days in October did I spend on client Y?).

It may also be helpful in 10,000 years, when aliens are trying to reconstruct what daily life was like for a 2000-era web designer.

February 13, 2017

Emacs: Initializing code files with a template

Part of being a programmer is having an urge to automate repetitive tasks.

Every new HTML file I create should include some boilerplate HTML, like <html><head></head><body></body></html>. Every new Python file I create should start with #!/usr/bin/env python, and most of them should end with an if __name__ == "__main__": clause. I get tired of typing all that, especially the dunderscores and slash-greater-thans.

Long ago, I wrote an emacs function called newhtml to insert the boilerplate code:

(defun newhtml ()
  "Insert a template for an empty HTML page"
  (interactive)
  (insert "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\">\n"
          "<html>\n"
          "<head>\n"
          "<title></title>\n"
          "</head>\n\n"
          "<body>\n\n"
          "<h1></h1>\n\n"
          "<p>\n\n"
          "</body>\n"
          "</html>\n")
  (forward-line -11)
  (forward-char 7)
  )

The motion commands at the end move the cursor back to point in between the <title> and </title>, so I'm ready to type the page title. (I should probably have it prompt me, so it can insert the same string in title and h1, which is almost always what I want.)

That has worked for quite a while. But when I decided it was time to write the same function for python:

(defun newpython ()
  "Insert a template for an empty Python script"
  (interactive)
  (insert "#!/usr/bin/env python\n"
          "\n"
          "\n"
          "\n"
          "if __name__ == '__main__':\n"
          "\n"
          )
  (forward-line -4)
  )
... I realized that I wanted to be even more lazy than that. Emacs knows what sort of file it's editing -- it switches to html-mode or python-mode as appropriate. Why not have it insert the template automatically?

My first thought was to have emacs run the function upon loading a file. There's a function with-eval-after-load which supposedly can act based on file suffix, so something like (with-eval-after-load ".py" (newpython)) is documented to work. But I found that it was never called, and couldn't find an example that actually worked.

But then I realized that I have mode hooks for all the programming modes anyway, to set up things like indentation preferences. Inserting some text at the end of the mode hook seems perfectly simple:

(add-hook 'python-mode-hook
          (lambda ()
            (electric-indent-local-mode -1)
            (font-lock-add-keywords nil bad-whitespace)
            (if (= (buffer-size) 0)
                (newpython))
            (message "python hook")
            ))

The (= (buffer-size) 0) test ensures this only happens if I open a new file. Obviously I don't want to be auto-inserting code inside existing programs!

HTML mode was a little more complicated. I edit some files, like blog posts, that use HTML formatting, and hence need html-mode, but they aren't standalone HTML files that need the usual HTML template inserted. For blog posts, I use a different file extension, so I can use the elisp string-suffix-p to test for that:

  ;; string-suffix-p is like Python endswith
  (if (and (= (buffer-size) 0)
           (string-suffix-p ".html" (buffer-file-name)))
      (newhtml) )

I may eventually find other files that don't need the template; if I need to, it's easy to add other tests, like the directory where the new file will live.

A nice timesaver: open a new file and have a template automatically inserted.

February 10, 2017

Accelerated compositing in WebKitGTK+ 2.14.4

The WebKitGTK+ 2.14 release was very exciting for us: it finally introduced the threaded compositor, drastically improving accelerated compositing performance. However, the threaded compositor required accelerated compositing to always be enabled, even for non-accelerated contents. Unfortunately, this caused different kinds of problems for several people, and proved that we are not ready to render everything with OpenGL yet. The most relevant problems reported were:

  • Memory usage increase: OpenGL contexts use a lot of memory, and we have the compositor in the web process, so we have at least one OpenGL context in every web process. The threaded compositor uses the coordinated graphics model, which also requires more memory than the simple mode we previously used. People who use a lot of tabs in epiphany quickly noticed that a lot more memory was required.
  • Startup and resize slowness: The threaded compositor makes everything smooth and performs quite well, except at startup or when the view is resized. At startup we need to create the OpenGL context, which is quite slow by itself, but we also need to create the compositing thread, so things are expected to be slower. Resizing the viewport is the only threaded compositor task that needs to be done synchronously, to ensure that everything is in sync: the web view in the UI process, the OpenGL viewport and the backing store surface. This means we need to wait until the threaded compositor has updated to the new size.
  • Rendering issues: some people reported rendering artifacts or even nothing rendered at all. In most of the cases these were not issues in WebKit itself, but in the graphics driver or library. It’s quite difficult for a general-purpose web engine to support and deal with all possible GPUs, drivers and libraries. Chromium has a huge list of hardware exceptions to disable some OpenGL extensions or even hardware acceleration entirely.

Because of these issues people started to use different workarounds. Some people, and even applications like evolution, started to use the WEBKIT_DISABLE_COMPOSITING_MODE environment variable, which was never meant for users, but for developers. Other people just started to build their own WebKitGTK+ with the threaded compositor disabled. We didn’t remove the build option because we anticipated some people using old hardware might have problems. However, it’s a code path that is not tested at all and it will be removed for sure in 2.18.

None of these issues is really specific to the threaded compositor, but rather to the fact that it forced accelerated compositing mode to be always enabled, using OpenGL unconditionally. It looked like a good idea: entering/leaving accelerated compositing mode was a source of bugs in the past, and all other WebKit ports force accelerated compositing mode too. Other ports use UI-side compositing, though, or target very specific hardware, so the memory problems and the driver issues are not a problem for them. The requirement to force accelerated compositing mode came from the switch to coordinated graphics, because, as I said, other ports using coordinated graphics have accelerated compositing mode always enabled, so they didn’t care about the case of it being disabled.

There are a lot of long-term things we can do to improve all these issues, like moving the compositor to the UI (or a dedicated GPU) process to have a single GL context, implementing tab suspension, etc., but we really wanted to fix, or at least improve, the situation for 2.14 users. Switching back to using accelerated compositing mode on demand is something we could do in the stable branch, and it would improve things at least to a level comparable to what we had before 2.14, but with the threaded compositor. Making it happen was a matter of fixing a lot of bugs, and the result is this 2.14.4 release. Of course, this will be the default in 2.16 too, where we have also added API to set a hardware acceleration policy.

We recommend all 2.14 users to upgrade to 2.14.4 and stop using the WEBKIT_DISABLE_COMPOSITING_MODE environment variable or building with the threaded compositor disabled. The new API in 2.16 will allow setting a policy for every web view, so if you still need to disable or force hardware acceleration, please use the API instead of WEBKIT_DISABLE_COMPOSITING_MODE and WEBKIT_FORCE_COMPOSITING_MODE.

We really hope this new release and the upcoming 2.16 will work much better for everybody.

From the Community Vol. 2

Welcome to the second installment of From the Community, a (hopefully) quarterly-ish blog post to highlight a few of the things our community members have been doing!

Improving grain simulation

@arctic has posted some research about how to better simulate grain in our digital images, and the ensuing conversation is both fascinating and way above my head! The discussion is thus far raw-processor independent, and more input and code are welcome!

Examples of grain from raw processing programs

A tutorial on RGB color mixing

We’ve somewhat recently welcomed the painters into the fold on the pixls’ forum and @Elle rewarded us all with a tutorial on RGB color mixing. She delves into subjects such as mixing color pigments like a traditional painter and how to handle that in the digital darkroom. You can read the whole article here.

Working to support Pentax Pixel Shift files in RawTherapee

There has been a lot of ongoing work to bring support for Pentax Pixel Shift files to RawTherapee; the thread has now reached 234 posts, and it is inspiring to see the community and developers coming together to bring support for an interesting technology. The feature set has been evolving pretty rapidly, and it will be exciting when it makes it to a stable release.

An example pixel shift file

MIDI controller support for darktable

Some preliminary work has begun to bring generic MIDI controller support to darktable. The MIDI controller that is spurring development of this feature was funded as a direct result of forum members giving to further community causes. Once the darktable developers are finished with the MIDI controller, it’ll be offered to other developers to help implement support!

A Korg MIDI controller

Methods for dealing with clipped highlights

@Morgan_Hardwood has written a very nice post detailing several methods for dealing with clipped highlights in RawTherapee. These include tone-mapping, highlights and shadows, and using the CIECAM02 mode.

Working with clipped highlights

February 08, 2017

Helping new users get on IRC, Part 2

Fedora Hubs

Where our story began…

You may first want to check out the first part of this blog post, Helping new users get on IRC. We’ll wait for you here. 🙂

A simpler way to choose a nick

(Relevant ticket: https://pagure.io/fedora-hubs/issue/283)

So Sayan kindly reviewed the ticket with the irc registration mockups in it and had some points of feedback about the nick selection process (original mockup shown below:)

Critical Feedback on Original Mockup

  • The layout of a grid of nicks to choose from invited the user to click on all of them, even if that wasn’t in their best interest. It drew their attention to a multiplicity of choices rather than focusing them on one they could use to move forward.
  • If the user clicked even on just one nick, they would have to wait for us to check if it was available. If they clicked on multiple, it could take a long time to get through the dialog. They might give up and not register. (We want them to join and chat, though!)
  • To make it clear which nick they wanted to register, we had the user click on a “Register” button next to every available nick. This meant, necessarily, that the button wasn’t in the lower right corner, the obvious place to look to continue. Users might be confused as to the correct next step.
  • Overall, the screen is more cluttered than it could be.

mockup showing 9 up nick suggestion display

We thought through a couple of alternatives that would meet the goals I had with the initial design, yet still address Sayan’s concerns listed above. Those goals are:

Mo’s goals for the mockup

  • Provide the user clues as to the standard format of the nicknames (providing an acceptable example can do this.)
  • Give the user ideas in case they just can’t think of any nickname (generating suggestions based on heuristics can help.)
  • Make it very clear which nickname the user is going to continue with and register (in the first mockup, this was achieved by having the register button right next to each nick.)
  • Make it clear to the user that we needed to check if the nick was available after they came up with one. This is important because many websites do this as you type – we can’t because our availability check is much more expensive (parsing /info in irc!)

New solution

We decided to instead make the screen a simple form field for nick with a button to check availability, as well as a button the user could optionally click on to generate suggested nicks that would populate the field for them. Based on whether or not an available nick was populated in the field, the “Register” button in the lower right corner would be greyed out or active.

Initial view

Nickname field is blank.

mockup of screen with a single form field for nickname input

Nickname suggestion

If you click on the suggest button, it’ll fill the form field with a suggestion for you:

mockup of screen showing the suggest nickname button populating the nickname field

Checking nick availability

Click on the “Check availability” button, and it’ll fade out and a spinner will appear letting you know that we’re checking whether or not the nick is available (in the backend, querying Freenode nickserv or doing a /info on the nick.)

mockup showing nickname availability checking spinner

Nickname not available

If the nickname’s not available, we let you know. Edit the form field or click on the suggest button to try again and have the “Check availability” button appear.

mockup showing a not available message if a nickname fails the availability check

Nickname available

Hey, your nick’s not taken! The “Register” button in the lower right lights up and you’re ready to move forward if you want.

mockup showing the register button activating when the user's input nickname is in fact available

Taking a verification code instead

I learned a lesson I already knew – I should have known better but didn’t! 🙂 I assumed that when you provide your email address to freenode, the email they send back has a link to click on to verify your email. I knew I should go through the process myself to be sure what the email said, what it looked like, etc., but I didn’t before I designed the original screen based on a faulty assumption. Here is the faulty assumption screen:

Original version of the email confirmation mockup which tells users to click a link in the email that doesn't exist.

I knew I should go through the process to verify some things about my understanding of it, though (hey, it’s been years and years since I registered my own nick and email with freenode NickServ.) I got around to it, and here’s what I got (with some details redacted for privacy, irc nick is $IRCNICK below:)

From "freenode" <noreply.support@freenode.net>
To "$IRCNICK" <email@example.com>
Date Mon, 06 Feb 2017 19:35:35 +0000
Subject freenode Account Registration

$IRCNICK,

In order to complete your account registration, you must type the following
command on IRC:

/msg NickServ VERIFY REGISTER $IRCNICK qbgcldnjztbn

Thank you for registering your account on the freenode IRC network!

Whoopsie! You’ll note the email has no link to click. See! Assumptions that have not been verified == bad! Burned Mo, burned!

So here’s what it looks like now. I added a field for the user to provide the verification code, as well as some hints to help them identify the code from the email. In the process, I also cut down the original text significantly, since there is a lot more that has to go on the screen now. I should have cut the text down even without this excuse (the more text, the less of it gets read):

new mockp for email verification showing a field for entering the verification code

 

I still need to write up the error cases here – what happens if the verification code gets rejected by NickServ or if it’s too long, has invalid characters, etc.

Handling edge cases

(Relevant ticket: https://pagure.io/fedora-hubs/issue/318)

Both in Twitter comments and IRC messages you sent me after I blogged the first design, I realized I needed to gather some data about the nickname registration process on Freenode. (Thank you!) I was super concerned about the fragility of the approach of slapping a UI on top of a process we don’t own or control. For example, the email verification email freenode sends that a user must act on within 24 hours to keep their nick – we can’t re-send that mail, so how do we handle this in a way that doesn’t break?

Even though I was a little intimidated (I forget that freenode isn’t like EFnet,) I popped into the #freenode admin channel and asked a bunch of questions to clear things up. The admins are super-helpful and nice, and they cleared everything up. I learned a few things:

  • A user is considered active or not based on the amount of time that has passed since they have been authenticated / identified with freenode NickServ.
  • After the user initially registers with NickServ, they are sent an email from “freenode Account Registration” <noreply.support@freenode.net> with a message that contains a verification code they need to send to NickServ to verify their email address.
  • If you don’t receive the email from freenode, you can drop the nick, take it again and try again with another email address.
  • While there is no formal freenode privacy policy for the email address collection, they confirmed they are only used for password recovery purposes and are not transmitted to third parties for any purpose.
  • If a nickname hasn’t been actively identified with NickServ for 10 weeks, it is considered “expired.” Identifying to NickServ with that nick and password will still work indefinitely until either the DB is purged (a regular maintenance task) or if another user requested the nick and took it over:
    • The DB purges don’t happen right away, so an expired nick won’t be removed on day 1 of week 11, but it’s vulnerable to purge from that point forward. Purges happen a couple times a year or so.
    • If another user wants to take over an expired nick that has not been purged, they can message an admin to have the nick freed for them to claim.

This was mostly good news, because being identified to NickServ means you’re active. Since we have an IRC bouncer (ircb) under the covers keeping users identified, the likelihood of their sitting around inactive for 10 weeks is far less. The possibility that they actually lose their nick is limited to an opportunist requesting it and taking it over or bad timing with a DB purge. This is a mercifully narrow case.

So here are the edge cases we have to worry about from this:

Lost nickname

These cases result in the user needing to register a new nickname.

  • User hasn’t logged into Hubs for > 10 weeks, and circumstances (netsplit?) kept them from being identified. Their nick was taken by another user.
  • User didn’t verify email address, and their nick was taken by another user.

Need to re-register nickname

These cases result in the user needing to re-register their nickname.

  • User hasn’t logged into Hubs for > 10 weeks, circumstances kept them from being identified. Their nick was purged from the DB but is still available.
  • User didn’t verify email address, and their nick was purged from DB but is still available.

Handling lost nicks

If we can’t authenticate the user and their nickname, we’ll disable chat hubs-wide. IRC widgets on hubs will appear like so, to prompt the user to re-register:

IRC widget with re-register nag

If the user happens to visit the user settings panel, they’ll also see a prompt to re-register with the IRC feature disabled:

mockup of the user settings panel showing irc disabled with a nag to re-register nickname

Re-registration process

The registration process should appear the same as in the initial user registration flow, with a message at the top indicating which state the user is in (if their nick was db purged and is now available, let them know so they can re-register the same nick; if someone else grabbed it, let them know so they know to make a new one.) Here’s what this will look like:

nickname registration screen with a message showing the expired nickname is still available

 

The cases I forgot

What if the user already had a registered nickname before Hubs?  (This will be the vast majority of Hubs users when we launch!) I kind of thought about this, and assumed we’d slurp the user’s nickname in from FAS and prompt them for their password at some point, and forgot about it until Sayan mentioned it in our meeting this morning. There are actually two cases here:

  • User has a nickname registered with Nickserv already that they’ve provided to FAS. We need to provide them a way to enter in their password so we can authenticate them using Hubs.
  • User has a nickname registered with Nickserv already that is not in FAS. We need to let them provide their nickname/password so we can authenticate them.

I haven’t mocked this up yet. Next time! 🙂

Initial set of mockups for this.

Feedback Welcome!

So there you have it in today’s installment of Hubs’ IRC feature – a pretty major iteration on a design based on feedback, some research into how freenode handles registration, some mistakes (not verifying the email registration process first-hand, forgetting some cases), and additional mockups.

Keep the feedback coming – as you can see here, it’s super-helpful and gets applied directly to the design. 🙂

Made with Krita 2016: The Artbooks Have Arrived!

Made With Krita 2016 is now available! This morning the printer delivered 250 copies of the first book filled with art created in Krita by great artists from all around the world. We immediately set to work to send out all pre-orders, including the ones that were a kickstarter reward.

The books themselves are gorgeous. The artwork is great and varied, of course, but the printer did a good job on the colors, too — helped by the excellent way the open source desktop publishing application Scribus prepares PDFs for printing. The picture doesn’t do it justice, since it was made with an old phone…

Forty artists from all over the world, working in all kinds of styles and on all kinds of subjects show how Krita is used in the real world to create amazing and engaging art. The book also contains a biographical section with information about each individual artist. Get your rare first edition now, an essential addition to every self-respecting bookshelf! The book is professionally printed on 130 grams paper and softcover bound in signatures.

The cover illustration is by Odysseas Stamoglou. The inner artwork features Arrianne Criseyde Pascual, Baukje Jagersma, Beelzy, Chewsome, David Revoy, Enrico Guarnieri, Eric Lee, Filipe Ferreira, Justin Nichol, Kesbet Tree, Livio Fania, Liz de Souza, Matt Preece, Melissa Lipan, Michael Bowling, Mozart Couto, Naghree Greenskin, Neotheta, Nivailis, Paolo Puggioni, R.J. Quiralta, Radian 1, Raghukamath, Ramón Miranda, Reine, Sylvain Boussiron, William Thorup, Elésiane Huve, Amelia Hamrick, Danilo Junior, Ivan Aros, Jennifer Reuter, Karen Kaye Llamas, Lucas Ribeiro, Motion Arc Foundry, Odysseas Stamoglou, Sylvia Ritter, Timothée Giet, Tony Jennison, Tyson Tan, and Wayne Parker.

Made with Krita 2016

Made with Krita 2016

Made with Krita 2016 is 19,95€ excluding VAT in the European Union, excluding shipping. Shipping is 11.25€ outside the Netherlands and 3.65€ inside the Netherlands.

International:

European Union:

 

New fwupd release, and why you should buy a Dell

This morning I released the first new release of fwupd on the 0.8.x branch. This has a number of interesting fixes, but more importantly adds the following new features:

  • Adds support for Intel Thunderbolt devices
  • Adds support for some Logitech Unifying devices
  • Adds support for Synaptics MST cascaded hubs
  • Adds support for the Altus-Metrum ChaosKey device
  • Adds Dell-specific functionality to allow other plugins to turn on TBT/GPIO

Mario Limonciello from Dell has worked really hard on this release, and I can say with conviction: If you want to support a hardware company that cares about Linux — buy a Dell. They seem to be driving the importance of Linux support into their partners and suppliers. I wish other vendors would do the same.

February 07, 2017

Refugee Hope Box

I’m not sure that I’ve ever posted anything non-Fedora / Linux / tech in this blog since I started it over 10 years ago.

My daughter’s school is running a refugee hope box drive. The boxes are packed full of toiletries and other supplies for refugee children. We will drop our box off at school and it will get shipped to the Operation Refugee Child program, where its contents will be packed into a backpack and delivered to a child in the refugee camps. We decided to pack a box for a teenage girl. It includes everything from personal toiletries, to non-perishable snacks, to some fun things like gel pens, markers, a journal, and Lip Smackers.

We explained to our daughter that there are kids like her who got kicked out of their home by bad guys (“Like the Joker?” she asked. Yep, bad guys like him.) There’s one girl we are going to try to help – since she is far away from home and has almost nothing, we are going to help by sending some supplies for her. My daughter loved the idea and was really into it. We spent most of our Saturday this past weekend getting supplies out and about with the kids, until the kids kind of melted down (nap, shopping cart fatigue, etc.) and we had to head back home.

We were so close to being finished but needed a few more items to finish it up, so I set up an Amazon wishlist for the items remaining and posted it to Twitter. I figured other folks might want to go in on it with us and help, and I could always pick up anything else remaining later this week.

It seriously took 22 minutes to get all of the items left to order purchased. I’m totally floored by everyone’s generosity. Our community is awesome. Thank you.

If you would like to help, you can start your own box or buy an item off of the central organization’s Amazon wishlist.

February 06, 2017

Open Desktop Review System : One Year Review

This weekend we had the 2,000th review submitted to the ODRS review system. Every month we’re getting an additional ~300 reviews and about 500,000 requests for reviews from the system. The reviews that have been contributed are in 94 languages, and from 1387 different users.

Most reviews have come from Fedora (which installs GNOME Software as part of the default workstation) but other distros like Debian and Arch are catching up all the time. I’d still welcome KDE software center clients like Discover and Apper using the ODRS although we do have quite a lot of KDE software reviews submitted using GNOME Software.

Out of ~2000 reviews just 23 have been marked as inappropriate, of which I agreed with 7 (inappropriate is supposed to be swearing or abuse, not just being unhelpful) and those 7 were deleted. The mean time between a review being posted that is actually abuse and it being marked as such (or me noticing it in the admin panel) is just over 8 hours, which is certainly good enough. In the last few months 5523 people have clicked the “upvote” button on a review, and 1474 people clicked the “downvote” button on a review. Although that’s less voting than I hoped for, it’s certainly enough to give good quality sorting of reviews to end users in most locales. If you have a couple of hours on your hands, gnome-software --mode=moderate is a great way to upvote/downvote a lot of reviews in your locale.

So, onward to 3,000 reviews. Many thanks to those who submitted reviews already — you’re helping new users who don’t know what software they should install.

Rosy Finches

Los Alamos is having an influx of rare rosy-finches (which apparently are supposed to be hyphenated: they're rosy-finches, not finches that are rosy).

[Rosy-finches] They're normally birds of the snowy high altitudes, like the top of Sandia Crest, and quite unusual in Los Alamos. They're even rarer in White Rock, and although I've been keeping my eyes open I haven't seen any here at home; but a few days ago I was lucky enough to be invited to the home of a birder in town who's been seeing great flocks of rosy-finches at his feeders.

There are four types, of which three have ever been seen locally, and we saw all three. Most of the flock was brown-capped rosy-finches, with two each of black rosy-finches and gray-capped rosy-finches. The upper bird at right, I believe, is one of the blacks, but it might be a grey-capped. They're a bit hard to tell apart. In any case, pretty birds, sparrow sized with nice head markings and a hint of pink under the wing, and it was fun to get to see them.

[Roadrunner] The local roadrunner also made a brief appearance, and we marveled at the combination of high-altitude snowbirds and a desert bird here at the same place and time. White Rock seems like much better roadrunner territory, and indeed they're sometimes seen here (though not, so far, at my house), but they're just as common up in the forests of Los Alamos. Our host said he only sees them in winter; in spring, just as they start singing, they leave and go somewhere else. How odd!

Speaking of birds and spring, we have a juniper titmouse determinedly singing his ray-gun song, a few house sparrows are singing sporadically, and we're starting to see cranes flying north. They started a few days ago, and I counted several hundred of them today, enjoying the sunny and relatively warm weather as they made their way north. Ironically, just two weeks ago I saw a group of about sixty cranes flying south -- very late migrants, who must have arrived at the Bosque del Apache just in time to see the first northbound migrants leave. "Hey, what's up, we just got here, where ya all going?"

A few more photos: Rosy-finches (and a few other nice birds).

We also have a mule deer buck frequenting our yard, sometimes hanging out in the garden just outside the house to drink from the heated birdbath while everything else is frozen. (We haven't seen him in a few days, with the warmer weather and most of the ice melted.) We know it's the same buck coming back: he's easy to recognize because he's missing a couple of tines on one antler.

The buck is a welcome guest now, but in a month or so when the trees start leafing out I may regret that as I try to find ways of keeping him from stripping all the foliage off my baby apple tree, like some deer did last spring. I'm told it helps to put smelly soap shavings, like Irish Spring, in a bag and hang it from the branches, and deer will avoid the smell. I will try the soap trick but will probably combine it with other measures, like a temporary fence.

February 03, 2017

Fri 2017/Feb/03

  • Algebraic data types in Rust, and basic parsing

    Some SVG objects have a preserveAspectRatio attribute, which they use to let you specify how to scale the object when it is inserted into another one. You know when you configure the desktop's wallpaper and you can set whether to Stretch or Fit the image? It's kind of the same thing here.

    Examples of preserveAspectRatio from the SVG spec

    The SVG spec specifies a simple syntax for the preserveAspectRatio attribute; a valid one looks like "[defer] <align> [meet | slice]". An optional defer string, an alignment specifier, and an optional string which can be meet or slice. The alignment specifier can be any one of these strings:

    none
    xMinYMin
    xMidYMin
    xMaxYMin
    xMinYMid
    xMidYMid
    xMaxYMid
    xMinYMax
    xMidYMax
    xMaxYMax

    (Boy oh boy, I just hate camelCase.)

    The C code in librsvg would parse the attribute and encode it as a bitfield inside an int:

    #define RSVG_ASPECT_RATIO_NONE (0)
    #define RSVG_ASPECT_RATIO_XMIN_YMIN (1 << 0)
    #define RSVG_ASPECT_RATIO_XMID_YMIN (1 << 1)
    #define RSVG_ASPECT_RATIO_XMAX_YMIN (1 << 2)
    #define RSVG_ASPECT_RATIO_XMIN_YMID (1 << 3)
    #define RSVG_ASPECT_RATIO_XMID_YMID (1 << 4)
    #define RSVG_ASPECT_RATIO_XMAX_YMID (1 << 5)
    #define RSVG_ASPECT_RATIO_XMIN_YMAX (1 << 6)
    #define RSVG_ASPECT_RATIO_XMID_YMAX (1 << 7)
    #define RSVG_ASPECT_RATIO_XMAX_YMAX (1 << 8)
    #define RSVG_ASPECT_RATIO_SLICE (1 << 30)
    #define RSVG_ASPECT_RATIO_DEFER (1 << 31)

    That's probably not the best way to do it, but it works.

    The SVG spec says that the meet and slice values (represented by the absence or presence of the RSVG_ASPECT_RATIO_SLICE bit, respectively) are only valid if the value of the align field is not none. The code has to be careful to ensure that condition. Those values specify whether the object should be scaled to fit inside the given area, or stretched so that the area slices the object.

    When translating this C code to Rust, I had two choices: keep the C-like encoding as a bitfield, while adding tests to ensure that indeed none excludes meet|slice; or take advantage of the rich type system to encode this condition in the types themselves.

    Algebraic data types

    If one were to not use a bitfield in C, we could represent a preserveAspectRatio value like this:

    typedef struct {
        gboolean defer;
        
        enum {
            None,
            XminYmin,
            XminYmid,
            XminYmax,
            XmidYmin,
            XmidYmid,
            XmidYmax,
            XmaxYmin,
            XmaxYmid,
            XmaxYmax
        } align;
        
        enum {
            Meet,
            Slice
        } meet_or_slice;
    } PreserveAspectRatio;
    	    

    One would still have to be careful that meet_or_slice is only taken into account if align != None.

    Rust has algebraic data types; in particular, enum variants or sum types.

    First we will use two normal enums; nothing special here:

    pub enum FitMode {
        Meet,
        Slice
    }
    
    pub enum AlignMode {
        XminYmin,
        XmidYmin,
        XmaxYmin,
        XminYmid,
        XmidYmid,
        XmaxYmid,
        XminYmax,
        XmidYmax,
        XmaxYmax
    }

    And the None value for AlignMode? We'll encode it like this in another type:

    pub enum Align {
        None,
        Aligned {
            align: AlignMode,
            fit: FitMode
        }
    }

    This means that a value of type Align has two variants: None, which has no extra parameters, and Aligned, which has two extra values align and fit. These two extra values are of the "simple enum" types we saw above.

    If you "let myval: Align", you can only access the align and fit subfields if myval is in the Aligned variant. The compiler won't let you access them if myval is None. Your code doesn't need to be "careful"; this is enforced by the compiler.
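    To make that concrete, here is a small sketch (types abridged from the post; describe is a hypothetical helper, not librsvg API) showing that a match is the only way to reach the fields inside the Aligned variant:

```rust
pub enum FitMode { Meet, Slice }
pub enum AlignMode { XmidYmid }

pub enum Align {
    None,
    Aligned { align: AlignMode, fit: FitMode },
}

// `describe` is a made-up helper for illustration. Note there is no way
// to write `a.fit` directly: Align itself has no such field, so the
// compiler forces you to match on the variant first.
fn describe(a: &Align) -> &'static str {
    match a {
        Align::None => "no alignment",
        Align::Aligned { fit: FitMode::Meet, .. } => "scaled to fit",
        Align::Aligned { fit: FitMode::Slice, .. } => "scaled to slice",
    }
}

fn main() {
    assert_eq!(describe(&Align::None), "no alignment");
    assert_eq!(describe(&Align::Aligned { align: AlignMode::XmidYmid,
                                          fit: FitMode::Meet }),
               "scaled to fit");
}
```

    The match is also checked for exhaustiveness: leave out an arm and the code won't compile.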

    With this in mind, the final type becomes this:

    pub struct AspectRatio {
        pub defer: bool,
        pub align: Align
    }

    That is, a struct with a boolean field for defer, and an Align variant type for align.

    Default values

    Rust does not let you have uninitialized variables or fields. For a compound type like our AspectRatio above, it would be nice to have a way to create a "default value" for it.

    In fact, the SVG spec says exactly what the default value should be if a preserveAspectRatio attribute is not specified for an SVG object; it's just "xMidYMid", which translates to an enum like this:

    let aspect = AspectRatio {
        defer: false,
        align: Align::Aligned {
            align: AlignMode::XmidYmid,
            fit: FitMode::Meet
        }
    };

    One nice thing about Rust is that it lets us define default values for our custom types. You implement the Default trait for your type, which has a single default() method, and make it return a value of your type initialized to whatever you want. Here is what librsvg uses for the AspectRatio type:

    impl Default for Align {
        fn default () -> Align {
            Align::Aligned {
                align: AlignMode::XmidYmid,
                fit: FitMode::Meet
            }
        }
    }
    
    impl Default for AspectRatio {
        fn default () -> AspectRatio {
            AspectRatio {
                defer: false,
                align: Default::default ()    // this uses the value from the trait implementation above!
            }
        }
    }

    Librsvg implements the Default trait for both the Align variant type and the AspectRatio struct, as it needs to generate default values for both types at different times. Within the implementation of Default for AspectRatio, we invoke the default value for the Align variant type in the align field.
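    As a quick usage sketch (types abridged from the post, with Debug/PartialEq derives added here just so values can be compared), getting the spec's "xMidYMid meet" default then becomes a one-liner:

```rust
#[derive(Debug, PartialEq)]
pub enum FitMode { Meet, Slice }
#[derive(Debug, PartialEq)]
pub enum AlignMode { XmidYmid }

#[derive(Debug, PartialEq)]
pub enum Align {
    None,
    Aligned { align: AlignMode, fit: FitMode },
}

#[derive(Debug, PartialEq)]
pub struct AspectRatio {
    pub defer: bool,
    pub align: Align,
}

impl Default for Align {
    fn default() -> Align {
        Align::Aligned { align: AlignMode::XmidYmid, fit: FitMode::Meet }
    }
}

impl Default for AspectRatio {
    fn default() -> AspectRatio {
        AspectRatio { defer: false, align: Default::default() }
    }
}

fn main() {
    // One line gives the SVG spec's default for a missing attribute.
    let ratio = AspectRatio::default();
    assert_eq!(ratio,
               AspectRatio { defer: false,
                             align: Align::Aligned { align: AlignMode::XmidYmid,
                                                     fit: FitMode::Meet } });
}
```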

    Simple parsing, the Rust way

    Now we have to implement a parser for the preserveAspectRatio strings that come in an SVG file.

    The Result type

    Rust has a FromStr trait that lets you take in a string and return a Result. Now that we know about variant types, it will be easier to see what Result is about:

    #[must_use]
    enum Result<T, E> {
       Ok(T),
       Err(E),
    }

    This means the following. Result is an enum with two variants, Ok and Err. The first variant contains a value of whatever type you want to mean, "this is a valid parsed value". The second variant contains a value that means, "these are the details of an error that happened during parsing".

    Note the #[must_use] tag in Result's definition. This tells the Rust compiler that return values of this type must not be ignored: you can't ignore a Result returned from a function, as you would be able to do in C. And then, the fact that you must see if the value is an Ok(my_value) or an Err(my_error) means that the only way to ignore an error value is to actually write an empty stub to catch it... at which point you may as well write the error handler properly.

    The FromStr trait

    But we were talking about the FromStr trait as a way to parse strings into values! This is what it looks like for our AspectRatio:

    pub struct ParseAspectRatioError { ... };
    
    impl FromStr for AspectRatio {
        type Err = ParseAspectRatioError;
    
        fn from_str(s: &str) -> Result<AspectRatio, ParseAspectRatioError> {
            ... parse the string in s ...
    
            if parsing succeeded {
                return Ok (AspectRatio { ... fields set to the right values ... });
            } else {
                return Err (ParseAspectRatioError { ... fields set to error description ... });
            }
        }
    }

    To implement FromStr for a type, you implement a single from_str() method that returns a Result<MyType, MyErrorType>. If parsing is successful you return the Ok variant of Result with your parsed value as Ok's contents. If parsing fails, you return the Err variant with your error type.

    Once you have that implementation, you can simply call "let my_result = AspectRatio::from_str ("xMidYMid");" and piece apart the Result as with any other Rust code. The language provides facilities to chain successful results or errors so that you don't have nested if()s and such.
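    A minimal illustration of that chaining, using the standard library's i32 parser (which also goes through FromStr) rather than AspectRatio, since the real parser body is elided above:

```rust
// parse_doubled chains a parse and a transformation without nested ifs:
// map() only runs the closure on the Ok variant, and passes Err through.
fn parse_doubled(s: &str) -> Result<i32, std::num::ParseIntError> {
    s.parse::<i32>().map(|n| n * 2)
}

fn main() {
    assert_eq!(parse_doubled("21"), Ok(42));

    // An error propagates through the chain untouched.
    assert!(parse_doubled("not a number").is_err());

    // Or fall back to a fixed value on error, much as an SVG renderer
    // falls back to the spec's default for an invalid attribute.
    assert_eq!(parse_doubled("oops").unwrap_or(0), 0);
}
```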

    Testing the parser

    Rust makes it very easy to write tests. Here are some for our little parser above.

    #[test]
    fn parsing_invalid_strings_yields_error () {
        assert_eq! (AspectRatio::from_str (""), Err(ParseAspectRatioError));
    
        assert_eq! (AspectRatio::from_str ("defer foo"), Err(ParseAspectRatioError));
    }
    
    #[test]
    fn parses_valid_strings () {
        assert_eq! (AspectRatio::from_str ("defer none"),
                    Ok (AspectRatio { defer: true,
                                      align: Align::None }));
    
        assert_eq! (AspectRatio::from_str ("XmidYmid"),
                    Ok (AspectRatio { defer: false,
                                      align: Align::Aligned { align: AlignMode::XmidYmid,
                                                              fit: FitMode::Meet } }));
    }

    Using C-friendly wrappers for those fancy Rust enums and structs, the remaining C code in librsvg now parses and uses AspectRatio values that are fully implemented in Rust. As a side benefit, the parser doesn't use temporary allocations; the old C code built up a temporary list from split()ting the string. Rust's iterators and string slices essentially let you split() a string with no temporary values in the heap, which is pretty awesome.
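    A tiny sketch of what that looks like: split_whitespace() hands back &str slices that borrow from the original string, so the string data is never copied to the heap while tokenizing (the example string is made up, but it's the shape a preserveAspectRatio value takes):

```rust
// Each token is a &str slice pointing into `s`, not a fresh allocation.
fn first_token(s: &str) -> Option<&str> {
    s.split_whitespace().next()
}

fn main() {
    let s = "defer xMidYMid slice";

    let tokens: Vec<&str> = s.split_whitespace().collect();
    assert_eq!(tokens, ["defer", "xMidYMid", "slice"]);

    assert_eq!(first_token(s), Some("defer"));
}
```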

February 02, 2017

Introducing Neon – a way to Quickly review stuff and share with your friends

Over at silverorange, we’ve been working on a new product called Neon. The goal is to see if we can create compelling reviews with limited input (often from a phone). Our current take boils a review down to a few basic elements:

  1. Title (what are you reviewing)
  2. Photo
  3. Pros & Cons
  4. A rating from 0 to 10
  5. An emoji to represent how you feel about it

You can also optionally add a longer description, a link to where you can buy it, and the price you paid.

For example, here’s a cutting and insightful review I wrote about a mouse pad.

Neon is in a closed alpha right now, which means that anyone can read the reviews, but to create reviews, you need to be invited to try it out. If you’re interested in trying out the alpha, or being notified when it is opened up to a larger audience, you can leave your email at neon.io.

Why I’m a Social Media Curmudgeon (oh, and follow my blog on Twitter)

I wanted to clarify for myself why it is that I don’t use Facebook or (for the most part) Twitter. Brace yourself for self-justification and equivocation.

First caveat: I actually do have a Twitter account (@sgarrity), but I don’t post anything (sort of, more on this later). I use it to follow people.

I don’t dislike Twitter or Facebook.  They are both amazing systems. They both took blogging and messaging and made them way easier on a massive scale. As a professional web designer and developer, I respect the craft with which both Facebook and Twitter have built their platforms. I regularly rely on open-source projects that both companies produce and finance (thanks!).

Messaging and communication are too important to be controlled by a private corporation. For all of their faults, our phone or text messaging services allow portability. If I have a problem with my phone company, I can take my phone number with me to another company. I can talk to someone regardless of what phone company they have chosen. The same is true of the web and of email (as long as you use your own domain name).

I’m not an extremist. I don’t think you’re doing something wrong if you use these services. I would like to see people use more open alternatives, but I understand that for many, the ease and convenience of platforms like Facebook and Twitter are worth the trade-offs.

All of this is to say that you can now follow @aov_blog on Twitter for updates on my Acts of Volition blog posts.

While I’m contradicting myself, I also have a third Twitter account, @steven_reviews, which I created to share reviews for a new site I’m helping to develop and test at work (more on that soon). While I may opt out of these services personally, if there’s a compelling reason for me to use them at work, or my reluctance proves a significant hindrance for those around me, the scales of the trade-offs may tip in a different direction.

Oh, and I also help manage the @silverorangeinc Twitter account as part of my job.

Now, get off my #lawn.

February 01, 2017

darktable 2.2.3 released

we're proud to announce the third bugfix release for the 2.2 series of darktable, 2.2.3!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.3.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.2.3.tar.xz
1b33859585bf283577680c61e3c0ea4e48214371453b9c17a86664d2fbda48a0  darktable-2.2.3.tar.xz
$ sha256sum darktable-2.2.3.dmg
1ebe9a9905b895556ce15d556e49e3504957106fe28f652ce5efcb274dadd41c  darktable-2.2.3.dmg

Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please help us by visiting https://raw.pixls.us/ and making sure that we have the full raw sample set for your camera under CC0 license!

and the changelog as compared to 2.2.2 can be found below.

Bugfixes:

  • Fix fatal crash when generating preview for medium megapixel count (~16MP) Bayer images
  • Properly subtract black levels: respect the even/odd -ness of the raw crop origin point
  • Collection module: fix a few UI quirks

Krita 3.1.2 released!

Krita 3.1.2, released on February 1st 2017, is the first bugfix release in the 3.1 release series. But there are a few extra new features thrown in for good measure!

Audio Support for Animations

Import audio files to help with syncing voices and music. In the demo on the left, Timothée Giet shows how scrubbing and playback work when working with audio.

  • Available audio formats are WAV, MP3, OGG, and FLAC
  • A checkbox was added in the Render animation dialog to include the audio while exporting
  • See the documentation for more information on how to set it up and use the audio import feature.

Audio is not yet available in the Linux appimages. It is an experimental feature, with no guarantee that it works correctly yet — we need your feedback!

Other New Features

  • Ctrl key continue mode for Outline Selection tool: if you press ctrl while drawing an outline selection, the selection isn’t completed when you lift the stylus from the tablet. You can continue drawing the selection from an arbitrary point.
  • Allow deselection by clicking with a selection tool: you can now deselect with a single click with any selection tool.
  • Added a checkbox for enabling HiDPI to the settings dialog.
  • Removed the export to PDF functionality. It has too many issues right now. (BUG:372439)

There are also a lot of bug fixes. Check the full release notes!

Get the Book!

If you want to see what others can do with Krita, get Made with Krita 2016, the first Krita artbook, now available for pre-order!

Made with Krita 2016

Made with Krita 2016

Give us your feedback!

Almost 1000 people have already filled in the 2017 Krita Survey! Tell us how you use Krita, on what hardware and what you like and don’t like!

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

A snap image for the Ubuntu App Store will be available soon. You can also use the Krita Lime PPA to install Krita 3.1.2 on Ubuntu and derivatives.

OSX

Source code

md5sums

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here:
0x58b9596c722ea3bd.asc
. The signatures are here.

January 31, 2017

Helping new users get on IRC

Fedora Hubs

Hubs and Chat Integration Basics

Hubs uses Freenode IRC for its chat feature. I talked quite a bit about the basics of how we think this could work (see “Fedora Hubs and Meetbot: A Recursive Tale” for all of the details.)

One case that we have to account for is users who are new Fedora contributors and don’t already have an IRC nick or even experience with IRC. The tricky part is that we have to get them identified with NickServ, and keep them identified seamlessly and automatically after netsplits and other events that would cause them to lose their authentication, without their necessarily being aware that the identification process is going on. NickServ auth is kind of an implementation detail of IRC that I don’t think users, particularly those new to and unfamiliar with IRC, need to be concerned with.

Nickserv?

“Nickserv? What’s Nickserv?” you ask. Well. Different IRC networks have a nickserv or something similar to it.

On IRC, people chat using the same nickname and come to be known by their nickname. For example, I’ve been mizmo on freenode IRC for well over a decade and am known by that name, similarly to how people know me by my email address or phone number. IRC is from the old and trusting days of the internet, however, so there’s nothing in IRC to guarantee that I could keep the nick mizmo if I logged out and someone else logged in using ‘mizmo’ as their nickname! In fact, this is/was a common way to attack or annoy people in IRC – steal their nick.

In comes Nickserv to save the day – it’s a bot of sorts that Freenode runs on its IRC network that registers nicknames and provides an authentication system to password protect those names. Someone can still take your nick if you’re offline, but if you’ve registered it, you can use your password and Nickserv to knock them off so you can reclaim your nick.

Yes, IRC definitely has a kind of a weird and rather quaint authentication system. Our challenge is getting people through it to be able to participate without having to worry about it!

Configuration Questions

“Well, wait,” you ask. “If they aren’t even aware of it, how do they set their nickserv password? What if they want to ‘graduate’ to a non-hubs IRC client and need their nickserv password? What if they want to change their password?”

I considered having Hubs silently auto-generate a nickserv password and manage it on its own, potentially with a way of viewing / changing this password in user settings. I opted instead to let users create their own password, ultimately deciding that if it were silently generated, they wouldn’t be aware it existed, might end up confused if they tried a different client, and might even post their FAS password over non-SSL plaintext IRC…

(Some other config we should allow eventually that we’ve discussed in the weekly hubs meetings – allowing users to use a different IRC bouncer than Hubs’ for IRC, and offering the connection details for our bouncer so they could use Hubs as a client as well as a third party client without issue.)

The Mockups

So here is an attempt to mock up the workflow of setting up IRC for a Hubs user who has never used IRC before, on Hubs or anywhere else. Note these do not address the case of someone new to Hubs who hasn’t enabled the IRC feature but who does have a registered nick already that they have not entered into FAS – these will need modification to address that case. (Namely a link on the first screen to fill out their Nickserv auth details in settings.)

Widget in Context

This is what the as-of-yet unactivated IRC widget would look like for a user in context. The user is visiting a team hub, and the admin of that hub configured an IRC widget to be present by default for that hub. To join the chat, the user needs to enable IRC as a feature for their account on hubs so the widget offers to do that for them.

mockup of a fedora hubs screen showing a widget on the right hand side for enabling IRC chat

Chatter thumbnails

The top of the widget has a section that has small thumbnails of the avatars of people currently in the room (my thought is in order of who spoke most recently) with a headcount for the total number of people in the room. The main idea behind this is to try to encourage people to join the conversation – maybe they will see a friend’s avatar and feel like the room could be more approachable, maybe we tap into some primal FOMO (Fear Of Missing Out) by alluding to the activity that is happening without revealing it.

Call to action

Next we have a direct call to action, “Enable chat in Hubs to chat with other people in this hub” with an action-oriented button label, “Enable Hubs Chat.” This, I hope, clearly lets the user know what would happen if they clicked the button.

Hiding control

At the bottom, a small link: “Hide this notification.” Sometimes ‘upsell’ nags can be irritating if you have no intention of participating. If someone is sure they do not want to enable IRC in Hubs (perhaps they just want to use their own client and not bother with Hubs for this,) this will let them hide it and will ‘roll up’ the IRC widget to take up less space.

Here’s a close-up of the widget:

closeup of the IRC activation widget

Registration Wizard

So once you’ve decided to enable IRC in Hubs, then what?

Since selecting and registering a nick needs, I think, to be a multi-step process (in part because Freenode makes it one), I thought a wizard might be the best approach, so that is how I mocked it up. A rough outline of the wizard steps is as follows:

  • Figure out a nickname you want to use and make sure it’s available
  • Provide email address and password for registration
  • Register nickname
  • Verify email
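Under the hood, Freenode registration happens by messaging NickServ. As a rough sketch of the middle two steps (the function name is hypothetical, but `REGISTER <password> <email>` is Freenode's actual NickServ command syntax), the raw IRC lines a client such as Hubs' bouncer might send look like this:

```python
def nickserv_register_lines(nick, password, email):
    """Raw IRC lines a client would send to claim `nick` and then
    register it with Freenode's NickServ (password and email required)."""
    return [
        f"NICK {nick}",                                    # claim the nickname
        f"PRIVMSG NickServ :REGISTER {password} {email}",  # request registration
    ]

lines = nickserv_register_lines("fedorafan", "s3cret", "user@example.com")
```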

Choosing a nickname

This is a little weird, because of how IRC works. Let me explain how I would like it to work, and how I think it probably will end up working because of how IRC & nickserv work.

I would like to present the user with a bunch of options to pick from for their nickname. I want to do this because I think coming up with a clever nickname involves a high cognitive load they may not be up for, and at the very least offering suggestions could be good brain food for them making up their own (we offer a way to do that at the bottom of this screen, as well.)
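For illustration, here is one way such name-based heuristics might look. This is purely a sketch, not the actual Hubs algorithm, and the function name is made up:

```python
def suggest_nicks(full_name, fas_username):
    """Generate candidate IRC nicks from a user's full name and FAS
    username.  Illustrative heuristics only."""
    parts = full_name.lower().split()
    first, last = parts[0], parts[-1]
    candidates = [
        fas_username,        # the FAS username as-is
        first + last[0],     # first name + last initial
        first[0] + last,     # first initial + last name
        f"{first}_{last}",   # underscore-joined full name
    ]
    # drop duplicates while preserving order
    seen = set()
    return [c for c in candidates if not (c in seen or seen.add(c))]
```

For example, `suggest_nicks("Jane Smith", "jsmith")` yields `["jsmith", "janes", "jane_smith"]` (the duplicate "jsmith" is collapsed).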

Ideally, you wouldn’t offer something to someone and then tell them it’s not available. Unfortunately, I don’t think there’s an easy way to check whether or not a suggested nick is available without actually trying to use it on Freenode and seeing if you’re able to use it or if Nickserv scolds you for trying to use someone else’s registered nick. I don’t know if it’s possible to check nick availability in a less expensive way? These designs are based on the assumption that this is the only way to check nick availability:

mockup: irc nick selection

The model here is that we’d use some heuristics based on your name and FAS username to suggest potential nicks, with a freeform option if you have one in mind. If you click on a name, we then check it for availability, displaying a spinner and a short message while we check. This way, we only check the nicks that you’re actually interested in rather than wasting cycles on nicks you have no interest in.
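If availability really does have to be probed live, the check boils down to classifying the server's replies after attempting the nick. A hedged sketch of that classification (numeric 433, ERR_NICKNAMEINUSE, is a real IRC numeric for a nick that is currently connected; the registered-nick case relies on spotting NickServ's "please identify" notice, whose exact wording varies by network):

```python
def classify_nick_reply(line):
    """Classify one raw IRC server line received while probing a nick.

    433 (ERR_NICKNAMEINUSE) means someone is using the nick right now;
    a NickServ notice asking you to identify means it is registered.
    Everything else is treated as inconclusive ("maybe available").
    """
    parts = line.split()
    if len(parts) > 1 and parts[1] == "433":
        return "in use"
    if "NOTICE" in parts and "NickServ" in line and "identify" in line.lower():
        return "registered"
    return "maybe available"
```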

If a nick is available, it turns green and we display a “Register” button. If not, we display a message to let them know it’s not available:

IRC nick availability mockup

Once it’s finished checking on all of the nicks, it might look like this:

IRC nick availability mockup - all lookups complete

Provide email and password for registration

Freenode Nickserv registration requires providing an email address and a password. This screen is where we collect that.

We offer to submit their FAS account email address, but also allow them to freeform the email address they’d prefer to be associated with Freenode. We provide the rationale for why the email address is needed (account recovery) and should probably refer to Freenode’s privacy policy, if it has one, regarding how the address is used. There are also fields for setting the password.

Verify Email Address

This is the most fragile component of this workflow. Freenode will allow you to keep a registration for 24 hours; if you do not confirm your email address in that time, you will lose your registration. Rather than explain all of this, we just ask that users check their email (supplying them the address they gave us so they know which account to check) and verify the email address using the link provided.

Problem: We don’t control these emails, freenode does. What if the user doesn’t receive the verification email? We can’t resend the email because we didn’t send it in the first place. No easy answer here. We might need some language to talk about checking your spam folder, and how long it might take (it seems to be pretty quick in testing.) We could let them go ahead and start chatting while they wait for the email, but what would we do if it never gets verified and they lose the registration? Messy. But here’s the mockup:

Screen for user to confirm email address for freenode registration

Finish

This is just a screen to let them know they’re all set. After clicking through this screen, they should be logged into the IRC channel for the hub they initiated the registration flow from.

IRC registration finish screen

Thoughts?

This is a first cut at mocking up this particular flow. I’m actively working on other ones (including post-setup configuration, and turning IRC on/off on individual hubs, which is the same as joining or leaving a channel.) If you have any ideas for solving some of the issues I brought up or any feedback at all, I’d love to hear it!

I find Jeff Atwood’s pragmatic reaction to the Trump presidency to be hopeful in his specific plans and actions. Well said.

January 30, 2017

Do not show up late to a meeting with a coffee.

As a general rule: Do not show up late to a meeting with a coffee.

I did this today, but you never should. The message is clear.

darktable 2.2.2 released

we're proud to announce the second bugfix release for the 2.2 series of darktable, 2.2.2!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.2.2.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksum is:

766d7d734e7bd5a33f6a6932a43b15cc88435c64ad9a0b20410ba5b4706941c2 darktable-2.2.2.tar.xz
52fd0e9a8bb74c82abdc9a88d4c369ef181ef7fe2b946723c5706d7278ff2dfb darktable-2.2.2.dmg
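The checksum can be verified with `sha256sum -c` on most Linux systems. As a minimal alternative sketch, here is a Python function that streams a file through SHA-256 (the filename in the comment refers to the tarball above):

```python
import hashlib

def sha256_of(path, bufsize=1 << 20):
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

# Compare against the published checksum, e.g.
# sha256_of("darktable-2.2.2.tar.xz") should equal
# "766d7d734e7bd5a33f6a6932a43b15cc88435c64ad9a0b20410ba5b4706941c2"
```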

Important note: to make sure that darktable can keep on supporting the raw file format for your camera, please help us by visiting https://raw.pixls.us/ and making sure that we have the full raw sample set for your camera under CC0 license!

and the changelog as compared to 2.2.1 can be found below.

New features:

  • color look up table module: include preset for helmholtz/kohlrausch monochrome
  • Lens module: re-enable tiling
  • Darkroom: fix some artefacts in the preview image (not the main view!)
  • DNG decoder: support reading one more white balance encoding method
  • Mac: display an error when too old OS version is detected
  • Some documentation and tooltips updates

Bugfixes:

  • Main view no longer grabs focus when mouse enters it. Prevents accidental catastrophic image rating loss.
  • OSX: fix bauhaus slider popup keyboard input
  • Don't write all XMP when detaching tag
  • OSX: don't do PPD autodetection, gtk did their thing again.
  • Don't show database lock popup when DBUS is used to start darktable
  • Actually delete duplicate's XMP when deleting duplicated image
  • Ignore UTF-8 BOM in GPX files
  • Fix import of LR custom tone-curve
  • Overwrite Xmp rating from raw when exporting
  • Some memory leak fixes
  • Lua: sync XMPs after some tag manipulations
  • Explicitly link against math library

Base Support:

  • Canon PowerShot SX40 HS (dng)
  • Fujifilm X-E2S
  • Leica D-LUX (Typ 109) (4:3, 3:2, 16:9, 1:1)
  • Leica X2 (dng)
  • Nikon LS-5000 (dng)
  • Nokia Lumia 1020 (dng)
  • Panasonic DMC-GF6 (16:9, 3:2, 1:1)
  • Pentax K-5 (dng)
  • Pentax K-r (dng)
  • Pentax K10D (dng)
  • Sony ILCE-6500

Noise Profiles:

  • Fujifilm X-M1
  • Leica X2
  • Nikon Coolpix A
  • Panasonic DMC-G8
  • Panasonic DMC-G80
  • Panasonic DMC-G81
  • Panasonic DMC-G85

January 27, 2017

Making aliases for broken fonts

A web page I maintain (originally designed by someone else) specifies Times font. On all my Linux systems, Times displays impossibly tiny, at least two sizes smaller than any other font that's ostensibly the same size. So the page is hard to read. I'm forever tempted to get rid of that font specifier, but I have to assume that other people in the organization like the professional look of Times, and that this pathologic smallness of Times and Times New Roman is just a Linux font quirk.

In that case, a better solution is to alias it, so that pages that use Times will choose some larger, more readable font on my system. How to do that was in this excellent, clear post: How To Set Default Fonts and Font Aliases on Linux .

It turned out Times came from the gsfonts package, while Times New Roman came from msttcorefonts:

$ fc-match Times
n021003l.pfb: "Nimbus Roman No9 L" "Regular"
$ dpkg -S n021003l.pfb
gsfonts: /usr/share/fonts/type1/gsfonts/n021003l.pfb
$ fc-match "Times New Roman"
Times_New_Roman.ttf: "Times New Roman" "Normal"
$ dpkg -S Times_New_Roman.ttf
dpkg-query: no path found matching pattern *Times_New_Roman.ttf*
$ locate Times_New_Roman.ttf
/usr/share/fonts/truetype/msttcorefonts/Times_New_Roman.ttf

(dpkg -S doesn't find the file because msttcorefonts is a package that downloads a bunch of common fonts from Microsoft. Debian can't distribute the font files directly due to licensing restrictions.)

Removing gsfonts fonts isn't an option; aside from some documents and web pages possibly not working right (if they specify Times or Times New Roman and don't provide a fallback), removing gsfonts takes gnumeric and abiword with it, and I do occasionally use gnumeric. And I like having the msttcorefonts installed (hey, gotta have Comic Sans! :-) ). So aliasing the font is a better bet.

Following Chuan Ji's page, linked above, I edited ~/.config/fontconfig/fonts.conf (I already had one, specifying fonts for the fantasy and cursive web families), and added these stanzas:

    <match>
        <test name="family"><string>Times New Roman</string></test>
        <edit name="family" mode="assign" binding="strong">
            <string>DejaVu Serif</string>
        </edit>
    </match>
    <match>
        <test name="family"><string>Times</string></test>
        <edit name="family" mode="assign" binding="strong">
            <string>DejaVu Serif</string>
        </edit>
    </match>

The page says to log out and back in, but I found that restarting firefox was enough. Now I could load up a page that specified Times or Times New Roman and the text is easily readable.

Install openSUSE Tumbleweed + KDE on MacBook 2015

It is pretty easy to install openSUSE Linux on a MacBook as the operating system. However, there are some pitfalls which can cause trouble. This article gives some hints about a dual boot setup with OS X 10.10 and the (at the time of writing) current openSUSE Tumbleweed 20170104 (oS TW) on a MacBookPro from early 2015. A recent Linux kernel, like the one in TW, is advisable as it provides better hardware support.

The LiveCD can be downloaded from www.opensuse.org and written with the ImageWriter GUI to a USB stick (~1GB). I chose the Live KDE one and it ran well on a first test. To boot it, hold the Option/Alt key after the first sound, once the display lights up, and wait for the disk selection icon. Put the USB key with Linux in a USB port, wait until the removable media icon appears, and select it for boot. For me all went fine: the internal display, sound, touchpad and keyboard were detected and worked well.

After that test, it was a good time to back up all data from the internal flash drive. I wrote a compressed disk image to a stick using the Unix dd command. With that image and the live media I was able to recover in case anything went wrong.

It is not easy to satisfy OS X with its journaled HFS and the introduced logical volume layout, which comes with a separate repair partition directly after the main OS partition. That combination is pretty fragile and should not be touched. The rescue partition can be booted with the Command key + r pressed. External tools failed for me, so I booted into rescue mode and used the OS X diskutil tool, or its Disk Utility GUI counterpart. The tool allows splitting the disk into several partitions; the EFI and rescue partitions are hidden in the GUI. The newly created additional partitions can be formatted to exfat and later be modified for the Linux installation. One additional HFS partition was created for sharing data between OS X and Linux with the comfortable Unix attributes. The well-known exfat, used by many bigger USB sticks, is a possible option as well, but it needs the exfat-kmp kernel module, which is not installed by default due to Microsoft's patent license policy for the file system. In order to write to HFS from Linux, any HFS partition must have the journal feature switched off. This can be done inside the OS X Disk Utility GUI by selecting the data partition, holding the Alt key and searching the menu for the "disable journaling" entry.

After rebooting into the Live media, I clicked on the Install icon on the desktop background and started openSUSE’s YaST tool. Depending on the available space, it might be a good idea to disable the btrfs filesystem snapshot feature, as it can eat up lots of disk space during each update. Another pitfall is the boot stage: select the secure GRUB EFI mode there, as GRUB needs special handling for the required EFI boot process. That’s it. Finish the install and you should be able to reboot into Linux with the Alt key.

My MacBook unfortunately has a defect: its Boot Manager is very slow. Erasing and reinstalling OS X did not fix the issue. To circumvent it, I need to reset the NVRAM by pressing Alt+Cmd+r+p at boot start for around 14 seconds, until the display goes dark, then hold Alt at the next boot sound, select the EFI TW disk in the Apple Boot Manager, and from there go fluently through the boot process. Without that extra step, the keyboard and mouse might not respond in Linux at all, except for the power button. A warm reboot from Linux works fine; OS X does a cold reboot and needs the extra sequence.

KDE’s Plasma needs some configuration to run properly on a high resolution display. Additional monitors can be connected and easily configured with the kscreen SystemSettings module. Hibernate works fine. Currently the notebook’s SD slot is ignored, and the facetime camera has no ready oS packages. Battery run time can be extended by spartan power consumption (less brightness, fewer USB devices and pulseaudio -k; check with powertop), but it is not too far from OS X anyway.

January 26, 2017

FreeCAD Arch development news

It's been a long time since I posted here. Writing regularly on a blog proves more difficult than I thought. I've had this blog for a long time, but never really tried to make myself write regularly. You look elsewhere a little bit, and when you get back to it, two months have gone by... Since this post is aimed...