October 16, 2018

Campaign 2018: Join the new Development Fund

Today the Blender Foundation releases the Blender Development Fund 2.0 – an updated donation system created to empower the community and to ensure the future growth of Blender.

Join the campaign and become a member on fund.blender.org.

The community as the driving force behind Blender development

Since its beginning in 2002, Blender has been a public and community-driven project, where hundreds of people collaborate online with the shared goal of developing a great 3D creation suite, free for everyone.

Contributions to Blender range from new features and bug fixes to donations, documentation and tutorials. The Blender user community has always been a strong force in driving Blender development.

With the start of the Blender Institute and the Open Movie projects, Blender became more professional and suitable for CG production. This content-driven development model has helped bring Blender to where it is now: embraced by professionals, studios and big players in the CG industry.

With so many people and companies depending on Blender, its future can’t be left to chance. That’s why the Blender Foundation keeps playing an important role – to be a neutral and independent force that manages the blender.org infrastructure and supports the active contributors for the long term.

Development Fund goals

The Blender Development Fund was established in 2011, with the following goals:

  • Support developers with grants to work on projects with well-defined objectives.
  • Support work on the core blender.org infrastructure and support activities such as patch reviews, triaging and bug fixing.
  • Enable the best talents from the developer community to work full-time on generally agreed projects.

These goals are still relevant today, and as Blender has grown in popularity over the years, the scale and complexity of these goals have grown equally.

Blender 2.8 Code Quest: a success story

This year’s Code Quest campaign has successfully pushed Blender 2.8 to where it is now. The beta is being wrapped up and is close to release.

Having developers work full-time on Blender, preferably under the same roof, is vital for a project as big as Blender. As a result, several bigger decisions could be made for 2.8 – for example, a much better design for the tools system. Everyone loved the near-daily reports and videos we made – it was vibrant and energizing, and it reached out to other developers and users in an unsurpassed way.

Unfortunately the Quest lasted just three months. Most developers went home or moved on to other tasks, and progress slowed down to a level we weren’t used to anymore. If only we could bring everyone back to work on Blender!

Development Fund 2.0

That’s where the Development Fund comes in. However, the Fund was hardly promoted, supported only PayPal, and had to be managed manually. Time for an upgrade, and time to implement a feature that has been wanted for many years – the badges system.

Campaign Targets

Currently the (old) Development Fund has 400 subscribers, bringing in 5,500 euros per month in fixed income. We will contact all of them to migrate their subscriptions to the new system. We also have bigger Development Fund contributors paying less regular or variable amounts. In total the Development Fund brings in roughly 12,000 euros per month.

Ensuring continuity is the primary goal of the Blender Development Fund. We have defined two funding milestones that will allow us to achieve this goal.

  • 25K Euro/month: the main campaign target. With this budget the fund can support 6 full-timers, including a position for smaller projects.
  • 50K Euro/month: the stretch goal. While this might seem an ambitious goal, this was the monthly budget during the Code Quest. We supported 10 full-timers, including a position for docs/videos and a position for smaller projects.

The funding progress towards these goals can be monitored in real time on fund.blender.org.

Transparency

All Development Fund grants and supported projects have been published on blender.org since 2011. To improve transparency and involvement we will also:

  • post a half-year report on past results and a roadmap proposal for projects to be supported in the coming half year. The reports will be shared on the code.blender.org blog (and mailing lists or the devtalk forum), open for everyone to discuss and give feedback on.
  • spend a small but relevant percentage (5-8%) of the budget on making development projects visible and accessible in general. That means improving communication (sending as well as receiving!), better technical documentation, and attention to onboarding new developers.
  • aim for the widest possible consensus on roadmaps. We expect that with the badges, Development Fund members have a good way to make sure they’re being heard.

Final decisions on assigning developer grants will be made by the Blender Foundation, verified and confirmed by blender.org development project administrators – the five top contributors to Blender.

A voting or polling mechanism is not planned – although it’s open for discussion and review, especially when roadmaps don’t get wide endorsement.

How to contribute

Individual membership and Community Badges

We offer a Development Fund membership based on a small recurring donation, starting at 5 euros per month. As a reward, members can get “badges” (tokens that show their support on prominent Blender websites) and a name mention on the blender.org website.

Companies and organisations are also welcome to sign up for an individual membership and have their company name and URL mentioned.

Corporate membership

This higher-rated membership level starts at 5,000 euros per year and is for organizations that want the option to monitor in more detail the projects that will be funded with their contributions. They will get personal attention from the Blender team for strategic discussions and feedback on the roadmap.

In addition to this, corporate members can get a prominent name and logo mention on blender.org and in official publications by the Blender Foundation.

Amsterdam. October 16, 2018
Ton Roosendaal, Blender Foundation chairman
contact: foundation(at)blender.org

Krita at the University of La Plata

Sebastian Labi has been invited to present Krita at the free-software tools laboratory (Laboratorio de herramientas de software libre) of the University of La Plata. He will talk about digital illustration and give a hands-on demonstration of how to use Krita for digital illustration.

SLAD-FBA (Software Libre para Arte y Diseño – Free Software for Art and Design) is a new research and teaching unit at the Faculty of Fine Arts that promotes the knowledge and use of free software in the academic training of the University of La Plata, aiming to build systematic and sustained knowledge of how art and open source interact in an academic setting.

The event will take place on Wednesday, October 31st at 14:00.

October 15, 2018

Interview with Sira Argia

Could you tell us something about yourself?

Hi, my name is Sira Argia. I am a digital artist who uses FLOSS (Free/Libre Open Source Software) for work. I come from Indonesia, Sanggau Regency, West Kalimantan.

Do you paint professionally, as a hobby artist, or both?

I don’t think that I am a professional in my digital painting work, because there are so many things I need to learn before I achieve something that can be called “professional”. At the beginning it was just a hobby. I remember my first painting was very bad, haha. But now I think my artworks are far better than the first. I believe in “practice makes perfect”.

What genre(s) do you work in?

I call it a “semi-realistic” art style. I usually use an anime/manga or Disney style for the character’s face and a little realistic shading.

Whose work inspires you most — who are your role models as an artist?

David Revoy and Sara Tepes. They inspire and help me a lot with their tutorials.

How and when did you get to try digital painting for the first time?

I started in 2014-2015. I used the traditional method (with pencil and paper) and traced it with Inkscape.

Then in 2016 I tried digital painting for the first time, because I had just bought my first graphics tablet.

What makes you choose digital over traditional painting?

I am not in a position where I have to choose between digital and traditional painting, because if there were a teacher, I would really need to learn both. The reason I am doing digital painting at this moment is that I can find tutorials everywhere on the internet, and it’s easy to practice because I have the monitor and the digital pen for digital painting. Besides, I am working as a freelance artist and every client asks for digital raw files.

How did you find out about Krita?

2014 is the year I first started to try Linux on my laptop, and then I learned that Windows programs don’t run perfectly on Linux, even when using Wine. My curiosity about Linux and alternative programs led me to Krita. The more time I spent with Linux, the more I fell in love with it. And finally I thought, “I’ll choose Linux as the single OS on my laptop and Krita as my digital painting program for work, someday, after I get my first graphics tablet.”

What was your first impression?

The first version of Krita that I tried was 2.9. I followed David Revoy’s blog and YouTube channel to learn from his tutorials about Krita. At the time I thought that Krita was very laggy with bigger canvas sizes, but I still used it as my favorite digital painting program because I thought it was powerful.

What do you love about Krita?

Powerful and Free Open Source Software.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I hope Krita will not use so much processor and RAM in the future; it always force-closes when I use many layers on a bigger canvas.

What sets Krita apart from the other tools that you use?

I’ve never used other painting programs for long, except Krita. So I don’t know; Krita has unique features. For example, other programs have something called a “clipping layer”, but Krita calls it “inherit alpha”.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Here’s my character, who is called Seira. She is still my favourite, because this is the first time I didn’t pick any palette colors from the internet and I started to understand light sources in digital painting.

What techniques and brushes did you use in it?

a. Sketch

I usually begin with a stick figure; at this point I think about the pose and the composition. Then I start to draw the naked body from the stick figure. That’s the part where I have to get the anatomy right. After that, I start to draw the character’s costume. I am not really good at line art, so I leave it as a sketch, because for me it’s just a guide for the next step.

b. Coloring

This is a very important part for me because I need to be clear with the shading, texture of the material, value, etc. For the brushes, the Deevad bundle and the Krita default brush bundle are more than enough.

Where can people see more of your work?

www.aflasio.com

Anything else you’d like to share?

I just want to give a message to the people who see this: “It’s better to use FLOSS (Free/Libre Open Source Software), like Krita for example, than to use proprietary software (and even worse if you use cracked software). It’s bad. Really…”

How not to parent

First few pages of parenting advice book: “Don’t invalidate your child’s feelings.”

My child: “I don’t like this casserole.”

Me: “Yes you do!”

Introducing Blender Community Badges

Today the Blender Foundation introduces Community Badges: a new way for Blender users to display their role and involvement in the online community.

Badges are assigned to users who take part in official blender.org initiatives (see the list below) and are managed via Blender ID. This means that every website providing Blender ID authentication will be able to show the badges associated with each user.

[Image: the upcoming Blender Community Badges]

Available Community Badges

Websites supporting Blender Community Badges

You can check all your badges on your Blender ID account.

[Image: Blender Community Badges overview in a Blender ID account]

We believe this will help shape online conversations and give long-time contributors and community members the possibility to proudly show their status.

And so the Fundraiser Ends

Yesterday was the last day of the developers sprint^Wmarathon, and the last day of the fundraiser. We’re all good and knackered here, but the fundraiser ended at a very respectable 26,426 euros! That’s really awesome, thanks everybody!

We’ve already updated the about box for Krita 4.2 with the names or nicknames of everyone who wanted to be mentioned, and here is the final vote tally:

1 – Papercuts 202
2 – Brush Engine 132
3 – Animation 128
6 – Vector Objects and Tools 73
5 – Layers 59
7 – Text 48
10 – Photoshop layer styles 43
4 – Color Management 29
9 – Resource Management and Tagging 20
8 – Shortcuts and Canvas Input 18

Now we’re going to take a short break, and then it’s roll up our sleeves and get to work!

October 14, 2018

The Last Day of the Krita Sprint and the Last Day of the Krita Fundraiser

We fully intended to make a post every day to keep everyone posted on what’s happening here in sunny Deventer, the Netherlands.

And that failed because we were just too busy! I’m triaging bugs even as I’m working on this post, too! We are on track to have fixed about a hundred bugs during this sprint. On the other hand, the act of fixing bugs seems to invite people to test, and then to report bugs, so in the past ten days, there were fifty new reports. That all makes for a very hectic sprint indeed!

All through the week, we all touched many different parts of Krita, sharing knowledge and making every one of us more all-round when it comes to Krita’s codebase. Remember, people have been working on this code since 1999. Nobody back then expected we’d have millions of users one day!

At the Krita headquarters one usually cooks for two or three; cooking for eight was a bit of a challenge, but cook we did, for Wolthera, Irina, Boudewijn, Dmitry, Ivan, Jouni, Emmet and Eoin. Let’s go through the days, menus and bugs!

Saturday: Minestrone

On Saturday, we merged Michael Zhou’s Summer of Code work on improving the palette docker and making it possible to save palettes in your .kra project file. Of course, that meant that all through the week we had to fix issues here and there caused by this merge, but that was to be expected. All through the week, using the nightly builds must have been a roller-coaster experience for our testers!

And we got a lot more done, too: Jouni demonstrated his clone frames and frame cycles feature, and Eoin added a global kinetic scrolling feature, so you can pan pretty much every list with the middle mouse button. That makes Krita much nicer to use on touchscreens.

Sunday: Pasta with tomato sauce

On Sunday, we really got into our stride. Wolthera updated the user manual with all the new features — if you haven’t checked out the manual, do so, we’re quite proud of it! Emmet fixed the color picker tool, which had a bit of trouble showing the correct value for the alpha channel. There were smaller and larger bug fixes, but a very nice new feature was added as well: new contributor Reptorian, after coding a bunch of new blending modes, sank his teeth into a larger feature: a symmetric difference mode for the selection tools.

This is not the first time that someone with little or no background in C++ has done really useful work for Krita. In fact, it happens all the time. This week we worked together with Alberto Flores, who was working on his very first patch. And he did make it!

Monday: Moussaka

On Monday, we fixed a nasty little bug where the order in which you select layers in the layerbox would influence the order in which layers would be merged. There were fixes to the animation timeline.

The big feature of Monday was making it possible to import SVG files as a vector layer. Originally, silly as it sounds, an SVG file would be imported as a raster layer when using the layer/import image as layer function.

Dmitry started working on making Krita’s canvas support display scaling properly: this work would go on all through the week.

Jouni started working on fixing the sound stuttering when playing an animation. That patch is ready to land, and when we released on Thursday, it turned out that this was the thing we got the most questions about.

And we fixed bugs…

Tuesday: Honey-braised pork, sesame beans and cabbage in oyster sauce

On Tuesday we did a bunch of bug triaging and assigned bugs to the people present. Then there were crash fixes, Python fixes, UI polish… And we also looked into what would be needed to support HDR monitors, but that’s going to be a long slog!

Wednesday: Stuffed bell peppers

On Wednesday, Dmitry refined the way users can modify and work with selections: now one should hover over the selection border to start moving a selection. A lot of bugs and crashes were fixed all day long, but we also started preparing for the release itself, by backporting all the fixes to the 4.1 branch.

Thursday: Dining out at Da Mario, and a release

This was the day we had to redo the release three times! But we did release in the end, with almost fifty bugs fixed.

But there was also fixing being done!

Eoin fixed a bug in the animation curve editor and bugs in the bristle and smudge brushes. Wolthera fixed a regression in the color selector. Turns out that if you replace a division by two with a multiplication by 0.5, you shouldn’t keep dividing it…

Dmitry also pushed the final set of fixes for using display scaling correctly in Krita. That’s another very, very long-standing bug finally fixed!

And it’s not just people at the sprint who are busy! Mehmet Salih Çalışkan submitted two patches, one for the text tool, one for the brush editor, and today his patches were pushed. Anna Medonosva was fixing issues with the artistic color selector.

In the afternoon, we went out for a walk and met the Deventer sheep herd, as well as the shepherd.

That evening, none of us cooked, we went out to Da Mario instead to have dinner together with one of the more local supporters of Krita.

Friday: Ajam pedis and sambal goreng beans

Emmet fixed a really hard bug where, when smudging, black would sometimes get mixed into the color. We’re pretty sure there are more bugs like this in the database, but we’d need some testing done to see which bugs those are. It’s one thing that makes the bug database such a mess: we have lots of duplicates, but often the fact that they’re duplicates isn’t apparent at all.

Jouni fixed issues with the G’Mic integration. Boudewijn looked into the problem of saving a group layer with a transparency mask to OpenRaster: that needs an update to the OpenRaster standard, though. But he didn’t waste the entire day; he also fixed loading line-end markers for vector lines on Windows and macOS. Ivan fixed macOS-specific issues in the animation timeline.

Later in the afternoon we discussed options for reducing the mess in Bugzilla a bit by using a helpdesk system and/or a question-and-answer type site. Bugzilla really should only contain bugs, not user support questions!

Saturday: Runner beans and meatballs with mint sauce

Saturday was our day off. All work and no play and all that sort of thing. We visited a nearby town, called Zwolle. It’s a pretty place, a little larger than Deventer, and it has kept one of its town gates, the Sassenpoort. Previously used as the town’s archive, it’s now and then open to the public, and well worth a visit. Zwolle has been barbaric enough to close its local history museum because it wasn’t turning a profit, but there’s still the Fundatie, a modern art museum, which was showing a smallish exhibition of works by sculptors Giacometti and Chadwick.

And in the evening we went back to hacking. Jouni is this close to fixing audio playback for animations: in fact, it’s ready to be merged!

Sunday: Black pudding with beetroot and stewed pears

And today, while Jouni has already gone back to Finland, we’re back at fixing bugs! But this was one loooong sprint, more of a marathon, and we’re getting a bit frazzled now. Still, there are more bugs to be fixed!

The End

Tomorrow, our marathon coders will wend their way homewards. The fundraiser will end, at the very great amount of 25,000 euros (it’s actually more, because people have also been donating directly through Krita’s bank account, and the website doesn’t count that). And we will be fixing bugs, and working on achieving that polish and stability that makes all the difference!

Tape Rabbit

[Packing tape rabbit] I had to mail a package recently, and finished up a roll of packing tape.

I hadn't realized before I removed the tape roll from its built-in dispenser that packing tape was dispensed by rabbits.

October 11, 2018

Krita 4.1.5 Released

Coming hot on the heels of Krita 4.1.3, which had an unfortunate accident with the TIFF plugin, we’re releasing Krita 4.1.5 today! There’s a lot more than just that fix, though, since we’re currently celebrating the last week of the Krita fundraiser by having a very productive development sprint in Deventer, the Netherlands.

There are some nice highlights as well, like much improved support for scaling on hi-dpi or retina screens. But here’s a full list of more than fifty fixes:

  • Associate Krita with .ico files
  • Auto-update the device pixel ratio when changing screens
  • Disable autoscrolling for the pan tool
  • Disable drag & drop in the recent documents list BUG:399397
  • Disable zoom-in/out actions when editing text in rich-text mode BUG:399157
  • Do not add template files to recent documents list BUG:398877
  • Do not allow creating masks on locked layers. BUG:399145
  • Do not close the settings dialog when pressing enter while searching for a shortcut BUG:399116
  • Fill polyline shapes if some fill style was selected BUG:399135
  • Fix Tangent Normal paintop to work with 16 and 32 bit floating point images BUG:398826
  • Fix a blending issue with the color picker when picking a color for the first time BUG:394399
  • Fix a problem with namespaces when loading SVG
  • Fix an assert when right-clicking the animation timeline BUG:399435
  • Fix autohiding the color selector popup
  • Fix canvas scaling in hidpi mode BUG:360541
  • Fix deleting canvas input settings shortcuts BUG:385662
  • Fix loading multiline text with extra mark-up BUG:399227
  • Fix loading of special unicode whitespace characters BUG:392710
  • Fix loading the alpha channel from Photoshop TIFF files BUG:376950
  • Fix missing shortcut from Fill Tool tooltip. BUG:399111
  • Fix projection update after undoing create layer BUG:399575
  • Fix saving layer lock, alpha lock and alpha inheritance. BUG:399513
  • Fix saving the location of audio source files in .kra files
  • Fix selections and transform tool overlay when Mirror Axis is active BUG:395222
  • Fix setting active session after creating a new one
  • Fix showing the color selector popup in hover mode
  • Fix the ctrl-w close window shortcut on Windows BUG:399339
  • Fix the overview docker BUG:396922, BUG:384033
  • Fix the shift-I color selector shortcut
  • Fix unsuccessful restoring of a session blocking Krita from closing BUG:399203
  • Import SVG files as vector layers instead of pixel layers BUG:399166
  • Improve spacing between canvas input setting categories
  • Make Krita merge layers correctly if the order of selecting layers is not top-down. BUG:399146
  • Make it possible to select text with the SVG text tool after it has been moved inside a hidden group and then made visible again BUG:395412
  • Make the color picker pick the alpha channel value correctly. BUG:399169
  • Prevent opening filter dialogs on non-editable layers. BUG:398915
  • Reset the brush preset selection docker on creating a new document BUG:399340
  • Support fractional display scaling factors
  • Update color history after fill BUG:379199

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.1.5 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here:
0x58b9596c722ea3bd.asc
. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

October 10, 2018

Great little improvements

There are relatively few great little improvements to desktop computing interfaces in recent years. One that I’ve particularly enjoyed is the searchable menus in macOS.

In any macOS app, open the Help menu and start typing. For example, if you want the Connect to Server… feature in the Finder app, hit Help, start typing “conn…” and use the arrow keys and Enter key to choose the menu item.

This is particularly useful in complex applications, like Photoshop, that can have hundreds of nested menu items.

Searchable menus in macOS

Update: As Antonio points out in the comments, there’s a keyboard shortcut for this (Shift–Command–?). I never knew! Thanks, Antonio.

October 08, 2018

Krita October Sprint Day 1

On Saturday, the first people started to arrive for this autumn’s Krita development sprint. It’s also the last week of the fundraiser: we’re almost at 5 months of bug fixing funded! All in all, 8 people are here: Boudewijn, the maintainer; Dmitry, whose work is being sponsored by the Krita Foundation through this fundraiser; Wolthera, who works on the manual, videos, code and scripting; Ivan, who did the brush vectorization Google Summer of Code project this year; Jouni, who implemented the animation plugin, session management and the reference images tool; and Emmet and Eoin, who started hacking on Krita a short while ago and have worked on the blending color picker and kinetic scrolling.

We already did a ton of work! Wolthera finished up the last few problems in Michael Zhou’s Google Summer of Code rewrite of the palette docker: that’s merged to master, so it’s in the nightly builds for Windows and Linux. We did some pair programming, so the text tool now creates new text with the currently selected color.

Jouni got a long way with the implementation of animation clones and cycles: that is, a set of frames can now be “cloned” to appear in several places in your animation:

Then we sat down and distributed bugs to the hackers present, and we got rid of a lot of bugs already (total bugs, new reports, closed, balance):

Streaming

We’re going to continue to fix bugs for the rest of the week, of course! And we did some experimentation with streaming to Twitch, so tomorrow afternoon, CEST, we’ll do a live stream of bug fixing on https://www.twitch.tv/artwithkrita! We’ll also be answering questions, so if you want to discuss a particular bug with us, join in!

Voting

We’ve also got the updated vote tally for you:

1 – Papercuts 164
2 – Brush Engine 103
3 – Animation 88
6 – Vector Objects and Tools 56
5 – Layers 51
7 – Text 36
10 – Photoshop layer styles 28
4 – Color Management 21
9 – Resource Management and Tagging 18
8 – Shortcuts and Canvas Input 12

The only real change is that Resource Management has now dropped below Color Management; for the rest, the order is pretty stable.

And a bonus video

In case you missed it, Wolthera made a cool video showing off gamut masks and the new palette docker, created by two new Krita contributors:

And…

And it’s great to be together, of course! We’ve got people from the US, from Mexico, Russia, Finland and the Netherlands. For three of us, it’s the first Krita sprint they’ve attended. Here are the early birds who were already happily hacking on Sunday morning, without even waiting until after breakfast!

October 07, 2018

Hot tubs, time machine

As always, the latest episode of Heavyweight made me laugh out loud and feel feelings. My favorite detail is host Jonathan Goldstein’s pretentious pluralisation of the “Hot Tub Time Machine” movies as Hot Tubs Time Machine.

Yes Jonathan, we noticed.

Recommended particularly for those who remember and miss the great CBC Radio show, Wiretap.

October 04, 2018

Announcing the first release of libxmlb

Today I did the first 0.1.0 preview release of libxmlb. We’re at the “probably API stable, but no promises” stage. This is the library I introduced a couple of weeks ago, and since then I’ve been porting both fwupd and gnome-software to use it. The former is almost complete and nearly ready to merge, but the latter is still work in progress, with a fair bit of code to write. I did manage to launch gnome-software with libxmlb yesterday, and modulo a bit of brokenness it’s both faster to start (over 800ms faster from cold boot!) and uses an amazing 90MB less RSS at runtime. I’m planning to merge the libxmlb branch into the unstable branch of fwupd in the next few weeks, so I need volunteers to package up the new hard dep for Debian, Ubuntu and Arch.

The tarball is in the usual place – it’s a simple Meson-built library that doesn’t do anything crazy. I’ve imported and built it already for Fedora; many thanks to Kalev for the super speedy package review.

I guess I should explain how applications are expected to use this library. At its core, there are just five different kinds of objects you need to care about:

  • XbSilo – a deduplicated string pool and a read-only node tree. This is typically kept alive for the application lifetime.
  • XbNode – a GObject-wrapped immutable node, available as a query result from an XbSilo.
  • XbBuilder – a “compiler” that builds an XbSilo from XbBuilderNodes or XbBuilderSources. This is typically created and destroyed at startup, or when the blob needs regenerating.
  • XbBuilderNode – a mutable node that can have a parent, children, attributes and a value.
  • XbBuilderSource – a source of data for XbBuilder, e.g. a .xml.gz file or just a raw XML string.

The way most applications will use libxmlb is to create a local XbBuilder instance, add some XbBuilderSources and then “ensure” a local cache file. The “ensure” process either mmap-loads the binary blob, if all the file mtimes are the same, or compiles a blob, saving it to a new file. You can also tell the XbSilo to watch all the sources that it was built with, so that if any files change at runtime the valid property gets set to FALSE and the application can call xb_builder_ensure() again at a convenient time.
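In code, that pattern might look something like the minimal sketch below. The file paths are made up, and the function and flag names are taken from the current libxmlb documentation, so the 0.1.0 preview may differ slightly:

    #include <gio/gio.h>
    #include <xmlb.h>

    /* Compile (or mmap-load) a cached binary blob from one XML source.
     * Paths are hypothetical; error handling follows normal GLib style. */
    static XbSilo *
    load_silo (GError **error)
    {
        g_autoptr(XbBuilder) builder = xb_builder_new ();
        g_autoptr(XbBuilderSource) source = xb_builder_source_new ();
        g_autoptr(GFile) file_xml = g_file_new_for_path ("/usr/share/app-info/components.xml.gz");
        g_autoptr(GFile) file_blob = g_file_new_for_path ("/var/cache/myapp/components.xmlb");

        /* a .xml.gz source is decompressed transparently */
        if (!xb_builder_source_load_file (source, file_xml,
                                          XB_BUILDER_SOURCE_FLAG_NONE,
                                          NULL, error))
            return NULL;
        xb_builder_import_source (builder, source);

        /* mmap the existing blob if the source mtimes are unchanged,
         * otherwise recompile and save it to a new file */
        return xb_builder_ensure (builder, file_blob,
                                  XB_BUILDER_COMPILE_FLAG_NONE,
                                  NULL, error);
    }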

Once the XbBuilder has been compiled, an XbSilo pops out. With the XbSilo you can query using most common XPath statements – I actually ended up implementing a FORTH-style stack interpreter, so we can now do queries like /components/component/id[contains(upper-case(text()),'GIMP')] – I’ll say a bit more on that in a minute. Queries can limit the number of results (for speed), and results are deduplicated in a sane way, so it’s really quite a simple process to achieve something that would otherwise be a lot of C code. It’s possible to directly query an attribute or text value from a node, so the silo doesn’t have to be passed around either.
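Consuming query results might look like the following sketch, continuing from the snippet above. I’ve written the XPath without the leading slash on the assumption that queries are relative to the silo root, and the limit of 0 is assumed to mean “unlimited”:

    /* Find all component ids that contain "GIMP", case-insensitively. */
    g_autoptr(GError) error = NULL;
    g_autoptr(GPtrArray) results =
        xb_silo_query (silo,
                       "components/component/id[contains(upper-case(text()),'GIMP')]",
                       0, &error);
    if (results == NULL) {
        g_printerr ("query failed: %s\n", error->message);
    } else {
        for (guint i = 0; i < results->len; i++) {
            XbNode *id = g_ptr_array_index (results, i);
            g_print ("matched: %s\n", xb_node_get_text (id));
        }
    }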

In the process of porting gnome-software, I had to make libxmlb thread-safe – which required some internal reorganisation. We now have a non-exported XbMachine stack interpreter, and the XbSilo registers the XML-specific methods (like contains()) and functions (like ~=). These get passed some per-method user data, and also some per-query private data that is shared with the node tree – allowing things like [last()] and position()=3 to work. The function callbacks just get passed a query-specific stack, which means you can allow things like comparing “1” to 1.00f. This makes it easy to support more of XPath in the future, or to support something completely application-specific like gnome-software-search() without editing the library.

If anyone wants to do any API or code review I’d be super happy to answer any questions. Coverity and valgrind seem happy enough with all the self tests, but that’s no replacement for a human asking tricky questions. Thanks!

Looking forward to Krita 4.2!

Everyone is hard at work, and what will become Krita 4.2 is taking shape already. Today we’re presenting a preview of Krita 4.2. It’s not complete yet, and there ARE bugs. More than in the stable release (we’ll be doing a 4.1.4 after all next week to clear up some more bugs…), and some might make you lose work.

Support Krita! Join the 2018 Fundraiser!

But experiment with it, test it, check it out! There are lots of new things in there, and we’d love your feedback! There are also quite a few things we’re working on right now that aren’t in this preview but will be in 4.2. And we’ll talk about those, too!

What’s in there already

Masks and Selections. We’ve done a lot of work on improving working with masks and selections. Some of that work has already landed in Krita 4.1.3, but there’s more to come, ranging from making it easier to paint selection masks, to a new way of moving and transforming selections, to performance improvements all round. The “select opaque” function has received new options.

Gamut masks. A much-demanded new feature, gamut masks mask out part of the color selector. A technique described by James Gurney, this helps you to use color in a harmonious way. You can create new masks and edit existing masks right inside Krita. The masks now work with both the artistic and the advanced color selector. Rotating masks over the color selector is possible as well!

Improved performance. We’re always working to make Krita perform better, and there will always be room for improvement. This preview contains Andrey Kamakin’s Google Summer of Code work on the very core of Krita: the tile engine. That is, the bits where all your layers’ pixels are handled. There’s still some fixing to do here, so be warned that if you paint with really big brushes, you may experience crashes. At the same time, this preview also contains more of Ivan Yossi’s Summer of Code work: the creation of brush masks now uses your CPU’s vectorization instructions, and that also increases performance! Dmitry also worked hard on improving the performance of the Layer Styles feature, especially the Stroke layer style; the rendering became more correct at the same time. And fill layers have become much faster, too!

Keep up to date! Only on Linux for now, since we’re still working on setting up the necessary libraries for encryption on Windows and macOS. The welcome screen can now show the latest news about Krita. It’s off by default, since to bring you the news we have to connect to the Krita website.

Colored Assistants. It’s now possible to give your painting assistants individual colors, and that color is saved and restored when you save and load your .kra project file.

Activate transform tool on pasting. When pasting something in Krita a new layer is created. Most of the time you’ll want to move the pasted layer around or transform it. There’s now a setting in the preferences dialog that, when checked, will make Krita automatically select the transform tool.

Improved move tool: undo and redo with the move tool was always a bit wonky… Now it works as expected. Not a big thing, but it should help with everyone’s workflow.

A smoother UI: 4.2 will have lots of small fixes to improve your workflow. That ranges from making it possible to resize the thumbnails in the layer docker to improved interaction with color palettes to making it possible to translate plugins written in Python. There are also new blending modes, with more coming, and the G’Mic plugin has been updated to the latest version.

Lots of bug fixes. We’re already at nearly 200 bug fixes for 4.2, and that number will only increase.

What we’re working on

But we’re not done yet! We intend to release Krita 4.2 this year, in December, but we haven’t gone into feature freeze. This is a little taste of what may still be coming in 4.2!

This isn’t in the preview yet, but here are a couple of things that our UX expert, Scott, is working on. First, the brush editor is being redesigned. As you can see from the video, it’s being condensed, and that’s because we also want to make it possible to dock it as a panel in one of the dock areas of Krita, or have it floating free, possibly on another monitor.

Then, the text tool‘s UI is being revamped. While Dmitry is working on making the text tool more reliable, Scott is working on making it nicer to use:

Michael’s Summer of Code work hasn’t been merged yet, but we fully intend to do that before the release. This will improve the stability of working with color palettes and make it possible to save palettes in your .kra Krita project file.

Another area where work is ongoing is resource management. That is, loading, working with, tagging, saving and sharing things like brush presets, brush tips, gradients and patterns. This is a big project, but when it’s done, Krita will start faster, use less memory, and a lot of little niggles and bugs with handling resources will be gone.

Downloads

Have fun with the 4.2 preview!

Download

Windows

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

October 02, 2018

FreeCAD BIM development news - September 2018

Hi folks, Time for one more of our monthly posts about the development of BIM tools for FreeCAD. This month unfortunately, since I was taking some holiday, travelling (and sketching) for the biggest part of the month, I have less new stuff than usual to show. To compensate, I tried a longer and more detailed...

October 01, 2018

Updated Vote Tally!

And here’s an update on voting! We’re almost at 15,000 euros now, or more than four months of solid work. And this is what it looks like you want us to work on!

1 – Papercuts 139
2 – Brush Engines 82
3 – Animation 68
6 – Vector Objects and Tools 44
5 – Layers 42
7 – Text 31
10 – Photoshop layer styles 21
9 – Resource Management and Tagging 16
4 – Color Management 15
8 – Shortcuts and Canvas Input 10

Last time we tallied the votes, the papercuts were on top as well – probably because it’s the default choice, but still. Today, Animation is in third place and brush engines in second, but Layers has dropped one place, in favour of Vector Objects and Tools. And where Resource Management and Tagging was at the bottom last time, this time it has climbed up to eighth place.

This week, we also plan to bring out a preview release of Krita 4.2. We don’t have everything in it that we want yet (like the updated resource handling), but there’s already plenty to play with!

Interview with João Garcia

Could you tell us something about yourself?

My name is João Garcia and I’m an illustrator hailing from Brazil, more specifically, from the city of Florianópolis in the southern part of the country. I graduated in Design at the Universidade Federal de Santa Catarina with a focus on illustration and animation.

Do you paint professionally, as a hobby artist, or both?

I’d say both. I work mainly with game art but I like to explore new ideas with illustration in my spare time.

What genre(s) do you work in?

I don’t think I have a set genre I exclusively work in, but I tend toward more modern stuff involving technology or the future. I had a cyberpunk streak a couple of months ago hahah.

I rarely do fantasy or medieval works, but, hey, maybe in the future I’ll work in that genre.

Whose work inspires you most — who are your role models as an artist?

Thanks to Instagram and ArtStation I just can’t stop finding new inspiring art. At the moment my inspirations are: Jacob Hankinson, Ashley Wood, Kudaman, Faraz Shanyar, Ilya Kuvshinov, Anthony MacBain and many more.

How and when did you get to try digital painting for the first time?

I first tried my hand at digital art when I entered college, around 2009. There was this shiny thing called a tablet that I didn’t know existed, but I knew I immediately wanted to use it.

What makes you choose digital over traditional painting?

The possibilities. That being said I learned tons from practicing the traditional way.

How did you find out about Krita?

I simply wanted to find alternatives to Photoshop, to be honest. I tried GIMP first and it didn’t suit my workflow and then, luckily, I found Krita.

What was your first impression?

It took a day or two to get used to the software, but I quickly got the hang of it.

What do you love about Krita?

The fact that it is specifically made for digital artists. It becomes so intuitive after a while. That, its selection of brushes, and, well, the fact that it’s open source. It amazes me how much Krita suits digital art.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Maybe the fact that it seems to stutter with high resolution files. Or that may just be my not-so-great notebook.

What sets Krita apart from the other tools that you use?

As I said before, the fact that it is made exclusively with the digital artist in mind. I mean, that new reference image tool is great!

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

“Museu Nacional”. It was a way of expressing my sadness at losing such a national treasure.

What techniques and brushes did you use in it?

Sketching 2 Chrome Large, Bristles 2 Flat Rough, Watercolor Texture, Airbrush Soft and Blender Rake, basically.

Where can people see more of your work?

I’m at ArtStation, Instagram and Mastodon as @joaogarciart. Also, my website: https://joaogarcia.art

Anything else you’d like to share?

Yeah. Krita developers and the whole team: Please keep up your great work!

September 29, 2018

Armband Exxess [sic] Max

Here’s a stupid product idea for you.

It’s a case for the iPhone XS Max that is compatible with two Apple Watch wristbands, so you can wear your phone strapped to your forearm.

I figure it could be at least $299, since you’ve already had to spend at least $1,197 and as much as $2,577 (with options maxed) on your phone and two wristbands.

I’d call it the Armband Exxess [sic] Max for iPhone XS Max. If it existed.

Here’s an artist’s rendering:

September 27, 2018

Reluctant heroism

What makes the testimony of a survivor of sexual assault so heroic is that they shouldn’t have had to be a hero. As a society, we owe a great debt.

Krita 4.1.3 Released

Today we’re releasing the latest version of Krita! In the middle of our 2018 fundraiser campaign, we’ve found the time to prepare Krita 4.1.3. There are about a hundred fixes, so it’s a pretty important release and we urge everyone to update! Please join the 2018 fundraiser as well, so we can continue to fix bugs!

Now you might be wondering where Krita 4.1.2 went… The answer is that we had 4.1.2 prepared, but the Windows builds were broken because of a change in the build infrastructure. While we were fixing that, Dmitry made a couple of bug fixes we really wanted to release immediately, like an issue where multiline centered text would be squashed into a single line when you edited the text. So we went and created a new version, 4.1.3!

Krita 4.1.3 is a bug fix release, so that’s the most important thing, but there are also some new things as well.

The first of these is the new welcome screen, by Scott Petrovic. You get some handy links, a list of recently used files, a link to create or open a file and a hint that you can also drag and drop images in the empty window to open them.

Dmitry Kazakov has worked like crazy fixing bugs and improving Krita in the past couple of weeks. One of the things he did was improve Instant Preview mode. Originally funded by our 2015 Kickstarter, Instant Preview works by computing a scaled-down version of the image and displaying that. But with some brushes, it would cause a little delay at the end of a stroke, or some flickering on the canvas: BUG:361448. That’s fixed now, and painting really feels smoother! And for added smoothness, most of Ivan Yossi’s Google Summer of Code work is also included in this release.

We’ve also done work on improving working with selections. Krita’s selections can be defined as vectors or as a pixel mask. If you’re working on a vector selection, the figure tools, like rectangle and ellipse, now add a vector to the selection instead of rasterizing it.

The move tool has been improved so that it’s possible to undo the steps you’ve made with the move tool, instead of undo immediately putting the layer back where it originally came from. See BUG:392014.

The bezier curve tools have been improved: there is now an auto-smoothing option. If you select auto-smoothing, the created curve will not be a polygon, but a smooth curve with the type of all the points set to “smooth”. BUG:351787

The final new feature is rounded corners for the rectangle tool. Whether you’re working on a pixel or a vector layer, you can set rounded corners for the resulting shape. BUG:335568. You could, of course, already round vector rectangles by editing the shape, but this is easier.

The Comics Project manager, a Python plugin created by Wolthera van Hövell tot Westerflier, has seen a ton of improvements, especially when it comes to generating standard-compliant EPUB and ACBF files. On a related note, check out Peruse, the KDE comic book reader. It’s a long list of improvements:

  • Add improved navigation to generated epubs. This adds…
    • Region navigation for panels and balloons, as per epub spec.
    • Navigation that uses the “acbf_title” keyword to create a TOC in both nav.xhtml and ncx
    • A Pagelist in both nav.xhtml and ncx.
  • Ensure generated EPUBs pass EPUB check validation. This involved ensuring that the mimetype gets added first to the zip, as well as some fixes with the author metadata and the NCX pagelist.
  • Fix language nonsense.
  • Fix several issues with the EPUB metadata export.
    • Add MARC-relators for use with the ‘refines’.
    • Add UUID sharing between acbf and epub.
    • Add proper modified-date handling.
  • Implement “epub_spread”, the primary color ahl meta and more. This also…
    • Makes the balloon localisation more robust.
    • Names all balloons “text-areas”, as that is a bit more accurate
    • Set a sequence on author
    • Adds a ton of documentation everywhere.
  • Make the generated EPUB 3 files pre-paginated. This’ll allow comics to be rendered as part of a spread which should have a nice result.
  • Move EPUB to use QDomDocument for generation, and split out ncx/opf. This is necessary so we have nicer-looking XML files, as well as having a bit more room to do proper generation for EPUB 2/3/3+.
  • Update ComicBookInfo and ComicRack generators.

And here’s the full list of fixed bugs:

Animation

  • Add a workaround for loading broken files with negative frame ids. BUG:395378
  • Delete existing frame files only within exported range BUG:377354
  • Fix a problem with the Insert Hold Frames action: we should also “offset” empty cells to make sure the expanding works correctly. BUG:396848
  • Fix an assert when trying to export a PNG image sequence BUG:398608
  • Fix updates when switching frame on a layer with scalar channel
  • Use user-selected color label for the auto-created animation frames BUG:394072
  • Save the multiple frame insertion/removal options to the config

Improvements to support for various file formats

  • Fix an assert if an imported SVG file links to non-existent color profile BUG:398576
  • Fix backward compatibility of adjustment curves. Older versions supported fewer adjustable channels, so we can no longer assume the count in the configuration data matches exactly. BUG:396625
  • Fix saving layers with layer styles BUG:396224
  • Let Krita save all kinds of layers into PSD (in a rasterized way) BUG:399002
  • PNG Export: convert to rgb, if the image isn’t rgb or gray BUG:398241
  • Remove fax-related tiff options. In fax mode tiff can store only 1 bit per channel images, which Krita doesn’t support. So just remove these options from the GUI BUG:398548

Filters

  • Add a shortcut for the threshold filter BUG:383818
  • Fix Burn filter to work in 16-bit color space BUG:387102
  • Make color difference threshold for color labels higher
  • Restore the shortcut for the invert filters.

Painting

  • Remove hardcoded brush size limit for the Quick Brush BUG:376085
  • Fix rotation direction when the transformed piece is mirrored. BUG:398928
  • Make Stamp brush preview be scaled correctly CCBUG:399065

Tablets

  • Add a workaround for tablets not reporting tablet events in hover mode BUG:363284

Text

  • Do not reset text style when selecting something in text editor
  • Fix saving line breaks when the text is not left aligned BUG:395769

Reference images tool

  • Fix reference image cache update conditions BUG:397208

Build system

  • Fix build with dcraw 0.19

Canvas

  • Disable pixel grid action if opengl is disabled BUG:388903 Patch by Shingo Ohtsuka, thanks!
  • Fix painting of selection decoration over grids BUG:362662

Fixes to Krita’s Core

  • Fix saving to a Dropbox or Google Drive folder on Windows: as a temporary workaround until QTBUG-57299 is fixed, QSaveFile is disabled on Windows.
  • Fix to/fromLab16/Rgb16 methods of the Alpha color space
  • Fix undo in the cloned KisDocument BUG:398730

Layers

  • Automatically avoid conflicts between color labels and system colors
  • Fix cursor jumps in the Layer Properties dialog BUG:398958
  • Fix resetting active tool when moving layers above vector layers BUG:398095
  • Fix selecting of the layer after undoing Flatten Image BUG:398814
  • Fix showing two nodes when converting to a Filter Mask: 1) when converting to a filter mask, we should first remove the source layer, and only after that show the filter selection dialog; 2) also make sure that the operation is rolled back when the user presses the Cancel button
  • Fix updates of Clone Layers when the nodes are updated with subtree walker
  • Fix a spurious assert in layer cloning BUG:398788

Metadata handling

  • Fix a memory access problem in KisExifIO
  • Fix memory access problems in KisExifIo
  • Show metadata in the Dublin Core page of the metadata editor. The editor plugin is still broken, with dates and bools not working, but now at least strings one has entered are shown again. CCBUG:396672

Python scripting

  • Fix a segfault in LibKis Node mergeDown BUG:397043
  • Fix the apidox for Node.position() BUG:393035
  • Add modified() getter to the Document class BUG:397320
  • Add resetCache() Python API to FileLayer BUG:398740
  • Fix memory management of the Filter’s InfoObject BUG:392183
  • Fix setting file path of the file layer through python API BUG:398740
  • Make sure we wait for the filter to be done

Resource handling

  • Fix saving a fixed bundle under the original name

Selections

  • Fix “stroke selection” to work with local selections BUG:398007
  • Fix a crash when moving a vector shape selection when it is an overlay
  • Fix crash when converting a shape selection shape into a shape selection
  • Fix crash when undoing removing of a selection mask
  • Fix rounded rectangular selection to actually work BUG:397806
  • Fix selection default bounds when loading old type of adjustment layers
  • Stroke Selection: don’t try to add a shape just because a layer doesn’t have a paint device BUG:398015

Other tools

  • Fix color picking from reference images. Desaturation now affects the picked color, and reference images are ignored for picking if hidden.
  • Fix connection points on cage transform BUG:396788
  • Fix minor UIX issues in the move tool: 1) adds an explicit frame when the move stroke is in progress; 2) Ctrl+Z now cancels the stroke if there is nothing to undo BUG:392014
  • Fix offset in Move Tool in the end of the drag
  • Fix the shift modifier in the Curve Selection Tool. The modifier of the last-clicked point should define the selection mode. For selection tools we just disable the shift+click “path-close” shortcut of the base path tool. BUG:397932
  • Move tool crops the bounding area by exact bounds
  • Reduce aliasing in reference images BUG:396257

Papercuts and UI issues

  • Add the default shortcut for the close action: when opening Krita with an image, the close document shortcut was not available.
  • FEATURE: Add a hidden config option to lock all dockers in place
  • Fix KMainWindow saving incorrect widget settings
  • Fix broken buddy: Scale to New Size’s Alt-F points to Alt-T BUG:396948
  • Fix the http link color in KritaBlender.colors: the links on Krita’s startup page were dark blue, the exact same value as the background, making them hard to read. Switching them to bright cyan improves the situation.
  • Fix loading the template icons
  • Fix the offset dialog giving inaccurate offsets. BUG:397218
  • Make color label selector handle mouse release events CCBUG:394072
  • Remember the last opened settings page in the preferences dialog
  • Remember the last used filter bookmark
  • Remove the shortcut for wraparound mode: It’s still available from the menu and could be put on the toolbar, or people could assign a shortcut, but having it on by default makes life too hard for people who are trying to support our users.
  • Remove the shortcuts from the mdi window’s system menu’s actions. The Close Window action can now have a custom shortcut, and there are no conflicts with hidden actions any more. BUG:398729 BUG:375524 BUG:352205
  • Set color scheme hint for compositor. This is picked up by KWin and sets the palette on the decoration and window frame, ensuring a unified look.
  • Show a canvas message when entering wraparound mode
  • Show the zoom widget when switching documents BUG:398099
  • Use KSqueezedTextLabel for the pattern name in the pattern docker and brush editor BUG:398958
  • Sort the colorspace depth combobox

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.1.3 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here:
0x58b9596c722ea3bd.asc
. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

September 26, 2018

Support Andrea Ferrero on Patreon!



Andrea is developing Photo Flow, GIMP AppImage, Hugin AppImage, and more!

Andrea Ferrero, or as we know him, Carmelo_DrRaw, has been contributing to the PIXLS.US community since April of 2015. A self-described developer and photography enthusiast, Andrea is the developer of the PhotoFlow image editor and produces AppImages for several projects (more on those below).

Andrea is the best sort of community member, contributing to six different projects (including his own)! He is always thoughtful in his responses, does his own support for PhotoFlow, and is kind and giving. He has finally started a Patreon page to support all of his hard work. Support him now!

He was also kind enough to answer a few questions for us:

PX: When did you get into photography? What’s your favorite subject matter?

AF: I think I was about 15 when I got my first reflex, and I was immediately fascinated by macro-photography. This is still what I like to do the most, together with taking pictures of my kids. ;-) By the way, you can visit my personal free web gallery on GitHub: http://aferrero2707.github.io/photorama/gallery/ (adapted from this project).

It is still a work in progress, but you are welcome to fork it and adapt it to your needs if you find it useful!

PX: What brought you to using and developing Free/Open Source Software?

AF: I started to get interested in programming when I was at the university, in the late 90’s. At that time I quickly realized that the easiest way to write and compile my code was to install Linux on my hard drive. Things were not as easy as today but I eventually managed to get it running, and the adventure began.

A bit later I started a scientific career (nothing related to image processing or photography, so I won’t bother you with more details about my daily job), and since then I have been a user of Linux-based computing clusters for almost 20 years at the time of writing… A large majority of the software tools I use at work are free and open source, and this has definitely marked my way of thinking and developing.

PX: What are some new/exciting features you develop in Photo Flow?

AF: Currently I am mostly focusing on HDR processing and high-quality Dynamic Range compression - what is also commonly called shadows/highlights compression.

More generally, there is still a lot of work to do on the performance side. The software is already usable and quite stable, but some of the image filters are still a bit too slow for real-time feedback, especially when combined.

The image export module is also still a work in progress. It is already possible to select either JPEG or TIFF (8-bit, 16-bit or 32-bit floating-point) as the output format, to resize the image and add some post-resize sharpening, and to select the output ICC profile. What is still missing is a real-time preview of the final result, with the possibility to soft-proof the output profile. The same options need to be included in the batch processor as well.

On a longer term, and if there is some interest from the community, I am thinking about porting the code to Android in a simplified form that would be suitable for tablets and the like. The small memory footprint of the program could be an important advantage on such systems.

PX: What other applications would you like to make an AppImage for? Have you explored Snaps or Flatpaks?

AF: I am currently developing and refining AppImage packages for GIMP, RawTherapee, LuminanceHDR and HDRMerge, in addition to PhotoFlow. All packages are automatically built and deployed through Travis CI, for better reproducibility and increased security. Hugin is the next application that I plan to package as an AppImage.

All the AppImage projects are freely available on GitHub. That’s also the best place for any feedback, bug report, or suggestion.

There is an ongoing discussion with the GIMP developers about the possibility to provide the AppImage as an official download.

In addition to the AppImage packages, I am also working with the RawTherapee developers on cross-compiled Windows packages that are also automatically built on Travis CI. The goal is to help them provide up-to-date packages from the main development branches, so that more users can test them and provide feedback.

I’m also open to any suggestions for additional programs that could be packaged as AppImages, so do not hesitate to express your wishes!

Personally I am a big fan of the AppImage idea, mostly because, unlike Snap or Flatpak packages, it is not bound to any specific distribution or run-time environment. The packager has full control over the contents of the AppImage package, pretty much like macOS bundles.

Moreover, I find the community of developers around the AppImage format very active and open-minded. I am currently collaborating to improve the packaging of GTK applications. For those who are interested in the details, the discussion can be followed here: https://github.com/linuxdeploy/linuxdeploy/issues/2

A human being finishing Super Mario Bros. in 4m 55s is the four-minute mile of our generation. What a time to be alive.

September 23, 2018

Writing Solar System Simulations with NAIF SPICE and SpiceyPy

Someone asked me about my Javascript Jupiter code, and whether it used PyEphem. It doesn't, of course, because it's Javascript, not Python (I wish there were something as easy as PyEphem for Javascript!); instead it uses code from the book Astronomical Formulae for Calculators by Jean Meeus. His better-known Astronomical Algorithms, intended for computers rather than calculators, is actually harder to use for programming, because its code is written for BASIC and the algorithms are relatively hard to translate into other languages. Astronomical Formulae for Calculators, in contrast, concentrates on explaining the algorithms clearly enough that you can punch them into a calculator by hand, and that ends up making them fairly easy to implement in a modern computer language as well.

Anyway, the person asking also mentioned JPL's page HORIZONS Ephemerides page, which I've certainly found useful at times. Years ago, I tried emailing the site maintainer asking if they might consider releasing the code as open source; it seemed like a reasonable request, given that it came from a government agency and didn't involve anything secret. But I never got an answer.

[SpiceyPy example: Cassini's position] But going to that page today, I find that code is now available! What's available is a massive toolkit called SPICE (it's all in capitals, but there's no indication what it might stand for; it comes from NAIF, NASA's Navigation and Ancillary Information Facility).

SPICE allows for accurate calculations of all sorts of solar system quantities, from the basic solar system bodies like planets to all of NASA's active and historical public missions. It has bindings for quite a few languages, including C. The official list doesn't include Python, but there's a third-party Python wrapper called SpiceyPy that works fine.

The tricky part of programming with SPICE is that most of the code is hidden away in "kernels" that are specific to the objects and quantities you're calculating. For any given program you'll probably need to download at least four "kernels", maybe more. That wouldn't be a problem except that there's not much help for figuring out which kernels you need and then finding them. There are lots of SPICE examples online but few of them tell you which kernels they need, let alone where to find them.

After wrestling with some of the examples, I learned some tricks for finding kernels, at least enough to get the basic examples working. I've collected what I've learned so far into a new GitHub repository: NAIF SPICE Examples. The README there explains what I know so far about getting kernels; as I learn more, I'll update it.
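
For instance, once you have the right kernels downloaded, a minimal SpiceyPy session looks something like the sketch below. The kernel filenames are examples (a generic leapseconds kernel and a planetary ephemeris); substitute whichever kernels you actually fetched.

import spiceypy as spice

# Load a leapseconds kernel and a planetary ephemeris kernel.
# These filenames are examples; use the kernels you downloaded.
spice.furnsh("naif0012.tls")
spice.furnsh("de432s.bsp")

# Convert a UTC date to ephemeris time (seconds past J2000).
et = spice.str2et("2018-09-23")

# Position of Jupiter's barycenter as seen from Earth in the J2000 frame,
# corrected for light travel time ("LT").
pos, light_time = spice.spkpos("JUPITER BARYCENTER", et, "J2000", "LT", "EARTH")
print(pos, light_time)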

SPICE isn't easy to use, but it's probably much more accurate than simpler code like PyEphem or my Meeus-based Javascript code, and it can calculate so many more objects. It's definitely something worth knowing about for anyone doing solar system simulations.

Urban sketching in Salvador

The drawings I did during the wonderful national encounter of urban sketchers in Salvador da Bahia, Brazil

September 20, 2018

Speeding up AppStream: mmap’ing XML using libxmlb

AppStream and the related AppData are XML formats that have been adopted by thousands of upstream projects and are being used in about a dozen different client programs. The AppStream metadata shipped in Fedora is currently a huge 13MB XML file, which with gzip compresses down to a more reasonable 3.6MB. AppStream is awesome; it provides translations of lots of useful data into basically all languages and includes screenshots for almost everything. GNOME Software is built around AppStream, and we even use a slightly extended version of the same XML format to ship firmware update metadata from the LVFS to fwupd.

XML does have two giant weaknesses. The first is that you have to decompress and then parse the files – which might include all the ~300 tiny AppData files as well as the distro-provided AppStream files, if you want to list installed applications not provided by the distro. Seeking lots of small files isn’t so slow on an SSD, and loading+decompressing a small file is actually quicker than loading an uncompressed larger file. The second is that parsing an XML file typically means you set up some callbacks, which then get called for every start tag, text section, and end tag – so for a 13MB XML document that’s nested very deeply you have to do a lot of callbacks. This means you have to process the description of GIMP in every language before you can even see if Shotwell exists at all.

The typical way of parsing XML involves creating a “node tree”. This lets you treat the XML document as a Document Object Model (DOM), navigating the tree and processing the contents in an object-oriented way. It also means you typically allocate the nodes themselves on the heap, plus copies of all the string data. AsNode in libappstream-glib has a few tricks to reduce RSS usage after parsing, which include:

  • Interning common element names like description, p, ul, li
  • Freeing all the nodes, but retaining all the node data
  • Ignoring node data for languages you don’t understand
  • Reference counting the strings from the nodes into the various appstream-glib GObjects

This still has drawbacks: we need to keep in hot memory all the screenshot URLs of apps you’re never going to search for, and we also need to parse all those long translated descriptions just to find out whether gimp.desktop is actually installable. Deduplicating strings at runtime takes a nontrivial amount of CPU, and means building a huge hash table that uses nearly as much RSS as the deduplication saves.

On a modern system, parsing ~300 files takes less than a second, and the total RSS is only a few tens of MB – which is fine, right? Except on resource-constrained machines it takes 20+ seconds to start, and 40MB is nearly 10% of the total memory available on the system. We have exactly the same problem with fwupd, where we get one giant file from the LVFS, all of which gets stored in RSS even though you may never have the hardware that it matches against. Slow starting of fwupd and gnome-software is one of the reasons they stay resident, and don’t shut down on idle and restart when required.

We can do better.

We do need to keep the source format, but that doesn’t mean we can’t create a managed cache to do some clever things. Traditionally I’ve been quite vocal against squashing structured XML data into databases like sqlite and Xapian, as it’s like pushing a square peg into a round hole and forces you to think like a database, doing 10-level nested joins to query some simple thing. What we want to use is something like XPath, where you can query data using the XML structure itself.

We also want to be able to navigate the XML document as if it were a DOM, i.e. be able to jump from one node to its sibling without parsing all the great-great-great-grandchild nodes to get there. This means storing the offset to the sibling in a binary file.

If we’re creating a cache, we might as well do the string deduplication at creation time once, rather than every time we load the data. This has the added benefit in that we’re converting the string data from variable length strings that you compare using strcmp() to quarks that you can compare just by checking two integers. This is much faster, as any SAT solver will tell you. If we’re storing a string table, we can also store the NUL byte. This seems wasteful at first, but has one huge advantage – you can mmap() the string table. In fact, you can mmap the entire cache. If you order the string table in a sensible way then you store all the related data in one block (e.g. the <id> values) so that you don’t jump all over the cache invalidating almost everything just for a common query. mmap’ing the strings means you can avoid strdup()ing every string just in case; in the case of memory pressure the kernel automatically reclaims the memory, and the next time automatically loads it from disk as required. It’s almost magic.
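
To make the deduplication idea concrete, here is a toy Python sketch of a NUL-separated string table where each unique string is stored once and referenced by an integer offset. This is only an illustration of the technique, not libxmlb's actual on-disk format:

def build_string_table(strings):
    """Deduplicate strings into one NUL-terminated blob; return (blob, offsets)."""
    blob = bytearray()
    offsets = {}
    for s in strings:
        if s not in offsets:
            # The offset acts like a quark: compare integers, not strings.
            offsets[s] = len(blob)
            blob += s.encode("utf-8") + b"\x00"
    return bytes(blob), offsets

blob, offsets = build_string_table(["id", "description", "id"])
assert offsets["id"] == 0    # "id" is stored once, referenced twice
assert blob[offsets["description"]:].split(b"\x00")[0] == b"description"

The blob could be written to disk and mmap()ed straight back, with every reference to a string reduced to an integer offset into it.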

I’ve spent the last few days prototyping a library, which is called libxmlb until someone comes up with a better name. I’ve got a test branch of fwupd that I’ve ported from libappstream-glib, and I’m happy to say that RSS has been reduced from 3MB (peak 3.61MB) to 1MB (peak 1.07MB) and the startup time has gone from 280ms to 250ms. Unless I’ve missed something drastic I’m going to port gnome-software too, and expect even bigger savings, as the amount of XML is two orders of magnitude larger.

So, how do I use this thing? First, let’s create a baseline doing things the old way:

$ time appstream-util search gimp.desktop
real	0m0.645s
user	0m0.800s
sys	0m0.184s

To create a binary cache:

$ time xb-tool compile appstream.xmlb /usr/share/app-info/xmls/* /usr/share/appdata/* /usr/share/metainfo/*
real	0m0.497s
user	0m0.453s
sys	0m0.028s

$ time xb-tool compile appstream.xmlb /usr/share/app-info/xmls/* /usr/share/appdata/* /usr/share/metainfo/*
real	0m0.016s
user	0m0.004s
sys	0m0.006s

Notice the second time it compiled nearly instantly, as none of the filenames or modification timestamps of the sources changed. This is exactly what programs would do every time they are launched.
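
That freshness check boils down to comparing modification times of the sources against the compiled blob. A minimal sketch of the idea in Python (the real check also tracks the source filenames, as noted above):

import os

def cache_is_fresh(cache_path, source_paths):
    """True if the compiled cache exists and is newer than every source file."""
    if not os.path.exists(cache_path):
        return False
    cache_mtime = os.path.getmtime(cache_path)
    return all(os.path.getmtime(src) <= cache_mtime for src in source_paths)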

$ du -h appstream.xmlb
4.2M	appstream.xmlb

$ time xb-tool query appstream.xmlb "components/component[@type='desktop']/id[text()='firefox.desktop']"
RESULT: <id>firefox.desktop</id>
RESULT: <id>firefox.desktop</id>
RESULT: <id>firefox.desktop</id>
real	0m0.008s
user	0m0.007s
sys	0m0.002s

8ms includes the time to load the file, search for all the components that match the query and the time to export the XML. You get three results as there’s one AppData file, one entry in the distro AppStream, and an extra one shipped by Fedora to make Firefox featured in gnome-software. You can see the whole XML component of each result by appending /.. to the query. Unlike appstream-glib, libxmlb doesn’t try to merge components – which makes it much less magic, and a whole lot simpler.

Some questions answered:

  • Why not just use a GVariant blob?: I did initially, and the cache was huge. The deeply nested structure was packed inefficiently as you have to assume everything is a hash table of a{sv}. It was also slow to load; not much faster than just parsing the XML. It also wasn’t possible to implement the zero-copy XPath queries this way.
  • Is this API and ABI stable?: Not yet; it will be as soon as gnome-software is ported.
  • You implemented XPath in C‽: No, only a tiny subset. See the README.md

Comments welcome.

Let’s Tally Some Votes!

We’re about a week into the campaign, and almost 9000 euros along the path to bug fixing. So we decided to do some preliminary vote tallying! And share the results with you all, of course!

On top is Papercuts, with 84 votes. Is that because it’s the default choice? Or because you are telling us that Krita is fine, it just needs to be that little bit smoother that makes all the difference? If the latter, we won’t disagree, and yesterday Boudewijn fixed one of the things that must have annoyed everyone who wanted to create a custom image: now the channel depths are finally shown in a logical order!

Next, and that’s a  bit of a surprise, is Animation with 41 votes. When we first added animation to Krita, we were surprised  by the enthusiasm with which it was welcomed. We’ve actually seen, with our own eyes, at a Krita Sprint, work done in Krita for a very promising animated television series!

Coming third, there’s the Brush Engine bugs, with 39 votes. Until we decided that it was time to spend time on stability and polish, we thought that in 2018, we’d work on adding some cool new stuff for brushes. Well, with Digital Atelier, it’s clear that there is a lot more possible with brushes in Krita than we thought — but as you’re telling us, there’s also a lot that should be fixed. The brush engine code dates back to a rewrite in 2006, 2007, with a huge leap made when Lukáš Tvrdý wrote his thesis on Krita’s brush engines. Maybe we’ll have to do some deep work, maybe it really is all just surface bugs. We will find out!

Fourth, bugs with Layer handling. 23 votes. For instance, flickering when layers get updated. Well, Dmitry fixed one bug there on Wednesday already!

Vector Objects and Tools with 20 votes, Text with 15 votes, and Layer Styles with 13 votes (4 fewer than there are bug reports for Layer Styles…): enough to show people are very much interested in these topics, but it looks like the priority is not that high.

The remaining topics, Color Management, Shortcuts and Canvas Input, Resource Management and Tagging, all get 8 votes. We did fix a shortcuts bug, though… Well, that fix fixed three of them! And resource management is being rewritten in any case — maybe that’s why people don’t need to vote for it!

September 18, 2018

Let’s take this bug, for example…

Krita’s 2018 fund raiser is all about fixing bugs! And we’re fixing bugs already. So, let’s take a non-technical look at a bug Dmitry fixed yesterday. This is the bug: “key sequence ctrl+w ambiguous with photoshop compatible bindings set” And this is the fix.

So, we actually both started looking at the bug at the same time, me being Boudewijn. The issue is: if you use a custom keyboard shortcut scheme that includes a shortcut definition for “close current image”, then a popup would show up, saying that the shortcut is ambiguous.

The popup doesn’t tell where the ambiguous definition is… Only that there is an ambiguous definition. Hm… Almost everything that does something in Krita that is triggered by a shortcut is an action. And deep down, Qt keeps track of all actions, and all shortcuts, but we cannot access that list.

So we went through Krita’s source code. The action for closing an image was really created only once, inside Krita’s code. And, another bug, Krita doesn’t by default assign a shortcut to this action. The default shortcut should be CTRL+W on Linux and Windows, and COMMAND+W on macOS.

Curiously enough, the photoshop-compatible shortcut definitions did assign that shortcut. So, if you’d select that scheme, a shortcut would be set.

Even curiouser, if you don’t select one of those profiles, so Krita doesn’t set a shortcut, the default ctrl+w/command+w shortcut would still work.

Now, that can mean only one thing: Krita’s close-image action is a dummy. It never gets used. Somewhere else, another close-image action is created, but that doesn’t happen inside Krita.

So, Dmitry started digging into Qt’s source code. Parts of Qt are rather old, and the module that makes it possible to show multiple subwindows inside a big window is part of that old code.

/*!
    \internal
*/
void QMdiSubWindowPrivate::createSystemMenu()
{
...
    addToSystemMenu(CloseAction, QMdiSubWindow::tr("&Close"), SLOT(close()));
...
    actions[CloseAction]->setShortcuts(QKeySequence::Close);
...
}
#endif

Ah! That’s where another action is created, and a shortcut allocated. Completely outside our control. This bug, which was reported only two days ago, must have been in Krita since version 2.9! So, what we do now is make sure that Krita’s own close-image action’s shortcut gets triggered. We do that by making sure Qt’s action can only get triggered when the subwindow’s menu is open.

    /**
     * Qt has a weirdness, it has hardcoded shortcuts added to an action
     * in the window menu. We need to reset the shortcuts for that menu
     * to nothing, otherwise the shortcuts cannot be made configurable.
     *
     * See: https://bugs.kde.org/show_bug.cgi?id=352205
     *      https://bugs.kde.org/show_bug.cgi?id=375524
     *      https://bugs.kde.org/show_bug.cgi?id=398729
     */
    QMdiSubWindow *subWindow = d->mdiArea->currentSubWindow();
    if (subWindow) {
        QMenu *menu = subWindow->systemMenu();
        if (menu) {
            Q_FOREACH (QAction *action, menu->actions()) {
                action->setShortcut(QKeySequence());
            }
        }
    }

That means that for every subwindow we’ve got, we grab the menu, and for every entry in the menu, we remove the shortcut. As a result, our global Krita close-window shortcut always fires, and people can select a different shortcut if they want to.

September 17, 2018

Paris Art School Looking for Krita Teacher

An art school in Paris, France, is looking for a Krita teacher! This is pretty cool, isn’t it? If you’re interested, please mail foundation@krita.org and we’ll forward your mail to the school!


A freelance Krita teacher to teach basics to art school students, Paris 11e. November 2018.

The course has to be mainly in French (could be half in English).
These are full-day sessions with different student groups (28 students x 5 classes).

  • The teacher has to have “statut auto-entrepreneur” status (French freelance).
  • Pay is around 50 euros per hour (6-hour days)
  • The courses will be on: 5, 8, 9, 12, 13, 14, 15, 16 November 2018
  • It’s mainly about the general tools of the software, and how to draw and paint with Krita using tablets
  • There are possibilities to work beyond this first schedule, later in 2019

Profile: Game artist, Concept artist, digital painter, illustrator… please send your website if interested!


Seeking a freelance (auto-entrepreneur) trainer to teach Krita to art school students, Paris 11e. November 2018.

The course must be given mainly in French, though partial English is possible.
Full days at the school with different groups of students (28 students x 5 classes).

  • The trainer must have “statut auto-entrepreneur” (French freelance) status
  • Pay is around 50 euros per hour (6-hour days)
  • The courses will be on: 5, 8, 9, 12, 13, 14, 15, 16 November 2018
  • The aim is to teach the basic tools of the software, and above all the drawing and painting tools with a tablet
  • There are possibilities for further dates later in 2019

Profile: game artist, concept artist, digital painter, illustrator… please send your website if interested!

Interview with Alyssa May

Could you tell us something about yourself?

I’m a graduate of the Kansas City Art Institute, but I’ve since moved back to Northwest Arkansas. When I’m not painting or playing video games, I’m enjoying the great outdoors with my wonderful husband, adorable daughter, and our 8-year-old puppy.

Do you paint professionally, as a hobby artist, or both?

I do both! I do freelance work for clients, but I still paint for fun (and for prints).

What genre(s) do you work in?

Primarily fantasy because I love how much room for imagination there is. Nothing is off-limits! While that genre is my favorite, I also do portraiture (mostly pets) and have been dabbling a bit in children’s illustration recently.

Whose work inspires you most — who are your role models as an artist?

Dan dos Santos is my absolute favorite artist. His rendering is gorgeous. His color is masterful. There’s a lot that can be learned just by looking at the paintings he produces, but luckily for me and anyone else who looks up to his work, he even shares some of his techniques and professional experience with the world. He works mostly in traditional media, but the concepts that he discusses are pretty universal. As far as digital artists go, I’m very fond of the work of people like Marta Dahlig and Daarken.

How and when did you get to try digital painting for the first time?

I think college was the first time that I sat down with a tablet and computer and gave the digital painting thing a go. Before that, though, I had dabbled in some other digital image-making techniques, but I was more interested in the traditional stuff like oil paints and charcoal.

What makes you choose digital over traditional painting?

Once I got over that initial transitional hump from traditional to digital I was hooked. There’s no cleanup. You can’t run out of blue paint at just the wrong moment. The undo function is absolute magic. More than anything, though, it’s the control that keeps me working in a digital space. Between layers, history states, and myriad manipulation options, I can experiment without worrying about destroying anything. It’s very freeing and really strips the blank canvas of any of its intimidating qualities.

How did you find out about Krita?

Reddit. It came up in a number of threads about digital painting software. People had a lot of positive things to say about it, so when I felt like it was time to start looking at some Photoshop alternatives to paint in, it was the first one that I tried.

What was your first impression?

I was pleasantly surprised when I launched Krita the first time. The UI was polished and supported high DPI displays. All of the functionality that I was looking to replace from Photoshop CS5 was there—and it had been tuned for painting specifically. I was even able to open up the PSDs I had been working on and transition over to Krita without losing a beat. All of my layers and blending modes were intact and ready to rock. Hard to ask for a smoother switch than that!

What do you love about Krita?

Aside from the pricing and the whole thing being built from the ground up with painting in mind, I think my favorite feature is actually just the ability to configure what shows up in the toolbars as much as you can in Krita. I work on a Surface Pro 4, and being able to have all of the functions I need in the interface itself so that I don’t have to have a keyboard in between me and the painting to keep repetitive functions speedy is so great.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Really, the brush system improvements in the last big update fixed up basically anything I could have hoped for! The only lingering thing for me probably comes down to a personal preference, but when I choose to save out a flat version of my document while working on a KRA, I’d rather the open document default to saving on the KRA on subsequent saves instead of that new JPEG/TIFF/whatever other format I selected for the un-layered copy.

What sets Krita apart from the other tools that you use?

I think the open source nature of it is a big component, but on a daily use basis, the biggest functional difference between Krita and other tools I use (like Photoshop and Illustrator) is that painting is the intended usage. The interface and toolsets are geared entirely toward painting and drawing by default, so doing that feels more natural most of the time.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I think my favorite Krita painting so far is my most recent—Demon in the Dark. Out of my work produced start to finish in the program, that one had the most detail to play with and I had a lot more fun balancing the composition. Cloth, smoky details, and the female figure are some of the things I enjoy painting most, and that one included all of them!

What techniques and brushes did you use in it?

There weren’t any particularly fancy techniques or brushes used. I like to keep it simple and sketch out my composition, broadly block in color, then progressively refine, layering in and collapsing down chunks of painting until things are reasonably polished and cohesive. To pull it off, I used my staple brushes (hard round and standard blender) for the brunt of the painting, then some more specific texture build-up brushes for hair, clouds, particles, and the like. A time lapse of the whole process is up on my YouTube channel (https://youtu.be/phPrPgK5DYQ).

Where can people see more of your work?

People can check out my work on my website, www.artofalyssamay.com, or my Instagram, @artofalyssamay. They can also find time lapses of my paintings on my YouTube channel, www.youtube.com/c/AlyssaMay!

Anything else you’d like to share?

Just my thanks to the Krita team for making and sharing such a solid program!

September 16, 2018

Printing Two-Sided from the Command Line

The laser printers we bought recently can print on both sides of the page. Nice feature! I've never had access to a printer that can do that before.

But that requires figuring out how to tell the printer to do the right thing. Reading the man page for lp, I spotted the sides option: lp -o sides=two-sided-long-edge. But that doesn't work by itself. Adding -n 2 looked like the way to go, but nope! That gives you one sheet that has page 1 on both sides, and a second sheet that has page 2 on both sides. Because of course that's what a normal person would want. Right.

The real answer, after further research and experimentation, turned out to be the collate=true option:

lp -o sides=two-sided-long-edge -o collate=true -d printername file
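
If you print two-sided a lot, the incantation is easy to wrap up in a script. Here's a small Python sketch that just shells out to lp with those options (the function name and arguments are placeholders):

import subprocess

def print_duplex(path, printer):
    """Print a file two-sided along the long edge, collated, via CUPS' lp."""
    subprocess.run(
        ["lp", "-o", "sides=two-sided-long-edge", "-o", "collate=true",
         "-d", printer, path],
        check=True,
    )

# e.g. print_duplex("file.pdf", "printername")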

September 15, 2018

The Krita 2018 Fundraiser Starts: Squash the Bugs!

It’s time for a new Krita fundraiser! Our goal this year is to make it possible for the team to focus on one thing only: stability. Our previous fundraisers were all about features: adding new features, extending existing features. Thanks to your help, Krita has grown at breakneck speed!

Squash the bugs!

This year, we want to take a step back, look at what we’ve achieved, and take stock of what got broken, what didn’t quite make the grade and what got forgotten. In short, we want to fix bugs, make Krita more stable and bring more polish and shine to all those features you all have made possible!

We’re not using Kickstarter this year. Already in 2016, Kickstarter felt like a tired formula. We’re also not going for a fixed amount of funding this year: every 3500 euros funds one month of work, and we’ll spend that time on fixing bugs, improving features and adding polish.

Polish Krita!

As an experiment, Dmitry has just spent about a month on one area of Krita: selections. And now there are only a few issues left with selection handling: the whole area has been enormously improved. Now we want to ask you to make it possible for us to do the same with some other important areas in Krita, ranging from papercuts to brush engines, from color management to resource management. We’ve dug through the bugs database, grouped some things together and arrived at a list of ten areas where we feel we can improve Krita a lot.

The list is ordered by number of reports, but if you support Krita in this fundraiser, you’ll be able to vote for what you think is important! Voting is fun, after all, and we love to hear from you all about what you find the most important.

Practical Stuff

Practically speaking, we’ve kicked out Kickstarter, which means that from the start, you’ll be able to support our fundraiser with credit cards, paypal, bank transfers — even bitcoin! Everyone who donates from 15 September to 15 October will get a vote.

And everyone who donates 50 euros or more will get a free download of Ramon Miranda’s wonderful new brush preset bundle, Digital Atelier. Over fifty of the highest-quality painterly brush presets (oils, pastels, watercolors) and more than that: two hours of tutorial video explaining the creation process in detail.


Go to the campaign page!

September 12, 2018

Introducing Digital Atelier: a painterly brush preset pack by Ramon Miranda with tutorial videos!

Over the past months, Ramon Miranda, known for his wonderful introduction to digital painting, Muses, has worked on creating a complete new brush preset bundle: Digital Atelier. Not only does this contain over fifty new brush presets, more than thirty new brush tips and twenty patterns and surfaces, there are also almost two hours of in-depth video tutorial, walking you through the process of creating new brush presets.

Ramon has gone deep here! The goal was to create painterly brushes: achieving the look and feel of oil paint, pastel or watercolors. Ramon did a lot of research and experimentation, and it has paid off handsomely.

The complete download is 8 gigabytes!

When?

On Saturday, Krita’s 2018 Squash the Bugs fundraiser will start. Anyone who supports Krita with €50 or more will get a free download! On October 16th, we’ll put Digital Atelier in the Krita shop for €39.95.

What?

Brush pack:

  • Fifty-one new brush presets — check out the reference sheet!
  • Twenty-four oil paint brush presets, of which four are very experimental.
  • Thirteen Pastel brush presets.
  • Fourteen Watercolor brush presets.
  • Thirty-four new PNG and five new SVG brush tips.
  • Twenty 512×512 paper surfaces and patterns.

Tutorial Videos:

  • Introduction: Knowing our tools
  • Oil painting
  • Pastel painting
  • Water Color painting
  • Creating your own Patterns
  • Creating your own Brush tips.

The music is by Kevin MacLeod. The language used in the videos is English.

September 08, 2018

PyBullet and “Sim-to-Real: Learning Agile Locomotion For Quadruped Robots”

PyBullet is receiving regular updates; you can see the latest version here: https://pypi.org/project/pybullet
Installation and updating are simple:
pip install -U pybullet

Check out the PyBullet Quickstart Guide and clone the github repository for more PyBullet examples and OpenAI Gym environments.
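
If you want to sanity-check a fresh install before diving into the examples, a minimal session looks something like this sketch (plane.urdf and r2d2.urdf ship with the pybullet_data package):

import pybullet as p
import pybullet_data

# Headless physics server; use p.GUI instead for a visual window.
p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.8)

plane = p.loadURDF("plane.urdf")
robot = p.loadURDF("r2d2.urdf", basePosition=[0, 0, 0.5])

for _ in range(240):  # simulate one second at the default 240 Hz timestep
    p.stepSimulation()

print(p.getBasePositionAndOrientation(robot))
p.disconnect()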

A while ago, our RSS 2018 paper “Sim-to-Real: Learning Agile Locomotion For Quadruped Robots” was accepted! (with Jie Tan, Tingnan Zhang, Erwin Coumans, Atil Iscen, Yunfei Bai, Danijar Hafner, Steven Bohez and Vincent Vanhoucke).

See also the video and the paper on arXiv.

Erwin @ twitter

September 07, 2018

3 Million Firmware Files and Counting…

In the last two years the LVFS has supplied over 3 million firmware files to end users. We now have about two dozen companies uploading firmware, of which 9 are multi-billion-dollar companies.

Every month about 200,000 more devices get upgraded and from the reports so far the number of failed updates is less than 0.01% averaged over all firmware types. The number of downloads is going up month-on-month, although we’re no longer growing exponentially, thank goodness. The server load average is 0.18, and we’ve made two changes recently to scale even more for less money: signing files in a 30 minute cron job rather than immediately, and switching from Amazon to BunnyCDN.

The LVFS is mainly run by just one person (me!) and my time is sponsored by the ever-awesome Red Hat. The hardware costs, which recently included random development tools for testing the dfu and nvme plugins, and the server and bandwidth costs are being paid from charitable donations from the community. We’re even cost positive now, so I’m building up a little pot for the next server or CDN upgrade. By pretty much any metric, the LVFS is a huge success, and I’m super grateful to all the people that helped the project grow.

The LVFS does have one weakness: it has a bus factor of one. In other words, if I got splattered by a bus, the LVFS would probably cease to exist in its current form. To further grow the project, and to reduce the dependence on me, we’re going to be moving various parts of the LVFS to the Linux Foundation. This means that there’ll be sysadmins who don’t have to google basic server things, a proper community charter, and access to an actual legal team. From an OEM point of view, nothing much should change, including the most important thing: it’ll continue to be free to use for everyone. The existing server and all the content will be migrated to the new infrastructure. From a user’s point of view, new metadata and firmware will be signed by the Linux Foundation key, rather than my key, although we haven’t decided on a date for the switch-over yet. The LF key has been trusted by fwupd for firmware since 1.0.8, and it’s trivial to backport to older branches if required.

Before anyone gets too excited and starts pointing me at all my existing bugs for my other stuff: I’ll probably still be the person “onboarding” vendors onto the LVFS, and I’m fully expecting to remain the maintainer and core contributor to the lvfs-website code itself — but I certainly should have a bit more time for GNOME Software and color stuff.

In related news, even more vendors are jumping on the LVFS. No new public announcements yet, but hopefully soon. A lot of hardware companies will only be comfortable “going public” once the new hardware currently in development is on shelves in stores. So, please raise a glass to the next 3 million downloads!

Last Month in Krita: August 2018

We used to do a weekly development news post… Last Week in Krita. But then we got too busy doing development to maintain that, and that’s kind of a pity. Still, we’d like to share what we’re doing with you all — and not just through the git history! So, let’s try to revive the tradition…

In August, we started preparing for our next big Fund Raiser. Mid-September to mid-October, we’ll be raising funds for the next round of Krita development. The last fund raisers were all about features: this will be all about stability and polish. Zero Bugs, while obviously unattainable, is to be the rallying cry! We’re moving to a new payment provider, to make it possible to donate to Krita with other options than paypal or a direct bank transfer. Credit cards, various national e-payment systems and even bitcoin will become possible. It’s up already on our donation page!

We’ve already made a good start on stability and polish by fixing our unittests — small bits of code that test one or another function of Krita and that we run to see whether new code breaks something. We also fixed almost a hundred bugs. And, of course, the Google Summer of Code came to an end.

But let’s look at some highlights in a bit more detail:

Polish, polish, polish

After porting Krita to Qt5, we were left with dozens of unittests that didn’t work anymore. They wouldn’t build, they would hang, they would give the wrong results or even crash. August was almost too warm to really work on anything too complicated, so all we did was fix stuff. Well, apart from all the other work we’ve been doing, of course…

Improving Selections

Dmitry, one of our core developers, whose work is funded by your donations, has been working for a couple of weeks on improving selections in Krita. Selections in Krita are a bit different from other applications, since we have both local and global selections, and selections can be visualized with marching ants or as a mask. And a selection can consist either of pixels or of vector objects. So, he has worked on things like:

  • Painting on the selection mask in real time
  • Making it possible to have more than one local selection
  • Adding opaque to, intersecting opaque with, and removing opaque from the global selection
  • Automatically creating a selection when you select “show global selection”
  • Converting between vector and pixel selections
  • Making it possible to modify selections

Rewriting Resource Management

Resources are things you want to use with Krita. Brush tips for instance, or brush presets. Or gradients, or patterns, or workspaces. Some resources come with Krita by default, some you’ll create yourself, some you’ll want to download. Krita’s resource system dates back to the 20th century, or, if you prefer, the previous millennium. It was designed in the days when a 50 pixel brush was BIG, and patterns would be 128 x 128 pixels. These days, people want to use 1000 pixel brushes, 5000 pixel patterns and lots of them.

The original design for handling resources doesn’t scale any more! And besides, after over twenty years of hack work, it’s a mess. Doing a rewrite is almost always a mistake, but… We’re working on rewriting that part of Krita anyway. Boudewijn is making lots of progress, but nothing much that anyone can see, except for the Phabricator task. It’s been something we’ve been planning and discussing for a long time. All bugs with tagging, adding, removing and changing resources should be fixed in one big bang!

That’s the plan at least…  There’s still a way to go!

And then, there were the usual surprises and little extras:

New and unexpected

A pretty amazing new feature is the gamut mask, created by Anna Medonosova. This includes both editing and applying of the mask to the artistic color selector. See James Gurney’s blog post for some background information.

There’s now also a debug log docker, so Windows users don’t have to mess with debug view again:

(Artwork by Iza Ka)

And there’s more, of course! Reptorian has created a bunch of new blending modes and is now working on a new selection intersection mode. Jouni has been fixing issues in the reference images docker and has implemented clone frames for animation!

September 06, 2018

Design with Difficult Data: Published on A List Apart

I’m excited to have my second article in 17 years published on the website “For People Who Make Websites”, A List Apart: Design with Difficult Data.

Screenshot from A List Apart

Thanks to Ste Grainer for the great editing.

September 05, 2018

Krita’s 2018 Google Summer of Code

This year, we participated in Google Summer of Code with three students: Ivan, Andrey and Michael. Some of the code these awesome students produced is already in Krita 4.1.1, and most of the rest has been merged already, so you can give it a whirl in the latest nightly builds for Windows or Linux. So, let’s go through what’s been achieved this year!

Ivan’s project was all about making brushes faster using vectorization. If that sounds technical, it’s because it is! Basically, your CPU is powerful enough to do a lot of calculations at the same time, as long as it’s the same calculation, but with different numbers. You could feed more than 200 numbers to the CPU, tell it to multiply them all, and it would do that just as fast as multiplying one number. And it just happens that calculating the way a brush looks is more or less just that sort of thing. Of course, there are complications, and Ivan is still busy figuring out how to apply the same logic to the predefined brushes. But here’s a nice image from his blog:

Above, how it was; underneath, what the performance is now.

If Ivan’s project was all about performance, well, so was Andrey’s project. Andrey has been working on something just as technical and headache-inducing. Modern CPUs have many cores — some have four, and fake having eight, others have ten and fake having twenty — or even more. And unused silicon is a waste of good sand! Krita has been able to use multiple cores for a long time: it started with Dmitry Kazakov’s Summer of Code project, back in 2009, which was all about the tile engine. Nine years later, Dmitry mentored Andrey’s work on the tile engine. The tile engine breaks the image up into smaller tiles, so every tile can be worked on independently.

So when Dmitry, last year, worked on a project to let Krita use more cores and simultaneously found some places where Krita would force cores to wait on each other, he made a note: something needed to be done about that. It’s called locking, and the solution is to get rid of those locks.

So Andrey’s project was all about making the list of tiles function without locks. And that work is done. There are still a few bugs — this stuff is amazingly complicated and tricky, so real testing is really needed. It will all be in Krita 4.2, which should be released by the end of this year, and some of it was already merged into Krita 4.1.1, too. He managed to get some really nice gains.

Michael has been working on something completely different: palette support in Krita. A palette or colorset is a set of colors — what we call a resource. Palette editing was one of the things we shared with the rest of Calligra, back then, or KOffice, even earlier on. The code is complicated and tangled and spread out. Michael’s first task was to bring order to the chaos, and that part has already been merged to Krita 4.1.1.

His next project was to work on the palette docker. His work is detailed in this Phabricator post. There’s still work in progress, but everything that was planned for the Summer of Code project has been done!

And this is the editor:

September 04, 2018

Raspberry Pi Zero as Ethernet Gadget Part 3: An Automated Script

Continuing the discussion of USB networking from a Raspberry Pi Zero or Zero W (Part 1: Configuring an Ethernet Gadget and Part 2: Routing to the Outside World): You've connected your Pi Zero to another Linux computer, which I'll call the gateway computer, via a micro-USB cable. Configuring the Pi end is easy. Configuring the gateway end is easy as long as you know the interface name that corresponds to the gadget.

ip link gave a list of several networking devices; on my laptop right now they include lo, enp3s0, wlp2s0 and enp0s20u1. How do you tell which one is the Pi Gadget? When I tested it on another machine, it showed up as enp0s26u1u1i1. Even aside from my wanting to script it, it's tough for a beginner to guess which interface is the right one.

Try dmesg

Sometimes you can tell by inspecting the output of dmesg | tail. If you run dmesg shortly after you initialize the gadget (for instance, by plugging the USB cable into the gateway computer), you'll see some lines like:

[  639.301065] cdc_ether 3-1:1.0 enp0s20u1: renamed from usb0
[ 9458.218049] usb 3-1: USB disconnect, device number 3
[ 9458.218169] cdc_ether 3-1:1.0 enp0s20u1: unregister 'cdc_ether' usb-0000:00:14.0-1, CDC Ethernet Device
[ 9462.363485] usb 3-1: new high-speed USB device number 4 using xhci_hcd
[ 9462.504635] usb 3-1: New USB device found, idVendor=0525, idProduct=a4a2
[ 9462.504642] usb 3-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 9462.504647] usb 3-1: Product: RNDIS/Ethernet Gadget
[ 9462.504660] usb 3-1: Manufacturer: Linux 4.14.50+ with 20980000.usb
[ 9462.506242] cdc_ether 3-1:1.0 usb0: register 'cdc_ether' at usb-0000:00:14.0-1, CDC Ethernet Device, f2:df:cf:71:b9:92
[ 9462.523189] cdc_ether 3-1:1.0 enp0s20u1: renamed from usb0

(Aside: whose bright idea was it to rename usb0 to enp0s26u1u1i1, or wlan0 to wlp2s0? I'm curious exactly who finds their task easier with the name enp0s26u1u1i1 than with usb0. It certainly complicated all sorts of network scripts and howtos when the name wlan0 went away.)

Anyway, from inspecting that dmesg output you can probably figure out the name of your gadget interface. But it would be nice to have something more deterministic, something that could be used from a script. My goal was to have a shell function in my .zshrc, so I could type pigadget and have it set everything up automatically. How to do that?

A More Deterministic Way

First, the name starts with en, meaning it's an ethernet interface, as opposed to wi-fi, loopback, or various other types of networking interface. My laptop also has a built-in ethernet interface, enp3s0, as well as lo, the loopback or "localhost" interface, and wlp2s0, the wi-fi chip, the one that used to be called wlan0.

Second, it has a 'u' in the name. USB ethernet interfaces start with en and then add suffixes to enumerate all the hubs involved. So the number of 'u's in the name tells you how many hubs are involved; that enp0s26u1u1i1 I saw on my desktop had two hubs in the way, the computer's internal USB hub plus the external one sitting on my desk.

So if you have no USB ethernet interfaces on your computer, looking for an interface name that starts with 'en' and has at least one 'u' would be enough. But if you have USB ethernet, that won't work so well.

Using the MAC Address

You can get some useful information from the MAC address, called "link/ether" in the ip link output. In this case, it's f2:df:cf:71:b9:92, but -- whoops! -- the next time I rebooted the Pi, it became ba:d9:9c:79:c0:ea. The address turns out to be randomly generated and will be different every time. It is possible to set it to a fixed value, and that thread has some suggestions on how, but I think they're out of date, since they reference a kernel module called g_ether whereas the module on my updated Raspbian Stretch is called cdc_ether. I haven't tried.

Anyway, random or not, the MAC address also has one useful property: the first octet (f2 in my first example) will always have the '2' bit set, as an indicator that it's a "locally administered" MAC address rather than one that's globally unique. See the Wikipedia page on MAC addressing for details on the structure of MAC addresses. Both f2 (11110010 in binary) and ba (10111010 binary) have the 2 (00000010) bit set.

No physical networking device, like a USB ethernet dongle, should have that bit set; physical devices have MAC addresses that indicate what company makes them. For instance, Raspberry Pis with networking, like the Pi 3 or Pi Zero W, have interfaces that start with b8:27:eb. Note the 2 bit isn't set in b8.

Most people won't have any USB ethernet devices connected that have the "locally administered" bit set. So it's a fairly good test for a USB ethernet gadget.
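
The test itself is a single bitwise AND on the first octet. For comparison with the shell version below, here's the same check as a short Python sketch (the second MAC is made up from the Pi's b8:27:eb prefix):

def is_locally_administered(mac):
    """True if the MAC's first octet has the locally-administered (0x2) bit set."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x2)

assert is_locally_administered("f2:df:cf:71:b9:92")      # random gadget MAC
assert not is_locally_administered("b8:27:eb:12:34:56")  # real Pi ethernet MAC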

Turning That Into a Shell Script

So how do we package that into a pipeline so the shell -- zsh, bash or whatever -- can check whether that 2 bit is set?

First, use ip -o link to print out information about all network interfaces on the system. But really you only need the ones starting with en and containing a u. Splitting out the u isn't easy at this point -- you can check for it later -- but you can at least limit it to lines that have en after a colon-space. That gives output like:

$ ip -o link | grep ": en"
5: enp3s0:  mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000\    link/ether 74:d0:2b:71:7a:3e brd ff:ff:ff:ff:ff:ff
8: enp0s20u1:  mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000\    link/ether f2:df:cf:71:b9:92 brd ff:ff:ff:ff:ff:ff

Within that, you only need two pieces: the interface name (the second word) and the MAC address (the 17th word). Awk is a good tool for picking particular words out of an output line:

$ ip -o link | grep ': en' | awk '{print $2, $17}'
enp3s0: 74:d0:2b:71:7a:3e
enp0s20u1: f2:df:cf:71:b9:92

The next part is harder: you have to get the shell to loop over those output lines, split them into the interface name and the MAC address, then split off the second character of the MAC address and test it as a hexadecimal number to see if the '2' bit is set. I suspected that this would be the time to give up and write a Python script, but no, it turns out zsh and even bash can test bits:

ip -o link | grep ': en' | awk '{print $2, $17}' | \
    while read -r iff mac; do
        # LON is a numeric variable containing the digit we care about.
        # The "let" is required so LON will be a numeric variable,
        # otherwise it's a string and the bitwise test fails.
        let LON=0x$(echo $mac | sed -e 's/:.*//' -e 's/.//')

        # Is the 2 bit set? Meaning it's a locally administered MAC
        if ((($LON & 0x2) != 0)); then
            echo "Bit is set, $iff is the interface"
        fi
    done

Pretty neat! So now we just need to package it up into a shell function and do something useful with $iff when you find one with the bit set: namely, break out of the loop, call ip a add and ip link set to enable networking to the Raspberry Pi gadget, and enable routing so the Pi will be able to get to networks outside this one. Here's the final function:

# Set up a Linux box to talk to a Pi0 USB ethernet gadget on the 192.168.7.x network:
pigadget() {
    iface=''

    ip -o link | grep ': en' | awk '{print $2, $17}' | \
        while read -r iff mac; do
            # LON is a numeric variable containing the digit we care about.
            # The "let" is required so zsh will know it's numeric,
            # otherwise the bitwise test will fail.
            let LON=0x$(echo $mac | sed -e 's/:.*//' -e 's/.//')

            # Is the 2 bit set? Meaning it's a locally administered MAC
            if ((($LON & 0x2) != 0)); then
                iface=$(echo $iff | sed 's/:.*//')
                break
            fi
        done

    if [[ x$iface == x ]]; then
        echo "No locally administered en interface:"
        ip a | egrep '^[0-9]:'
        echo Bailing.
        return
    fi

    sudo ip a add 192.168.7.1/24 dev $iface
    sudo ip link set dev $iface up

    # Enable routing so the gadget can get to the outside world:
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
}

September 03, 2018

Interview with Danielle Williams

Could you tell us something about yourself?

My name is Danielle Williams. I’m a writer, an illustrator, and a fan of Krita.

Do you paint professionally, as a hobby artist, or both?

I’ve been paid for my work before and I use my art as covers for my ebooks, so technically I guess I am a professional…but I often feel like a hobby artist in my heart because of my desire to improve my skill.

What genre(s) do you work in?

I’m prone to figurative work—portraiture, animals, monsters, that sort of thing—in many styles, but I’m trying to improve and incorporate backgrounds into my art for greater impact. Drawing also helps me visualize my story characters and worlds.

Whose work inspires you most — who are your role models as an artist?

Between an online youth spent browsing Yerf, DeviantArt, Side7, and Elfwood, my love of animation art, and my visual arts degree, I’ve been exposed to a tsunami of creative folks! On the fine arts side: John Singer Sargent, Norman Rockwell, and Cecelia Beaux. I’m discovering more fine artists as I follow James Gurney’s blog (GurneyJourney.blogspot.com).

On the animation art side: I admire the artists behind Earthworm Jim, Oddworld Inhabitants, Glen Keane, Marc Davis, and the other Nine Old Men of Disney Animation.

On the ‘net side: Ursula Vernon, Makani, TheMinttu, Don Seegmiller, NibbledPencil.com, Dirk Grundy (StringTheoryComic.com), DimeSpin, Michelle Czajkowski (AvasDemon.com), Enayla.

… I’m forgetting lotsa folks, ha. No single one is my idol, though. I’m like a magpie—I pick out shinies wherever I see ‘em!

How and when did you get to try digital painting for the first time?

Oh, back when I was…maybe 12 or so? We got a scanner that came with “Micrografx Picture Publisher”, a sort of imitation Photoshop. Around that time I saved my money and bought a Wacom Bamboo. Through online tutorials I taught myself how to use it and my program.

The lesson I learned from using this off-brand product for so long was that it wasn’t the tool that made your art good, but your own practice and use of solid artistic principles. That’s what makes the difference.

What makes you choose digital over traditional painting?

A: I’m cheap. B: I don’t feel like I have any place to safely make a mess. I had fun taking watercolor classes in college and even did an oils class…but it’s different working with staining liquid paints over concrete in a designated area vs. the corner of the apartment you’re renting or at the table in your mom’s Better-Homes-And-Garden-worthy kitchen.

Other people’ve found workarounds, I know, but I’m still not comfortable with it.

Oh, and C: space considerations. Art supplies take up space!

How did you find out about Krita?

Some fifteen years after buying Photoshop CS2, it was finally glitching and refusing to open files on Windows 7, no matter how many trick reinstalls I did. I had to go looking for alternatives, and I just didn’t like the dated interface of the GIMP (though I know artists like DimeSpin can make that program sit up and do flips). I dunno where I got the Krita link, but I’m sure glad I found it!

What was your first impression?

“Oo, pretty interface. Wait, why won’t this brush make a mark?!”

What do you love about Krita?

The cost (see above), the quality, the interface, the compatibility—and the ability to open a window of your painting in LUT mode and watch it update in real time while you paint on the file in color. MAN that is helpful for values! (Thanks to https://www.youtube.com/watch?v=F0LBVjJPb9o)

What do you think needs improvement in Krita? is there anything that really annoys you?

Annoys me? I refuse to be annoyed with free software. I have nothing to complain about. There are two things I miss that my old CS2 used to do, however.

First: I haven’t tried Krita’s text tool since before the last big update, but it wasn’t much use to me in making book covers. I’ve since added Inkscape to my workflow to do my title text, but it still requires some wrangling. I look forward to the day when Krita’s text tools are as good as CS2’s were. Or better!

Second: I miss Photoshop Actions. Actions + photos = a very quick way to make differently-mooded book covers.

Finally, I’m like, ack!!, knowing that I’m only scratching the surface of what Krita can do! I just learned about Liquify this morning. I wish the documentation were more clearly written.

I also wish there were more step-by-step text-with-picture tutorials (like David Revoy’s Getting Started with Krita) about the different features—so you don’t just know that the features exist, but have some idea of what they’re good for.

I don’t like video tutorials as much; I’m definitely a read-the-manual, follow-the-pictures, scroll-back-up-when-you-make-a-mistake sorta gal.

What sets Krita apart from the other tools that you use?

That’s easy! The beautiful modern interface, and the COMMUNITY. The community around Krita is unlike any I’ve ever experienced around an art program! David Revoy, in particular, is a treasure.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Ooh, that’s a toughie. I’ll pick the cover for my Armello fanfic The Heroes of Houndsmouth. It’s just so dang dramatic, even in greyscale, and I tackled a lot of different things in it, especially textures: teeth, fire, metal scratches, fur.

What techniques and brushes did you use in it?

It’s been a couple years so I’m not exactly sure. I remember using a brush like the current “Ink 1 Precision” or “Ink 2 Fineliner” to do the whiskers and keep the fur texture sharper. I also laid a free metal texture on a layer over the shield, then used a transparency layer (applied to the metal texture layer) to erase the metal where I didn’t want it (such as in the flames). Transparency layers are your friends!

Oh, and I used the LUT management trick (see video link above) while painting in color to make the values *smoochy finger-kiss thing* MWAH! I really love that trick.

Where can people see more of your work?

PixelvaniaStudios.com. My ebook cover paintings are in use at my author homepage, PixelvaniaPublishing.com.

Anything else you’d like to share?

I can play a piano, but never in a million years could I build one. Similarly, while I can use Krita fairly well, it’s the fine folks working their code sorcery behind the scenes that make Krita—and the art I create with it—possible. I salute them!

September 01, 2018

FreeCAD BIM development news - August 2018

Hi there, One month passes bloody fast, doesn't it? So here we are again, for one more report about what I've been coding this month in FreeCAD. Looking at the text below (I'm writing this intro after I wrote the contents) I think we actually have an interesting set of new features. None of this open-source...

August 31, 2018

Raspberry Pi Zero as Ethernet Gadget Part 2: Routing to the Outside World

I wrote some time ago about how to use a Raspberry Pi over USB as an "Ethernet Gadget". It's a handy way to talk to a headless Pi Zero or Zero W if you're somewhere where it doesn't already have a wi-fi network configured.

However, the setup I gave in that article doesn't offer a way for the Pi Zero to talk to the outside world. The Pi is set up to use the machine on the other end of the USB cable for routing and DNS, but that doesn't help if the machine on the other end isn't acting as a router or a DNS host.

A lot of the ethernet gadget tutorials I found online explain how to do this on Mac and Windows, but it was tough to find an example for Linux. The best I found was for Slackware, How to connect to the internet over USB from the Raspberry Pi Zero, which should work on any Linux, not just Slackware.

Let's assume you have the Pi running as a gadget and you can talk to it, as discussed in the previous article, so you've run:

sudo ip a add 192.168.7.1/24 dev enp0s20u1
sudo ip link set dev enp0s20u1 up
substituting your network number and the interface name that the Pi created on your Linux machine, which you can find in dmesg | tail or ip link. (In Part 3 I'll talk more about how to find the right interface name if it isn't obvious.)

At this point, the network is up and you should be able to ping the Pi with the address you gave it, assuming you used a static IP: ping 192.168.7.2 If that works, you can ssh to it, assuming you've enabled ssh. But from the Pi's end, all it can see is your machine; it can't get out to the wider world.
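
For example, with the stock Raspbian user name and the static address used in this setup (both are assumptions if you've customized things):

# reach the Pi over the USB network link
ping -c 2 192.168.7.2
ssh pi@192.168.7.2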

For the wider-world part, you need to enable IP forwarding and masquerading:

sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
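
A couple of caveats: eth0 in the masquerade rule must be the gateway machine's own internet-facing interface, which on a laptop may well be wlan0 or an enp… style name; and the echo into /proc only lasts until reboot (the iptables rule is likewise not persistent; the iptables-persistent package can save it). A minimal sketch of making the forwarding flag permanent, assuming a standard /etc/sysctl.d layout:

# persist IP forwarding across reboots
echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/30-ipforward.conf
# apply all sysctl settings now, without rebooting
sudo sysctl --system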

Now the Pi can route to the outside world, but it still doesn't have DNS, so it can't resolve any domain names. To test that, on the gateway machine try pinging some well-known host:

$ ping -c 2 google.com
PING google.com (216.58.219.110) 56(84) bytes of data.
64 bytes from mia07s25-in-f14.1e100.net (216.58.219.110): icmp_seq=1 ttl=56 time=78.6 ms
64 bytes from mia07s25-in-f14.1e100.net (216.58.219.110): icmp_seq=2 ttl=56 time=78.7 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 78.646/78.678/78.710/0.032 ms

Take the IP address from that -- e.g. 216.58.219.110 -- then go to a shell on the Pi and try ping -c 2 216.58.219.110, and you should see a response.

DNS with a Public DNS Server

Now all you need is DNS. The easy way is to use one of the free DNS services, like Google's 8.8.8.8. Edit /etc/resolv.conf and add a line like

nameserver 8.8.8.8
and then try pinging some well-known hostname.

If it works, you can make that permanent by editing /etc/resolvconf.conf (not resolv.conf itself, which gets regenerated), adding this line:

name_servers=8.8.8.8

Otherwise you'll have to do it every time you boot.

Your Own DNS Server

But not everyone wants to use public nameservers like 8.8.8.8. For one thing, there are privacy implications: it means you're telling Google about every site you ever use for any reason.

Fortunately, there's an easy way around that, and you don't even have to figure out how to configure bind/named. On the gateway box, install dnsmasq, available through your distro's repositories. It will use whatever nameserver you're already using on that machine, and relay it to other machines like your Pi that need the information. I didn't need to configure it at all; it worked right out of the box.
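
The remaining step is to point the Pi's resolver at the gateway's address on the USB link rather than at a public server. A sketch, reusing the 192.168.7.1 gateway address from the setup above (make it permanent with name_servers=192.168.7.1 in /etc/resolvconf.conf, as described earlier):

# on the Pi: send DNS queries to the dnsmasq running on the gateway machine
echo 'nameserver 192.168.7.1' | sudo tee /etc/resolv.conf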

In the next article, Part 3: more about those crazy interface names (why is it enp0s20u1 on my laptop but enp0s26u1u1i1 on my desktop?), how to identify which interface is the gadget by using its MAC, and how to put it all together into a shell function so you can set it up with one command.

August 29, 2018

GIMP receives a $100K donation

Earlier this month, GNOME Foundation announced that they received a $400,000 donation from Handshake.org, of which $100,000 they transferred to GIMP’s account.

We thank both Handshake.org and GNOME Foundation for the generous donation, and we will use the money for a much-overdue hardware upgrade for the core team members, to organize the next hackfest to bring the team together, and to sponsor the next instance of Libre Graphics Meeting.

Handshake is a decentralized, permissionless naming protocol compatible with DNS where every peer is validating and in charge of managing the root zone with the goal of creating an alternative to existing Certificate Authorities. Its purpose is not to replace the DNS protocol, but to replace the root zone file and the root servers with a public commons.

GNOME Foundation is a non-profit organization that furthers the goals of the GNOME Project, helping it to create a free software computing platform for the general public that is designed to be elegant, efficient, and easy to use.

G’MIC 2.3.6

10 Years of Open Source Image Processing!

The IMAGE team of the GREYC laboratory is happy to celebrate the 10th anniversary of G’MIC with you, an open-source (CeCILL), generic and extensible framework for image processing. GREYC is a public research laboratory on digital technology located in Caen, Normandy/France, under the supervision of 3 research institutions: the CNRS (UMR 6072), the University of Caen Normandy and the ENSICAEN engineering school.

G’MIC-Qt, the main user interface of the G’MIC project.

This celebration gives us the perfect opportunity to announce the release of a new version (2.3.6) of this free software and to share with you a summary of the latest notable changes since our last G’MIC report, published on PIXLS.US in February 2018.



1. Looking back at 10 years of development

G’MIC is a multiplatform framework (GNU/Linux, macOS, Windows…) providing various user interfaces for manipulating generic image data, such as 2D or 3D hyperspectral images or image sequences with float values (thus including “normal” color images). More than 1000 different operators for image processing are included, a number that is extensible at will since users can add their own functions by using the embedded script language.

It was at the end of July 2008 that the first lines of G’MIC code were created (in C++). At that time, I was the main developer involved in CImg, a lightweight open source C++ library for image processing, when I made the following observation:

  • The initial goal of CImg – to propose a “minimal” library of functions helping C++ developers to develop image processing algorithms – was broadly achieved; most of the algorithms I considered essential in image processing had been integrated. CImg was meant to stay lightweight, so I didn’t want to keep adding new algorithms ad vitam æternam; ones too heavy or too specific would betray the initial concept of the library.
  • However, this only catered to a rather small community of people with both C++ knowledge and image processing knowledge! One of the natural evolutions of the project, creating bindings of CImg for other programming languages, didn’t appeal much to me, given my lack of interest in writing that kind of code. And these potential bindings would still only concern an audience with some development expertise.

My ideas were starting to take shape: I needed to find a way to provide CImg processing features to non-programmers. Why not attempt to build a tool that could be used from the command line (like the famous convert command from ImageMagick)? A first attempt in June 2008 (inrcast, presented on the French news site LinuxFR), while unsuccessful, allowed me to better understand what would be required for this type of tool to easily process images from the command line.

In particular, it occurred to me that conciseness and coherence of the command syntax were the two most important things to build upon. These were the aspects that required the most effort in research and development (the actual image processing features were already implemented in CImg). In the end, the focus on conciseness and coherence took me much further than originally planned, as G’MIC got an interpreter for its own scripting language, and then a JIT compiler for the evaluation of mathematical expressions and image processing algorithms working at the pixel level.

With these ideas, by the end of July 2008, I was happy to announce the first draft of G’MIC. The project was officially up and running!

Fig. 1.1: Logo of the G’MIC project, a libre framework for image processing, and its cute mascot “Gmicky” (illustrated by David Revoy).

A few months later, in January 2009, enriched by my previous development experience on GREYCstoration (a free tool for nonlinear image denoising and interpolation, from which a plug-in was made for GIMP), and in the hopes of reaching an even larger public, I published a G’MIC GTK plug-in for GIMP. This step proved to be a defining moment for the G’MIC project, giving it a significant boost in popularity as seen below (the project was hosted on Sourceforge at the time).

Fig. 1.2: Monthly download statistics of G’MIC between July 2008 and May 2009 (the GIMP plug-in was released in January 2009).

The sudden interest in the plugin from different users of GIMP (photographers, illustrators and other types of artists) was indeed a real launchpad for the project, with the rapid appearance of various contributions and external suggestions (for the code, management of the forums, web pages, writing of tutorials and production of videos, etc.). The often idealized community effect of free software finally began to take off! Users and developers began to take a closer look at the operation of the original command-line interface and its associated scripting language (which admittedly had not interested many people until that moment!). From there, many of them took the plunge and began to implement new image processing filters in the G’MIC language, which were continuously integrated into the GIMP plugin. Today, these contributions represent almost half of the filters available in the plugin.

Meanwhile, the important and repeated contributions of Sébastien Fourey, a colleague from the GREYC IMAGE team (and an experienced C++ developer), significantly improved the user experience of G’MIC. Sébastien is indeed at the heart of the development of the project’s main graphical interfaces, namely:

  • The G’MIC Online web service (which was later re-organised by GREYC’s Development Department).
  • The free software ZArt, a graphical interface, based on the Qt library, for applying G’MIC filters to video sequences (from files or digital camera streams).
  • And above all, at the end of 2016, Sébastien tackled a complete rewrite of the G’MIC plugin for GIMP in a more generic form called G’MIC-Qt. This component, also based on the Qt library (as the name suggests), is a single plugin that works equally well with both GIMP and Krita, two of the leading free applications for photo retouching/editing and digital painting. G’MIC-Qt has now completely supplanted the original GTK plugin thanks to its many features: a built-in filter search engine, better preview, superior interactivity, etc. Today it is the most successful interface of the G’MIC project and we hope to be able to offer it in the future for other host applications (contact us if you are interested in this subject!).
Fig. 1.3: The different graphical interfaces of the G’MIC project, developed by Sébastien Fourey: G’MIC-Qt, G’MIC Online and ZArt.

The purpose of this article is not to go into too much detail about the history of the project. Suffice it to say that we have not really had time to become bored in the last ten years!

Today, Sébastien and I are the two primary maintainers of the G’MIC project (Sébastien mainly for the interface aspects, myself for the development and improvement of filters and the core development), in addition to our main professional activity (research and teaching/supervision).

Let’s face it, managing a free project like G’MIC takes a considerable amount of time, despite its modest size (~120k lines of code). But the original goal has been achieved: thousands of non-programming users have the opportunity to freely and easily use our image processing algorithms in many different areas: image editing, photo manipulation, illustration and digital painting, video processing, scientific illustration, procedural generation, glitch art…

The milestone of 3.5 million total downloads was exceeded last year, with a current average of about 400 daily downloads from the official website (figures have been steadily declining in recent years as G’MIC is becoming more commonly downloaded and installed via alternative external sources).

It is sometimes difficult to keep a steady pace of development and the motivation that has to go with it, but we persisted, thinking back to the happy users who from time to time share their enthusiasm for the project!

Obviously we can’t name all the individual contributors to G’MIC whom we would like to thank, and with whom we’ve enjoyed exchanging during these ten years, but our heart is with them! Let’s also thank the GREYC laboratory and INS2I institute of CNRS for their strong support for this free project. A big thank you also to all the community of PIXLS.US who did a great job supporting the project (hosting the forum and publishing our articles on G’MIC).

But let’s stop reminiscing and get down to business: new features since our last article about the release of version 2.2!

2. Automatic illumination of flat-colored drawings

G’MIC recently gained a quite impressive new filter named « Illuminate 2D shape », the objective of which is to automatically add lit zones and clean shadows to flat-colored 2D drawings, in order to give them a 3D appearance.

First, the user provides an object to illuminate, in the form of an image on a transparent background (typically a drawing of a character or animal). By analyzing the shape and content of the image, G’MIC then tries to deduce a concordant 3D elevation map (“bumpmap”). The map of elevations obtained is obviously not exact, since a 2D drawing colored in solid areas does not contain explicit information about an associated 3D structure! From the estimated 3D elevations it is easy to deduce a map of normals (“normalmap”), which is used in turn to generate an illumination layer associated with the drawing (following a Phong shading model).
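
For reference, the Phong model mentioned here – in its usual textbook form, not necessarily the exact variant G’MIC implements – computes the intensity at each pixel from the estimated normal $\mathbf{N}$, the light direction $\mathbf{L}$, the mirrored light direction $\mathbf{R}$ and the view direction $\mathbf{V}$:

$$ I = k_a\, i_a + k_d\, (\mathbf{N}\cdot\mathbf{L})\, i_d + k_s\, (\mathbf{R}\cdot\mathbf{V})^{\alpha}\, i_s $$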

Fig. 2.1: G’MIC’s “Illuminate 2D shape” filter in action, demonstrating automatic shading of a beetle drawing (shaded result on the right).

This new filter is very flexible and allows the user to have a fairly fine control over the lighting parameters (position and light source rendering type) and estimation of the 3D elevation. In addition the filter gives the artist the opportunity to rework the generated illumination layer, or even directly modify the elevation maps and estimated 3D normals. The figure below illustrates the process as a whole; using the solid colored beetle image (top left), the filter fully automatically estimates an associated 3D normal map (top right). This allows it to generate renditions based on the drawing (bottom row) with two different rendering styles: smooth and quantized.

Fig. 2.2: The processing pipeline of the G’MIC “Illuminate 2D shape” filter involves estimating a 3D normal map to generate the automatic illumination of a drawing.

Despite the difficulty inherent in the problem of converting a 2D image into 3D elevation information, the algorithm used is surprisingly effective in a good many cases. The estimation of the 3D elevation map obtained is sufficiently consistent to automatically generate plausible 2D drawing illuminations, as illustrated by the two examples below - obtained in just a few clicks!

Fig. 2.3: Two examples of completely automatic shading of 2D drawings, generated by G’MIC.

Of course, the estimated 3D elevation map does not always match what one might want. Fear not: the filter allows the user to provide “guides” in the form of an additional layer composed of colored lines, giving the algorithm more precise information about the structure of the drawing to be analyzed. The figure below illustrates the usefulness of these guides for illuminating a drawing of a hand (top left); the automatic illumination (top right) does not account for information in the lines of the hand. Including these few lines in an additional layer of “guides” (in red, bottom left) helps the algorithm to illuminate the drawing more satisfactorily.

Fig. 2.4: Using a layer of “guides” to improve the automatic illumination rendering generated by G’MIC.

If we analyze more precisely the differences obtained between estimated 3D elevation maps with and without guides (illustrated below as symmetrical 3D objects), there is no comparison: we go from a very round boxing glove to a much more detailed 3D hand estimation!

Fig. 2.5: Estimated 3D elevations for the preceding drawing of a hand, with and without the use of “guides”.

Finally, note that this filter also has an interactive preview mode, allowing the user to move the light source (with the mouse) and have a preview of the drawing illuminated in real time. By modifying the position parameters of the light source, it is thus possible to obtain the type of animations below in a very short time, which gives a fairly accurate idea of the 3D structure estimated by the algorithm from the original drawing.

Fig. 2.6: Modification of the position of the light source, and the associated illumination renderings, calculated automatically by G’MIC.

A video showing the various possible ways of editing the illumination with this filter can be seen here. We hope this new G’MIC feature will allow artists to accelerate the illumination and shading stage of their future drawings!

3. Stereographic projection

In a completely different genre, we have also added a filter implementing stereographic projection, suitably named “Stereographic projection”. This type of cartographic projection makes it possible to map image data defined on a sphere onto a plane. It is notably the usual projection used to generate images of “mini-planets” from equirectangular panoramas, like the one illustrated in the figure below.

Fig. 3.1: Example of an equirectangular panorama (created by Alexandre Duret-Lutz).

If we launch the G’MIC plugin with this panorama and select the filter “Stereographic projection“, we get:

Fig. 3.2: The “Stereographic projection” filter of G’MIC in action, using the plugin for GIMP or Krita.

The filter allows precise adjustments of the projection center, the rotation angle, and the radius of the sphere, all interactively displayed directly on the preview window (we will come back to this later). In a few clicks, and after applying the filter, we get the desired “mini-planet”:

Fig. 3.3: “Mini-planet” obtained after stereographic projection.

It is also intriguing to note that simply by reversing the vertical axis of the image, we transform a “mini-planet” into a “maxi-tunnel”!

Fig. 3.4: “Maxi-tunnel” obtained by inverting the vertical axis before the stereographic projection.

Again, we made this short video which shows this filter used in practice. Note that G’MIC already had a similar filter (called “Sphere“), which could be used for the creation of “mini-planets”, but with a type of projection less suitable than the stereographic projection now available.

4. Even more possibilities for color manipulation

Manipulating the colors of images is a recurring occupation among photographers and illustrators, and G’MIC already had several dozen filters for this particular activity - grouped in a dedicated category (the originally named “Colors“ category!). This category is still growing, with two new filters having recently appeared:

  • The “CLUT from after-before layers“ filter tries to model the color transformation performed between two images. For example, suppose we have the following pair of images:
Fig. 4.1: Pair of images where an unknown colorimetric transformation has been applied to the top image to obtain the bottom one.

Problem: we do not remember at all how we went from the original image to the modified one, but we would like to apply the same process to another image. Well, no more worries, call G’MIC to the rescue! The filter in question will seek to model, as faithfully as possible, the modification of the colors in the form of a HaldCLUT, which happens to be a classic way to represent any colorimetric transformation.

Fig. 4.2: The filter models the color transformation between two images as a HaldCLUT.

The HaldCLUT generated by the filter can be saved and re-applied on other images, with the desired property that the application of the HaldCLUT on the original image produces the target model image originally used to learn the transformation. From there, we are able to apply an equivalent color change to any other image:

Fig. 4.3: The estimated color transformation, in the form of a HaldCLUT, is re-applied to another image.

This filter makes it possible in the end to create HaldCLUT “by example”, and could therefore interest many photographers (in particular those who distribute compilations of HaldCLUT files, freely or otherwise!).

  • A second color manipulation filter, named “Mixer [PCA]“ was also recently integrated into G’MIC. It acts as a classic color channel mixer, but rather than working in a predefined color space (like sRGB, HSV, Lab…), it acts on the “natural” color space of the input image, obtained by principal component analysis (PCA) of its RGB colors. Thus each image will be associated with a different color space. For example, if we take the “lion” image below and look at the distribution of its colors in the RGB cube (right image), we see that the main axis of color variation is defined by a straight line from dark orange to light beige (axis symbolized by the red arrow in the figure).
Fig. 4.4: Distribution of the colors of the “lion” image in the RGB cube, and the associated main axes (colorized in red, green and blue).

The secondary axis of variation (green arrow) goes from blue to orange, and the tertiary axis (blue arrow) from green to pink. It is these axes of variation (rather than the RGB axes) that will define the color basis used in this channel mix filter.
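
In standard PCA terms (the article does not spell out the computation, so this is the generic recipe), those arrows are the eigenvectors $v_k$ of the covariance matrix of the image’s RGB triplets $c_i$, ordered by decreasing eigenvalue $\lambda_k$:

$$ C = \frac{1}{N}\sum_{i=1}^{N} (c_i - \bar{c})(c_i - \bar{c})^{\top}, \qquad C\, v_k = \lambda_k\, v_k $$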

Fig. 4.5: The “Mixer [PCA]” filter is a channel mixer acting on the axes of “natural” color variation of the image.

It would be wrong to suggest that it is always better to consider the color basis obtained by PCA for the mixing of channels, and this new filter is obviously not intended to be the “ultimate” mixer that would replace all others. It simply exists as an alternative to the usual tools for mixing color channels, an alternative whose results proved to be quite interesting in tests of several images used during the development of this filter. It does no harm to try in any case…

5. Filter mishmash

This section is about a few other filters improved or included lately in G’MIC which deserve to be talked about, without dwelling too much on them.

  • The filter “Local processing” applies a color normalization or equalization process on local image neighborhoods (with possible overlap). This is an additional filter for making details pop in under- or over-exposed photographs, though it may create strong and unpleasant halo artifacts with non-optimal parameters.

    Fig. 5.1: The new filter “Local processing” enhances details and contrast in under- or over-exposed photographs.
  • If you think that the number of layer blending modes available in GIMP or Krita is not enough, and dream about defining your own blending mode formula, then the recent improvement of the G’MIC filter « Blend [standard] » will please you! This filter now gets a new option « Custom formula » allowing the user to specify their own mathematical formula when blending two layers together. All of your blending wishes become possible!

    Fig. 5.2: The “Blend [standard]” filter now allows definition of custom mathematical formulas for layer merging.
  • Also note the complete re-implementation of the nice “Sketch“ filter, which had existed for several years but could be a little slow on large images. The new implementation is much faster, taking advantage of multi-core processing when possible.

    Fig. 5.3: The “Sketch” filter has been re-implemented and now exploits all available compute cores.
  • A large amount of work has also gone into the re-implementation of the “Mandelbrot - Julia sets“ filter, since the navigation interface has been entirely redesigned, making exploration of the Mandelbrot set much more comfortable (as illustrated by this video). New options for choosing colors have also appeared.

    Fig. 5.4: The “Mandelbrot - Julia sets” filter and its new navigation interface in the complex plane.
  • In addition, the “Polygonize [Delaunay]“ filter that generates polygonized renderings of color images has a new rendering mode, using linearly interpolated colors in the Delaunay triangles produced.

    Fig. 5.5: The different rendering modes of the “Polygonize [Delaunay]” filter.

6. Other important highlights

6.1. Improvements of the plug-in

Of course, the new features in G’MIC are not limited to just image processing filters! For instance, a lot of work has been done on the graphical interface of the plug-in G’MIC-Qt for GIMP and Krita:

  • Filters of the plug-in are now allowed to define a new parameter type point(), which displays as a small colored circle over the preview window. The user can drag this circle and move it with the mouse, giving the preview widget a completely new type of user interaction, which is no small thing! A lot of filters now use this feature, making them more pleasant and intuitive to use (look at this video for some examples). The animation below shows, for instance, how these new interactive points have been used in the filter « Stereographic projection » described in the previous section.
Fig. 6.1: The preview window of the G’MIC-Qt plug-in gets new user interaction abilities.
  • In addition, introducing these interactive points has made it possible to improve the split preview modes, available in many filters to display the « before / after » views side by side while setting the filter parameters in the plug-in. It is now possible to move the « before / after » separator, as illustrated by the animation below. Two new splitting modes (« Checkered » and « Inverse checkered ») have also been included alongside it.
Fig. 6.2: The split preview modes now have a movable “before / after” boundary.

A lot of other improvements have been made to the plug-in: support for the most recent versions of GIMP (2.10) and Qt (5.11), improved handling of the error messages displayed over the preview widget, a cleaner interface design, and other small changes under the hood which are not necessarily visible but slightly improve the user experience (e.g. an image cache mechanism for the preview widget). In short, that’s pretty good!

6.2. Improvements in the software core

Some new refinements of the G’MIC computational core have been done recently:

  • The “standard library” of the G’MIC script language was given new commands for computing the inverse hyperbolic functions (acosh, asinh and atanh), as well as a command tsp (travelling salesman problem) which estimates an acceptable solution to the well-known travelling salesman problem, for a point cloud of any size and dimension.

    Fig. 6.3: Estimating the shortest route between hundreds of 2D points, with the G’MIC command tsp.
    Fig. 6.4: Estimating the shortest route between several colors in the RGB cube (thus in 3D), with the G’MIC command tsp.
  • The demonstration window, which appears when gmic is run without any arguments from the command line, has also been redesigned from scratch.

Fig. 6.5: The new demonstration window of gmic, the command-line interface of G’MIC.
  • The embedded JIT compiler used for the evaluation of mathematical expressions has not been left out and was given new functions to draw polygons (function polygon()) and ellipses (function ellipse()) in images. These mathematical expressions can in fact define small programs (with local variables, user-defined functions and control flow). One can for instance easily generate synthetic images from the command line, as shown by the two examples below.

Example 1

$ gmic 400,400,1,3 eval "for (k = 0, k<300, ++k, polygon(3,u([vector10(0),[w,h,w,h,w,h,0.5,255,255,255]])))"

Result:

Fig. 6.6: Using the new function polygon() of the G’MIC JIT compiler to render a synthetic image made of random triangles.

Example 2

$ gmic 400,400,1,3 eval "for (k=0, k<20, ++k, ellipse(w/2,h/2,w/2,w/8,k*360/20,0.1,255))"

Result:

Fig. 6.7: Using the new function ellipse() of the G’MIC JIT compiler to render a synthetic flower image.
  • Note also that NaN values are now better managed in the core’s calculations, meaning G’MIC maintains coherent behavior even when it has been compiled with the optimisation flag -ffast-math. Thus, G’MIC can now be compiled flawlessly with the maximum optimization level -Ofast supported by the g++ compiler, whereas we were restricted to -O3 before. The improvement in computation speed is clearly visible for some of the filters!

6.3. Distribution channels

A lot of changes have also been made to the distribution channels used by the project:

  • First of all, the project web pages (which now use secure HTTPS connections by default) have a new image gallery. This gallery shows both filtered image results from G’MIC and the way to reproduce them (from the command line). Note that these gallery pages are automatically generated by a dedicated G’MIC script, which ensures the displayed command syntax is correct.

    Fig. 6.8: The new image gallery on the G’MIC web site.

This gallery is split into several sections, depending on the type of processing done (Artistic, Black & White, Deformations, Filtering, etc.). The last section « Code sample » is my personal favorite, as it exhibits small animations (shown as looping animated GIFs) which have been completely generated from scratch by short scripts, written in the G’MIC language. Quite a surprising use of G’MIC that shows its potential for generative art.

Fig. 6.9: Two small GIF animations generated by G’MIC scripts, visible in the new image gallery.
  • We have also moved the main git source repository of the project to Framagit, while still keeping a synchronized mirror on GitHub at the same place as before (many developers already have a GitHub account, which makes it easier for them to fork the project and file bug reports).

7. Conclusions and Perspectives

Voilà! Our tour of news (and the last six months of work) on the G’MIC project comes to an end.

We are happy to be celebrating 10 years with the creation and evolution of this Free Software project, and to be able to share with everyone all of these advanced image processing techniques. We hope to continue doing so for many years to come!

Note that next year, we will also be celebrating the 20th anniversary of CImg, the C++ image processing library (started in November 1999) on which the G’MIC project is based, proof that interest in free software is enduring.

As we wait for the next release of G’MIC, don’t hesitate to test the current version. Freely and creatively play with and manipulate your images to your heart’s content!

Thanks to the translators: ChameleonScales and Pat David.

August 27, 2018

Realtek on the LVFS!

For the last week I’ve been working with Realtek engineers adding USB3 hub firmware support to fwupd. We’re still fleshing out all the details, as we also want to update any devices attached to the hub using i2c – which is more important than it first seems. A lot of “multifunction” dongles or USB3 hubs are actually USB3 hubs with other hardware connected internally. We’re going to be working on updating the HDMI converter firmware next, probably just by dropping a quirk file and adding some standard keys to fwupd. This will let us use the same plugin for any hardware that uses the rts54xx chipset as the base.

Realtek have been really helpful and open about the hardware, which is a refreshing difference to a lot of other hardware companies. I’m hopeful we can get the new plugin in fwupd 1.1.2 although supported hardware won’t be available for a few months yet, which also means there’s no panic getting public firmware on the LVFS. It will mean we get a “works out of the box” experience when the new OEM branded dock/dongle hardware starts showing up.

August 24, 2018

Best joke of the year

This joke from Michelle Wolf’s routine at the White House Correspondents’ Dinner has stuck with me. I can’t explain why I love it so much, but it is great:

“Mike Pence is the kind of guy that brushes his teeth and then drinks orange juice and thinks, ‘Mmm.'”

~ Michelle Wolf, April 2018

Making Sure the Debian Kernel is Up To Date

I try to avoid Grub2 on my Linux machines, for reasons I've discussed before. Even if I run it, I usually block it from auto-updating /boot since that tends to overwrite other operating systems. But on a couple of my Debian machines, that has meant needing to notice when a system update has installed a new kernel, so I can update the relevant boot files. Inevitably, I fail to notice, and end up running an out of date kernel.

But didn't Debian use to have a /boot/vmlinuz that always linked to the latest kernel? That was such a good idea: what happened to that?

I'll get to that. But before I found out, I got sidetracked trying to find a way to check whether my kernel was up-to-date, so I could have it warn me of out-of-date kernels when I log in.

That turned out to be fairly easy using uname and a little shell pipery:

# Is the kernel running behind?
kernelvers=$(uname -a | awk '{ print $3; }')
latestvers=$(cd /boot; ls -1 vmlinuz-* | sort --version-sort | tail -1 | sed 's/vmlinuz-//')
if [[ $kernelvers != $latestvers ]]; then
    echo "======= Running kernel $kernelvers but $latestvers is available"
else
    echo "The kernel is up to date"
fi
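
As an aside, uname -r prints just the release string, so the awk over uname -a above can be replaced by it:

# equivalent to: uname -a | awk '{ print $3; }'
kernelvers=$(uname -r)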

I put that in my .login. But meanwhile I discovered that that /boot/vmlinuz link still exists -- it just isn't enabled by default for some strange reason. That, of course, is the right way to make sure you're on the latest kernel, and you can do it with the linux-update-symlinks command.

linux-update-symlinks is called automatically when you install a new kernel -- but by default it updates symlinks in the root directory, /, which isn't much help if you're trying to boot off a separate /boot partition.

But you can configure it to notice your /boot partition. Edit /etc/kernel-img.conf and change link_in_boot to yes:

link_in_boot = yes

Then linux-update-symlinks will automatically update the /boot/vmlinuz link whenever you update the kernel, and whatever bootloader you prefer can point to that image. It also updates /boot/vmlinuz.old to point to the previous kernel in case you can't boot from the new one.
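
After the next kernel package upgrade, a quick way to check that the links are being maintained (the version numbers will of course differ on your system):

# both symlinks should now point at images inside /boot
ls -l /boot/vmlinuz /boot/vmlinuz.old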

August 23, 2018

Fun with SuperIO

While I wait to hear back from NVMe vendors (one already tentatively on board!) I’ve started looking at “embedded controller” devices. The EC on your laptop historically used to just control the PS/2 keyboard and mouse, but now does fan control, power management, UARTs, GPIOs, LEDs, SMBUS, and various tasks the main CPU is too important to care about. Vendors issue firmware updates for this kind of device, but normally wrap up the EC update as part of the “BIOS” update, as the system firmware and EC work together using various ACPI methods. Some vendors do the EC update out-of-band, and so we need to teach fwupd how to query the EC to get the model and version on that specific hardware. The Linux laptop vendor Tuxedo wants to update the EC and system firmware separately using the LVFS, and helpfully loaned me an InfinityBook Pro 13 that was immediately disassembled and connected to all kinds of exotic external programmers. On first impressions the N131WU seems quick, stable and really well designed internally – I’m sure it would get a 10/10 for repairability.

At the moment I’m just concentrating on SuperIO devices from ITE. If you’re interested what SuperIO chip(s) you have on your machine you can either use superiotool from coreboot-utils or sensors-detect from lm_sensors. If you’ve got a SuperIO device from ITE please post what signature, vendor and model machine you have in the comments and I’ll ask if I need any more information from you. I’m especially interested in vendors that use devices with the signature 0x8587, which seems to be a favourite with the Clevo reference board. Thanks!
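
If you're unsure what to run, this is roughly what the probing looks like; the superiotool flags are from memory, so check --help if they don't match your version:

# from coreboot-utils: probe the SuperIO chip and dump its registers verbosely
sudo superiotool -dV
# or let lm_sensors probe interactively for sensor and SuperIO chips
sudo sensors-detect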

Please welcome AKiTiO to the LVFS

Another week, another vendor. This time the vendor is called AKiTiO, a company that makes a large number of very nice Thunderbolt peripherals.

Over the last few weeks AKiTiO added support for the Node and Node Lite devices, and I’m sure there’ll be more in the future. It’s been a pleasure working with their engineers and getting them up to speed with uploading to the LVFS.

In other news, Lenovo also added support for the ThinkPad T460 on the LVFS, so get any updates while they’re hot. If you want to try this you’ll have to enable the lvfs-testing remote either using fwupdmgr enable-remote lvfs-testing or using the sources dialog in recent versions of GNOME Software. More Lenovo updates coming soon, and hopefully even more vendor announcements too.
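
For completeness, the whole flow from the command line looks something like this with a recent fwupd:

# opt in to the testing remote, pull fresh metadata, then apply any updates
fwupdmgr enable-remote lvfs-testing
fwupdmgr refresh
fwupdmgr update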

August 22, 2018

SIGGRAPH 2018 Report

View of downtown Vancouver from the north side of the lake. Here hotels were still affordable (yachts not included :).

My annual trip to SIGGRAPH took me to Vancouver again – a place I’ve started to love a lot. A young and vibrant city with a stunning skyline, and one main drawback: prices have doubled in the past 5 years, Airbnb and hotels alike. Housing a crew for a week here would have cost a fortune. It’s one of the reasons we skipped a booth this year.

As usual it was a great visit – my 20th SIGGRAPH in a row, even. It’s a place where you see old friends once a year. Aside from all the fun, here is a list of notes I made to sketch an impression for you.

Blender Birds of a Feather, Foundation-Community meeting by Ton Roosendaal

Birds of a Feather

Birds of a Feather events were not part of the main schedule booklet this year. Instead there was just one page in the back with a note: “please check online for schedule and locations”. Our event was put in a hotel without any signage to the BoF rooms. It’s clearly being marginalized. I know SIGGRAPH is struggling with the BoF concept. For Open Source projects (about one third of the BoFs) it’s the only way to get an event scheduled there. But there’s also abuse (recruiters, commercial software) and in a sense it became some kind of parallel conference-in-a-conference. I’m curious what the future for this will be.

It’s an old tradition to start the Blender BoF by giving everyone a short moment to introduce themselves, their occupation and what they do with Blender (or want to know). Visitors came again from many different areas, including Netflix, Amazon, NASA, Microsoft and SideFX.

In my talk I used a lot of videos (copied from the great Code Quest collection), so I can’t easily share it as slides… basically I had no real news, with one exception: announcing the “Dev Fund 2.0” project. We are working on giving the Development Fund a major boost, especially by giving the members more recognition. It will go live mid-September!

Ton Roosendaal presenting at the Birds of a Feather

The 2nd BoF was the “Blender Spotlight” – organized by David Andrade. Members of the audience were invited to step forward and show what they do with Blender.
For the evening David had a nice surprise: he invited me to join the Theory Studios boat-cruise dinner!

Note to self: maybe record the BoF’s next time…

Meetings / sessions

  • Met with 3D Artist Magazine – they’ve been covering Blender very well for years. Made sure we’re well lined up for a Blender 2.8 themed issue.
  • Met with the CEO of Original Force studios in China. They’ve been eyeing Blender for a while but haven’t made a move yet. I had two more meetings with him in the days after, also with his CTO. It’s very interesting to hear the point of view of a bigger studio (1400 people) towards Blender. It would probably be best to send a couple of good trainers or artists over to them. Conversations are ongoing.
  • Had a nice talk with fellow giant (2 meters) Jos Stam. He was granted the SIGGRAPH academy membership this year. And – he left Autodesk.
  • Nvidia’s keynote was actually a fun watch. CEO Jensen Huang managed to keep the audience involved for 90 minutes. Best trick he used: center everything around one message, “real-time ray-tracing”.
  • The day after the keynote I had an hour-long meeting with 9 (!) Nvidia product managers/engineers. Needless to say, about only one topic: how do we get OptiX and MDL into Blender. It was a productive meeting with a good outcome. Nvidia still has to sign this off internally, so I can’t say more now. :)
  • Blender will become an official member of the Academy Software Foundation! I met with a Linux Foundation representative who is making it happen for us now.
  • I was on a panel, invited by Jon Peddie to his famous annual luncheon. The topic was “virtual studios”.

And then there was the tradeshow. Blender was prominently present at the AMD booth.

Mike Pan demoing Blender in the AMD booth theater.

Permanent Blender demo station at the AMD booth, right at the show entrance.

  • Met with a representative of Oculus (someone I’ve known for a long time, who switched jobs :). We’re discussing setting up a project around VR authoring together.
  • Had several good meetings with our AMD friends. Aside from hardware seeds (Threadripper 2 has a special thread-balancing issue we need to tackle) we also discussed renewing development support. More to follow.
  • Had two meetings with Intel. This was also to check on some details for the Blender Conference Sponsoring (Intel = Main Sponsor!) but I also wanted to warm them up for joining the Development Fund.
  • Had a really cool meeting with Wacom too. They always support Blender very well, whatever is needed we can ask them. Good tablet support is essential for Blender users, especially with the rise of Grease Pencil and 2D/3D animation tools in Blender. Wacom *loves* Grease Pencil, we will work with them on demo files they can share via their own channels (and demo themselves on shows).
  • Met with a guy who used Blender at Lucas Arts in London, for concept development of Jurassic World. Trying to get him to present it at the Blender Conference.
  • Had a long chat with Neil Trevett of Khronos. They’re actually funding Blender development now (the glTF exporter). He also looks forward to seeing us move to Vulkan – we can make a project plan for it. And he knows that Apple Metal and Vulkan will get a good compatibility library as well. No worries for the Apple users then!

And then I had notes about Sketchfab (free 3D model importer addon), Lulzbot printers (much improved resolution), Adobe Dimension (heavily dominated by Blender artists), Nimble Collective (interested to help us out with animation/rigging tools), Epic Games (interested to support us), but I think it’s best to leave it with this!

Just one closing note. A couple of years ago I found out that Blender was finally being taken seriously in the industry. At this edition of SIGGRAPH, for the first time, I noticed people want to do business with us. A new milestone.

Ton Roosendaal
Chairman Blender Foundation

August 21, 2018

Adventures with NVMe, part 2

A few days ago I asked people to upload their NVMe “cns” data to the LVFS. So far, 908 people did that, and I appreciate each and every submission. I promised I’d share my results, and this is what I’ve found:

Number of vendors implementing the slot 1 read-only “s1ro” factory fallback: 10 – way less than I hoped. Not all is lost: the number of slots in a device (“nfws”) indicates how many different versions of firmware the drive can hold, just like some wireless broadband cards. The idea is that a bad firmware flash means you can “fall back” to an old version that actually works. It was surprising how many drives didn’t have this feature because they only had one slot in total:

I also wanted to know how many firmware versions there were for a specific model (deduping by removing the capacity string in the model); the idea being that if drives with the same model string all had the same version firmware then the vendor wasn’t supplying firmware updates at all, and might be a lost cause, or have perfect firmware. Vendors don’t usually change shipped firmware on NMVe drives for no reason, and so a vendor having multiple versions of firmware for a given model could indicate a problem or enhancement important enough to re-run all the QA checks:

So, not all bad, but we can’t just assume that trying to flash a firmware is a safe thing to do for all drives. The next, much bigger problem was trying to identify which drives should be flashed with a specific firmware. You’d think this would be a simple problem, where the existing firmware version would be stored in the “fr” firmware revision string and the model name would be stored in the “mn” string. Alas, only Lenovo and Apple store a sane semver like 1.2.3; other vendors seem to encode the firmware revision using as-yet-unknown methods. Unhelpfully, the model name alone isn’t all we need to identify the firmware to flash, as different drives can have different firmware for the laptop OEM without changing the mn or fr. For this I think we need to look into the elusive “vs” vendor-defined block, which was the reason I was asking for the binary dump of the CNS rather than the nvme -H or nvme -o json output. The vendor block isn’t formally defined as part of the NVMe specification, and the ODM (and maybe the OEM?) can use it however they want.
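
If you want to eyeball these fields on your own drive, nvme-cli exposes them; /dev/nvme0 below is an assumption, substitute your controller:

# human-readable identify-controller data: fr, mn, slot flags and friends
sudo nvme id-ctrl /dev/nvme0 -H
# the raw 4096-byte identify blob, including the vendor-specific "vs" area
sudo nvme id-ctrl /dev/nvme0 --raw-binary > nvme-cns.bin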

Only 137 out of the supplied ~650 NVMe CNS blobs contained vendor data. SK hynix drives contain an interesting-looking string of something like KX0WMJ6KS0760T6G01H0, but I have no idea how to parse that. Seagate has simply 2002. Liteon has a string like TW01345GLOH006BN05SXA04. Some Samsung drives have things like KR0N5WKK0184166K007HB0 and CN08Y4V9SSX0087702TSA0 – the same format as Toshiba CN08D5HTTBE006BEC2K1A0 but it’s weird that the blob is all ASCII – I was somewhat hoping for a packed GUID in the sea of NULs. They do have some common sub-sections, so if you know what these are please let me know!

I’ve built a fwupd plugin that should be able to update firmware on NVMe drives, but it’s 100% untested. I’m going to use the leftover donation money for the LVFS to buy various types of NVMe hardware that I can flash with different firmware images and not cry if all the data gets wiped or the device gets bricked. I’ve already emailed my contact at Samsung and fingers crossed something nice happens. I’ll do the same with Toshiba and Lenovo next week. I’ll also update this blog post next week with the latest numbers, so if you upload your data now it’s still useful.

August 20, 2018

security things in Linux v4.18

Previously: v4.17.

Linux kernel v4.18 was released last week. Here are details on some of the security things I found interesting:

allocation overflow detection helpers
One of the many ways C can be dangerous to use is that it lacks strong primitives to deal with arithmetic overflow. A developer can’t just wrap a series of calculations in a try/catch block to trap any calculations that might overflow (or underflow). Instead, C will happily wrap values back around, causing all kinds of flaws. Some time ago GCC added a set of single-operation helpers that will efficiently detect overflow, so Rasmus Villemoes suggested implementing these (with fallbacks) in the kernel. While it still requires explicit use by developers, it’s much more fool-proof than doing open-coded type-sensitive bounds checking before every calculation. As a first-use of these routines, Matthew Wilcox created wrappers for common size calculations, mainly for use during memory allocations.

removing open-coded multiplication from memory allocation arguments
A common flaw in the kernel is integer overflow during memory allocation size calculations. As mentioned above, C doesn’t provide much in the way of protection, so it’s on the developer to get it right. In an effort to reduce the frequency of these bugs, and inspired by a couple flaws found by Silvio Cesare, I did a first-pass sweep of the kernel to move from open-coded multiplications during memory allocations into either their 2-factor API counterparts (e.g. kmalloc(a * b, GFP...) -> kmalloc_array(a, b, GFP...)), or to use the new overflow-checking helpers (e.g. vmalloc(a * b) -> vmalloc(array_size(a, b))). There’s still lots more work to be done here, since frequently an allocation size will be calculated earlier in a variable rather than in the allocation arguments, and overflows happen in way more places than just memory allocation. Better yet would be to have exceptions raised on overflows where no wrap-around was expected (e.g. Emese Revfy’s size_overflow GCC plugin).

Variable Length Array removals, part 2
As discussed previously, VLAs continue to get removed from the kernel. For v4.18, we continued to get help from a bunch of lovely folks: Andreas Christoforou, Antoine Tenart, Chris Wilson, Gustavo A. R. Silva, Kyle Spiers, Laura Abbott, Salvatore Mesoraca, Stephan Wahren, Thomas Gleixner, Tobin C. Harding, and Tycho Andersen. Almost all the rest of the VLA removals have been queued for v4.19, but it looks like the very last of them (deep in the crypto subsystem) won’t land until v4.20. I’m so looking forward to being able to add -Wvla globally to the kernel build so we can be free from the classes of flaws that VLAs enable, like stack exhaustion and stack guard page jumping. Eliminating VLAs also simplifies the porting work of the stackleak GCC plugin from grsecurity, since it no longer has to hook and check VLA creation.

Kconfig compiler detection
While not strictly a security thing, Masahiro Yamada made giant improvements to the kernel’s Kconfig subsystem so that kernel build configuration now knows what compiler you’re using (among other things) so that configuration is no longer separate from the compiler features. For example, in the past, one could select CONFIG_CC_STACKPROTECTOR_STRONG even if the compiler didn’t support it, and later the build would fail. Or in other cases, configurations would silently down-grade to what was available, potentially leading to confusing kernel images where the compiler would change the meaning of a configuration. Going forward now, configurations that aren’t available to the compiler will simply be unselectable in Kconfig. This makes configuration much more consistent, though in some cases, it makes it harder to discover why some configuration is missing (e.g. CONFIG_GCC_PLUGINS no longer gives you a hint about needing to install the plugin development packages).

That’s it for now! Please let me know if you think I missed anything. Stay tuned for v4.19; the merge window is open. :)

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Interview with Margarita Gadrat

Could you tell us something about yourself?

Hello! My name is Margarita Gadrat, I was born in Russia and live in France. Drawing has been my favourite activity since childhood. After some years working as a graphic designer in different companies, I decided to follow my dream, and now I’m a freelance illustrator and graphic designer.

Do you paint professionally, as a hobby artist, or both?

Both. Personal paintings for experimenting and improving my technique. Professionally, I’m open to new, interesting projects. There is still a lot to learn and it is so much fun!

What genre(s) do you work in?

I like painting nature-inspired subjects – landscapes, cute animals – and mysterious, dark atmospheres.

Whose work inspires you most — who are your role models as an artist?

Ketka, Ruan Jia, Hellstern, Pete Mohrbacher… I couldn’t put all of them 🙂

I love the works of the classical masters too – Sargent, Turner, Ivan Shishkin, Diego Velasquez, Bosch. Aivazovsky’s seas are stunning! And the Pre-Raphaelites’ art has a magical aura.

How and when did you get to try digital painting for the first time?

Ten years ago my husband gave me a Wacom tablet. After trying it in Photoshop, I was impressed.

What makes you choose digital over traditional painting?

All the creative possibilities you have without buying oils or watercolours. No need to clean your table and materials afterwards! Also, you can easily correct details with filters and by hitting Ctrl-Z 😉 You can work fast too, thanks to useful tools: selections, transform tools…

How did you find out about Krita?

My husband, who is into FOSS, told me about Krita.

What was your first impression?

Whoa, it’s so fluid and comfortable! Coming from Photoshop, I wasn’t lost with the general concepts (layers, filters, blending, masks…), but I had to take time to understand how it was organized in Krita.

What do you love about Krita?

All those features that help the work process: drawing assistants for perspective, the new reference tool where you can easily arrange your references and pin them to your canvas, the freedom of the brush presets. And working with layers in the non-destructive way I love so much. The animation section is great too.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Nothing really annoys me. Krita is an awesome and complete piece of software! Maybe a couple of little things, but I don’t really use those features. Like the text tool, which is now getting better and better. And I’d like to be able to move the selection shape not while selecting, but after it is selected.

What sets Krita apart from the other tools that you use?

Krita is a really rich piece of software. You can imitate traditional materials, but also experiment with blending to create original results. It allows for a fast, high-quality workflow.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

“Lake house”

This work combines architecture and nature, it was a nice challenge working on the design of the house and the composition.

What techniques and brushes did you use in it?

I painted in Krita with greyscale values, mostly with the default round brush. The default blending brushes make for smooth values. After that, I colorized it with color layers and adjusted the levels with a filter layer.

Where can people see more of your work?

DeviantArt: https://www.deviantart.com/darkmagou/gallery/
My personal site with the illustration and graphic design works (in French): https://www.margarita-gadrat.xyz/book
Dribbble: https://dribbble.com/mgadrat
Behance: https://www.behance.net/mgadrat

Anything else you’d like to share?

Thank you for Krita, it’s a wonderful program, working on all the platforms, free, open source and constantly including new features!

August 18, 2018

GIMP 2.10.6 Released

Almost four months have passed since the GIMP 2.10.0 release, and this is already the fourth version in the series, bringing you bug fixes, optimizations, and new features.

The most notable changes are listed below (see also the NEWS file).

Main changes

Vertical text layers

GIMP finally gets support for vertical text (top-to-bottom writing)! This is a particularly anticipated feature for several East-Asian writing systems, but also for anyone wishing to design fancy vertical text.

Vertical text in GIMP 2.10.6.

To cover these use cases, GIMP provides several variants of vertical text, with mixed orientation (as is typical in East-Asian vertical writing) or upright orientation (more common for Western vertical writing), and with either right-to-left or left-to-right columns.

Thanks to Yoshio Ono for the vertical text implementation!

New filters

Two new filters make an entrance in this release:

Little Planet

This new filter is built on top of the pre-existing gegl:stereographic-projection operation and is fine-tuned to create “little planets” from 360×180° equirectangular panorama images.

Little Planet filter in GIMP 2.10.6.
Image on canvas: Luftbild Panorama der Isar bei Ettling in Niederbayern, by Simon Waldherr, (CC by-sa 4.0).
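
Incidentally, since the underlying operation is named above, the same effect can be scripted outside GIMP with GEGL's C API. A rough, untested sketch (the file names are placeholders and the projection parameters are left at their defaults):

#include <gegl.h>

int main(int argc, char **argv)
{
        gegl_init(&argc, &argv);

        GeglNode *graph = gegl_node_new();
        GeglNode *load = gegl_node_new_child(graph,
                "operation", "gegl:load", "path", "panorama.jpg", NULL);
        GeglNode *planet = gegl_node_new_child(graph,
                "operation", "gegl:stereographic-projection", NULL);
        GeglNode *save = gegl_node_new_child(graph,
                "operation", "gegl:png-save", "path", "planet.png", NULL);

        /* load -> project -> save */
        gegl_node_link_many(load, planet, save, NULL);
        gegl_node_process(save);

        g_object_unref(graph);
        gegl_exit();
        return 0;
}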

Long Shadow

This new GEGL-based filter simplifies creating long shadows in several visual styles.

There is a handful of configurable options, all helping you cut extra steps on the way to the desired effect.

The feature was contributed by Ell.

Improved straightening in the Measure tool

A lot of people appreciated the new Horizon Straightening feature added in GIMP 2.10.4. Yet many of you wanted vertical straightening as well. This is now possible.

Vertical straightening in GIMP 2.10.6.
Image on canvas: View of the western enclosing wall of the Great Mosque of Kairouan, by Moumou82, (CC by-sa 2.0).

In the Auto mode (default), Straighten will snap to the smaller angle when deciding between vertical and horizontal straightening. You can override this behavior by specifying explicitly which it should be.

Optimized drawable preview rendering

Most creators working on complex projects in GIMP have had bad days when there are many layers in a large image, and GIMP can’t keep up with scrolling the layers list or showing/hiding layers.

Part of the reason was that GIMP couldn’t update the user interface until it was done rendering layer previews. Ell again did some miracles here by having most drawable previews render asynchronously.

For now, the only exception to that is layer groups. Rendering them asynchronously is still not possible, so until we deal with this too, we made it possible for you to disable rendering layer group previews completely. Head over to Preferences > Interface and untick the respective checkbox.

Disable preview of layer groups in GIMP 2.10.6.

One more thing to mention here. For technically-minded users, the Dashboard dockable dialog (introduced in GIMP 2.10.0) now displays the number of async operations running in the Misc group.

A new localization: Marathi

GIMP was already available in 80 languages. Well, it’s 81 languages now!

A team from the North Maharashtra University, Jalgaon, worked on a Marathi translation and contributed a nearly full translation of GIMP.

Of course, we should not forget all the other translators who do wonderful work on GIMP. In this release, 13 other translations were updated: Brazilian Portuguese, Dutch, French, German, Greek, Italian, Latvian, Polish, Romanian, Russian, Slovenian, Spanish, and Swedish.

Thanks everyone!

Marathi translation in GIMP 2.10.6.

File dialog filtering simplified

A common cause of confusion in the file dialogs (opening, saving, exporting…) was the presence of two file format lists, one for displaying files with a specific extension, the other for the actual file format choice. So we streamlined this.

There is just one list available now, and it works as both the filter for displayed images and the file format selector for the image you are about to save or export.

File dialog in GIMP 2.10.6.

Additionally, a new checkbox allows you to display the full list of files, regardless of the currently chosen file format. This could be useful when you want to enforce an unusual file extension or reuse an existing file’s name by choosing it in the list and then appending your extension.

The end of DLL hell? A note to plug-in developers…

A major problem over the years, on Windows, was what developers call the DLL hell. This was mostly caused either by third-party software installing libraries in system folders or by third-party plug-ins installing themselves with shared libraries interfering with other plug-ins.

The former had already been mostly fixed by tweaking the DLL search priority order. This release provides an additional fix by taking into account 32-bit plug-ins running on 64-bit Windows systems (WoW64 mode).

The latter has been fixed since GIMP 2.10.0, provided you install each plug-in in its own directory (which is not compulsory yet, but will be in GIMP 3).

E.g. if you have a plug-in named myplugin.exe, please install it as plug-ins/myplugin/myplugin.exe. This way, not only will you avoid polluting other plug-ins with any libraries you bundle, but your plug-in also won’t be prevented from running by someone else’s unwanted libraries. All our core plug-ins are now installed this way, and any third-party plug-ins should be as well.

Ongoing Development

Prepare for the space invasion!

Meanwhile, taking place simultaneously on the babl, GEGL, and GIMP 2.99 fronts, pippin and Mitch embarked on a project internally nicknamed the “space invasion”, the end goal of which is to simplify and improve color management in GIMP, as well as in other GEGL-based projects.

Mutant goats from outer space, soon landing in GIMP (ongoing development).

About a year ago, babl, the library used by GIMP and GEGL to perform color conversions, gained the ability to tie arbitrary RGB color spaces to existing pixel formats. This, in turn, allowed GIMP to start using babl for performing conversions between certain classes of color profiles, instead of relying solely on the LCMS library, greatly improving performance. However, these conversions would only take place at the edges between the internal image representation used by GIMP, and the outside world; internally, the actual color profile of the image had limited effect, leading to inconsistent or incorrect results for certain image-processing operations.

The current effort seeks to change that, by having all image data carry around the information regarding its color profile internally. When properly handled by GEGL and GIMP, this allows babl to perform the right conversions at the right time, letting all image-processing operations be applied in the correct color space.
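
For a sense of what this means at the API level, here is a minimal conversion using babl's long-standing public interface (standard babl calls; what the “space invasion” adds is that formats can also carry a specific RGB space, letting this same machinery do profile-correct conversions):

#include <babl/babl.h>

int main(void)
{
        babl_init();

        /* A "fish" is babl's conversion object between two pixel
         * formats; when formats carry a color space, the fish also
         * performs the space-to-space conversion. */
        const Babl *fish = babl_fish(babl_format("R'G'B'A u8"),
                                     babl_format("RGBA float"));

        unsigned char in[4] = { 255, 128, 0, 255 };
        float out[4];

        babl_process(fish, in, out, 1);   /* convert one pixel */

        babl_exit();
        return 0;
}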

While the ongoing work toward this goal is already available in the mainline babl and GEGL versions, we are currently restricting it to the GIMP 2.99 development version (to become GIMP 3.0), but it will most likely make its way into a future GIMP 2.10.x release.

GIMP extensions

Lastly, Jehan, from the ZeMarmot project, has been working on extensions in GIMP. An extension could be anything from plug-ins to splash images, patterns, brushes, gradients… basically anything which could be created and added by anyone. The end goal is to allow creators of such extensions to upload them to public repositories, and for anyone to search and install them in a few clicks, with version management, updates, etc.

Extension manager in future GIMP (ongoing development).

This work is also only in the development branch for the time being, but should make it to a GIMP 2.10.x release at some point in the future as well.

Helping development

Keep in mind that pippin and Jehan are able to work on GEGL and GIMP thanks to crowdfunding and the support of the community. Every little bit helps to support their work and helps to make GIMP/GEGL even more awesome! If you have a moment, check out their support pages:

Most importantly: have fun with GIMP everyone!

Font sidecars: dream or nightmare?

At DebConf this year, I gave a talk about font-management on desktop Linux that focused, for the most part, on the importance of making font metadata accessible to users. That’s because I believe that working with fonts and managing a font collection are done primarily through the metadata — like other types of media content (audio files, video files, even still images, although most people fall back to looking at thumbnails on the latter). For example, you need to know the language support of a font, what features it offers (either OpenType or variable-font options), and you mentally filter fonts based on categories (“sans serif”, “handwriting”, “Garamond”, tags that you’ve assigned yourself, etc.).

That talk was aimed at Debian font-package maintainers, who voluntarily undertake the responsibility of all kinds of QA and testing for the fonts that end users get on their system, including whether or not the needed metadata in the font binaries is present and is correct. It doesn’t do much good to have a fancy font manager that captures foundry and designer information if those fields are empty in 100% of the fonts, after all.

At the end of the session, a question came up from the audience: what do we do about fonts that we can’t alter (even to augment metadata) from upstream? This is a surprisingly common problem. There are a lot of fonts in the Debian archive that have “if you change the font, you must change the name” licenses. Having to change the user-visible font name (which is what these licenses mean) defeats the purpose of filling in missing metadata. So someone asked whether or not a Debian package could, instead, ship a metadata “sidecar” file, much like photo editors use for camera-raw image files (which need to remain unmodified for archival purposes). Darktable, Digikam, RawTherapee, Raw Studio, and others do this.

It’s not a bad idea, of course. Anecdotally, I think a lot of desktop Linux users are unaware of some of the genuinely high-quality fonts available to them that just happen to be in older binary formats, like Adobe Utopia. The question is what format a metadata sidecar file would use.

The wonderful world of sides of cars

Photo editors use XMP, the Extensible Metadata Platform, created by Adobe. It’s XML-based. So it could be adapted for fonts; all that’s needed would be for someone to draw up a suitable XML namespace (or is it schema? Maybe both?). XML is mildly human-readable, so the file might be enlightening early-evening reading in addition to being consumable by software. But it’s not trivial, and utilizing it would require XML support in all the relevant font utilities. And implementing new support for that new namespace/schema. I’d offhandedly give it a 4 out of 10 for solving-the-problemhood.

On the other hand … almost all font-binary internals are already representable in another XML format, TTX, which is defined as part of the free-software FontTools project. There are a few metadata properties that aren’t directly included in OpenType (and, for the record, OpenType wraps both TrueType and PostScript Type 1, so it’s maximal), but a lot of those are either “derived” properties (such as is-this-a-color-font) or might be better represented at a package level (such as license details). TTX has the advantage of already being directly usable by just about every font-engineering tool here on planet Earth.

In fact, theoretically, you could ship a TTX sidecar file that a simple script could use to convert the Adobe Utopia Type-1 file into a modernized OpenType file. For licensing reasons, the package installer probably cannot do that on the user’s behalf, and the package-build tool certainly could not, because it would trigger the renaming clause. But you could leave it as a fun option for the user to run, or build it into a font manager (or a office-application extension).

Anyway, the big drawback to TTX as a sidecar is that creating hand-crafted TTX files is not easy. I know it’s been done at least once, but the people involved don’t talk about it as a personal high point. In addition, TTX isn’t built to handle non-font-binary content. You could stuff extra metadata into unclaimed high-order name table slots, but roundtripping those is problematic. But for the software-integration features, I’d call it at least a 8 out of 10 possibility.

Another existing metadata XML option is the metadata block already defined by the W3C for WOFF, the compressed web-font-delivery file format. “But wait,” you cry, leaping out of your chair in excitement as USB cables and breakfast cereal go flying, “if it’s already defined, isn’t the problem already solved?” Well, not exactly. The schema is minimal in scope: it’s intended to serve the WOFF-delivery mechanism only. So it doesn’t cover all of the properties you’d need for a desktop font database. Furthermore, the metadata block is a component of the WOFF binary, not a sidecar, so it’s not fully the right format. That said, it is possible that it could be extended (possibly with the W3C on board). There is a per-vendor “extension” element defined. For widespread usage, you’d probably want to just define additional elements. Here again, the drawback would be lack of tool support, but it’s certainly more complete than DIY XML would ever be. 6 out of 10, easily.

There are also text-based formats that already exist and cover some of what is desired in a sidecar. For example, the FEA format (also by Adobe) is an intentionally human-writeable format geared toward writing OpenType feature definitions (think: ligature-substitution rules, but way more flexible. You can do chaining rules that match pre-expressions and post-expressions, and so on). FEA includes formal support for (as far as I can tell) all the currently used name table metadata in OpenType, and there is an “anonymous” block type you could use to embed other properties. Tool support is stellar; FEA usage is industry standard these days.

That having been said, I’m not sure how widespread support for the name table and anonymous stuff is in software. Most people use FEA to write GSUB and GPOS rules—and nothing else. Furthermore, FEA is designed to be compiled into a font, and done so when building the font from source. So going from FEA to a metadata database is a different path altogether, and the GSUB-orientation of the format means it’s kind of a hack. If the tool support is good and people agreed on how to stuff things into anonymous blocks, call it 5.5 out of 10.

Finally, there’s the metadata sidecars used by Google Fonts. These are per-font-family, and quite lightweight. The format is actually plain-text Google protobuf, although the Google Fonts repository has them all using the binary file extension, *.pb. Also, the Google Fonts docs are out of date: they reference the old approach, which used JSON. While this format is quite easy to read, write, and do something with, at the moment it is also minimal in scope: basic font and family name information, weight & style, copyright. That’s it. Certainly the extension mechanism is easy to adopt, however; this is trivial “key”:”value” stuff. And there are a lot of tools capable of reading protobuf and similar structured formats; very database-friendly. 6 out of 10 or so.
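
To show just how lightweight that is, here is what a hypothetical font-metadata sidecar in the same plain-text protobuf style could look like. The field names are invented for illustration; this is not the actual Google Fonts schema:

name: "Utopia"
designer: "Robert Slimbach"
foundry: "Adobe"
license: "TUG Utopia License"
category: "SERIF"
fonts {
  style: "normal"
  weight: 400
  filename: "Utopia-Regular.pfb"
}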

Choose a side

So where does that leave us? Easy answer: I don’t know. If I were packaging fonts today and wanting to simply record missing metadata, unsure of what would happen to it down the line, I might do it in protobuf format—for the sake of simplicity. It’s certainly feasible to imagine that protobuf metadata files could be converted to TTX or to FEA for consumption by existing tools, after all.

If I were writing a sidecar specifically to use for the Utopia example from above, as a shortcut to turn an old Type-1 font into an OpenType font with updated metadata, I would do it in TTX, since I know that would work. But that’s not ultimately the driving metadata-sidecar question as raised in the audience Q&A after my talk. That was about filling in a desktop-font-metadata database. Which, at the moment, does not exist.

IF a desktop-font-metadata plan crystallizes and IF relying on an XML format works for everyone involved, it’d certainly be less effort to rely on TTX than anything else.

To get right down to it, though, all of this metadata is meant to be the human-readable kind. It’d probably be easier to forgo the pain of writing or extending an XML namespace (however much you love the W3C) and keep the sidecar simple. If it proves useful, at least it’s simpler for other programs and libraries to read it and, perhaps, transform it into the other formats that utilities and build tools can do more interesting things with. Or vice-versa, like fontmake taking anonymous FEA blocks and plopping them into a protobuf file when building.

All that having been said, I fully expect the legions of XML fans (fxans?) to rush the stage and cheer for their favorite option….

August 17, 2018

Easy DIY Cellphone Stand

Over the years I've picked up a couple of cellphone stands as conference giveaways. A stand is a nice idea, especially if you like to read news articles during mealtime, but the stands I've tried never seem to be quite what I want. Either they're not adjustable, or they're too bulky or heavy to carry around all the time.

A while back, I was browsing on eBay looking for something better than the ones I have. I saw a few that looked like they might be worth trying, but then it occurred to me: I could make one pretty easily that would work better than anything I'd found for sale.

I started with plans that involved wire and a hinge -- the hinge so the two sides of the stand would fold together to fit in a purse or pocket -- and spent a few hours trying different hinge options. I wasn't satisfied, though. And then I realized: all I had to do was bend the wire into the shape I needed. Voilà -- instant lightweight adjustable cellphone stand.

And it has worked great. I've been using it for months and it's much better than any of the prefab stands I had before.

Bend a piece of wire

[Bent wire]

I don't know where this wire came from: it was in my spare-metal-parts collection. You want something a little thinner than coathanger wire, so you can bend it relatively easily; "baling wire" or "rebar wire" is probably about right.

Bend the tips around

[Tips adjusted to fit your phone's width]

Adjust the curve so it's big enough that your cellphone will fit in the crook of the wires.

Bend the back end down, and spread the two halves apart

[Bend the back end down]

Adjust so it fits your phone

[Instant cellphone stand]

Coat the finished stand with rubberized coating (available at your local hardware store in either dip or spray-on varieties) so it won't slide around on tables and won't scratch anything. The finished product is adjustable to any angle you need -- so you can adjust it based on the lighting in any room -- and you can fold the two halves together to make it easy to carry.

NVMe Firmware: I Need Your Data

In a recent Google Plus post I asked what kind of hardware was most interesting to focus on next. UEFI updating is now working well with a large number of vendors, and the LVFS “onboarding” process is well established now. On that topic we’ll hopefully have some more announcements soon. Anyway, back to the topic in hand: The overwhelming result from the poll was that people wanted NVMe hardware supported, so that you can trivially update the firmware of your SSD. Firmware updates for SSDs are important, as most either address data consistency issues or provide nice performance fixes.

Unfortunately there needs to be some plumbing put in place first, so don’t expect anything awesome very fast. The NVMe ecosystem is pretty new, and things like “what version number firmware am I running now” and “is this firmware OEM firmware or retail firmware” are still queried using vendor-specific extensions. I only have two devices to test with (Lenovo P50 and Dell XPS 13) and so I’m asking for some help with data collection. Primarily I’m trying to find out what NVMe hardware people are actually using, so I can approach the most popular vendors first (via the existing OEMs). I’m also going to be looking at the firmware revision string that each vendor sets to find quirks we need — for instance, Toshiba encodes MODEL VENDOR, and everyone else specifies VENDOR MODEL. Some drives contain the vendor data with a GUID, some don’t; I have no idea of the relative numbers or how many different formats there are. I’d also like to know how many firmware slots the average SSD has, and the percentage of drives that have a protected slot 1 firmware. This all lets us work out how safe it would be to attempt a new firmware update on specific hardware — the very last thing we want to do is brick an expensive new NVMe SSD with all your data on it.

So, here’s what I would like you to do. You don’t need to reboot, unmount any filesystems or anything like that. Just:

  1. Install nvme-cli (e.g. dnf install nvme-cli, or build it from source)
  2. Run the following command:
    sudo nvme id-ctrl --raw-binary /dev/nvme0 > /tmp/id-ctrl
    
  3. If that worked, run the following command:
    curl -F type=nvme \
        -F "machine_id="`cat /etc/machine-id` \
        -F file=@/tmp/id-ctrl \
        https://staging.fwupd.org/lvfs/upload_hwinfo

If you’re not sure whether you have an NVMe drive, you can check with the nvme command above. The command isn’t doing anything to the firmware; it’s just asking the NVMe drive to report what it knows about itself. It should be 100% safe; the kernel already made the same request at system startup.

We are sending your random machine ID to ensure we don’t record duplicate submissions — if that makes you unhappy for some reason, just choose some other 32-character hex string. The binary file created by nvme contains the encoded model number and serial number of your drive; if this makes you uneasy, please don’t send the file.
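
If you are curious about what is in the blob before you send it, the interesting fields sit at fixed offsets in the 4096-byte Identify Controller structure, so a few lines of C are enough to peek at them. A rough sketch that only reads the file you already created (the offsets follow my reading of the NVMe spec, so treat them as assumptions rather than gospel):

#include <stdio.h>
#include <string.h>

int main(void)
{
        unsigned char id[4096];
        FILE *f = fopen("/tmp/id-ctrl", "rb");

        if (!f || fread(id, 1, sizeof id, f) != sizeof id) {
                fprintf(stderr, "couldn't read 4096-byte id-ctrl blob\n");
                return 1;
        }
        fclose(f);

        /* ASCII fields are space-padded, not NUL-terminated. */
        char sn[21] = { 0 }, mn[41] = { 0 }, fr[9] = { 0 };
        memcpy(sn, id + 4, 20);    /* serial number (bytes 4..23)      */
        memcpy(mn, id + 24, 40);   /* model number (bytes 24..63)      */
        memcpy(fr, id + 64, 8);    /* firmware revision (bytes 64..71) */

        unsigned frmw = id[260];   /* firmware-update capabilities     */
        printf("model: %s\nserial: %s\nfirmware: %s\n", mn, sn, fr);
        printf("slots: %u, slot 1 read-only: %s\n",
               (frmw >> 1) & 0x7, (frmw & 1) ? "yes" : "no");
        return 0;
}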

Many thanks, and needless to say I’ll be posting some stats here when I’ve got enough submissions to be statistically valid.

August 12, 2018

Prevent a Linux/systemd System from Auto-Sleeping

About three weeks ago, a Debian (testing) update made a significant change on my system: it added a 30-minute suspend timeout. If I left the machine unattended for half an hour, it would automatically go to sleep.

What's wrong with that? you ask. Why not just let it sleep if you're going to be away from it that long?

But sometimes there's a reason to leave it running. For instance, I might want to track an ongoing discussion in IRC, and occasionally come back to check in. Or, more importantly, there might be a long-running job that doesn't require user input, like a system backup, ripping a CD, or testing a web service. None of those count as "activity" to keep the system awake: only mouse and keyboard motions count.

There are lots of pages that point to the file /etc/systemd/logind.conf, where you can find commented-out lines like

#IdleAction=ignore
#IdleActionSec=30min

The comment at the top of the file says that these are the defaults and references the logind.conf man page. Indeed, man logind.conf says that setting IdleAction=ignore should prevent anything from happening, and that setting IdleActionSec=120min should lead to a longer delay.

Alas, neither is true. This file is completely ignored as far as I can tell, and I don't know why it's there, or where the 30 minute setting is coming from.

What actually did work was in Debian's Suspend wiki page. I was skeptical since the page hasn't been updated since Debian Jessie (Stretch, the successor to Jessie, has been out for more than a year now) and the change I was seeing just happened in the last month. I was also leery because the only advice it gives is "For systems which should never attempt any type of suspension, these targets can be disabled at the systemd level". I do suspend my system, frequently; I just don't want it to happen unless I tell it to, or with a much longer timeout than 30 minutes.

But it turns out the command it suggests does work:

sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target

and it doesn't disable suspending entirely: I can still suspend manually; it just disables autosuspend. So that's good enough.

Be warned: the page then says:

Then run systemctl restart systemd-logind.service or reboot.

It neglects to mention that restarting systemd-logind.service will kill your X session, so don't run that command if you're in the middle of anything.

It would be nice to know where the 30-minute timeout had been coming from, so I could enable it after, say, 90 or 120 minutes. A timeout sounds like a good thing, if it's something the user can configure. But like so many systemd functions, no one who writes documentation seems to know how it actually works, and those who know aren't telling.

August 11, 2018

All font metadata, all the time

A recurring obstacle within the problem of “making fonts work better for desktop Linux users” (see previous posts for context) is how much metadata is involved and, thus, needs to be shuffled around so that it can be exposed to the user or accessed by system utilities. People hunting for a font to install need to see metadata about it in the package manager / software center; people selecting a font from their collection need to see metadata about it in the application they’re using (and within the font-selection dialogs, if not elsewhere in the app’s UI). The computing environment needs to be aware of metadata to set up the correct rendering options and to pass along the right information to other components above and below in the stack.

As it stands now, there are several components on a GNOME-based system that already do track metadata about fonts: Pango, Fontconfig, AppStream, HarfBuzz, GTK+, etc. There are also a ton of metadata fields already defined in the OpenType font format specification. Out of curiosity, I decided to dig into the various data structures and see which properties are tracked where.

I put the results into a Markdown table as a GitLab “snippet” (a.k.a., gist…) and posted it as this font metadata cross-reference chart. For comparison’s sake, I also included the internal data structures of the old Fontmatrix font manager (which a lot of folks still reference as a high-water mark) and the … shall we say “terse” … font ontology defined in Nepomuk. I basically used the OpenType spec as the reference point, although some of the metadata in that column (the left-most) is more of a “derived property” than an actual field. Like “is this font a color font” — you might determine that by looking for one of the four possible color font tables (sbix, CBDT, SVG, or COLR), but there’s not a “color included” bit, exactly.
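
To make the “derived property” idea concrete, here is one plausible way to compute color-font-ness with FreeType: probe for any of the four color tables and call it a day (a sketch with minimal error handling; whether mere table presence is a sufficient signal is itself an assumption):

#include <stdio.h>
#include <ft2build.h>
#include FT_FREETYPE_H
#include FT_TRUETYPE_TABLES_H

/* A font is (probably) a color font if it carries any of the four
 * color table types: sbix, CBDT, SVG, or COLR. */
static int is_color_font(FT_Face face)
{
        FT_ULong tags[4] = {
                FT_MAKE_TAG('s','b','i','x'), FT_MAKE_TAG('C','B','D','T'),
                FT_MAKE_TAG('S','V','G',' '), FT_MAKE_TAG('C','O','L','R'),
        };
        int i;

        for (i = 0; i < 4; i++) {
                FT_ULong len = 0;
                /* NULL buffer: just ask whether the table exists. */
                if (FT_Load_Sfnt_Table(face, tags[i], 0, NULL, &len) == 0)
                        return 1;
        }
        return 0;
}

int main(int argc, char **argv)
{
        FT_Library lib;
        FT_Face face;

        if (argc < 2 || FT_Init_FreeType(&lib) ||
            FT_New_Face(lib, argv[1], 0, &face)) {
                fprintf(stderr, "usage: %s font-file\n", argv[0]);
                return 1;
        }
        printf("%s: %s\n", argv[1],
               is_color_font(face) ? "color font" : "not a color font");
        FT_Done_Face(face);
        FT_Done_FreeType(lib);
        return 0;
}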

There are just over 50 properties, which is a soft number because it includes a lot of not-quite aligned items, such as where OpenType stores a separate field for the font vendor name and the font vendor’s URL, and some properties that are overly general in one or more implementations. That, and it includes several of the derived properties mentioned just above.

You’re welcome to suggest additions. I did focus on the user-visible metadata world, which means the table excludes some system-oriented properties like the em size and baseline height. My interest is primarily in things that directly tie in to how users select fonts for use. I’m definitely not averse to pull requests to add that system stuff to the table, though. That’s why I put it on a GitLab snipgist — personally, I find Markdown tables awful to work with. But I digress.

No idea where this goes from here. There was some talk around the Fonts BoF at GUADEC this past July about whether or not GNOME ought to cache font metadata in some sort of user-local database. My recollection is that this would be helpful in several places; certainly as it stands now, for example, the awesome work that Matthias Clasen has done reengineering the GTK+ font explorer seems to require mining each font binary, so saving those results for reuse seems like a clear win. If Pango and the top-level application are also parsing the font binary just to show stuff, that’s overkill.

In my impromptu GUADEC talk about font selection, I noted that other GNOME “content objects” like audio, video, and image files all seem to get their metadata taken care of by Tracker, and posited that Tracker could take that task on for fonts as well. Most developers there didn’t seem to think this was a great fit. Some people have issues with Tracker anyway, but Tracker is also (by design) a “user-level” service, so putting font metadata into it would not solve the caching needs of the various libraries and system components.

Nevertheless, I did toy with the idea of writing up a modern Nepomuk ontology for font objects based on the grand-unified-GitLab-snipgist table (Nepomuk being the standard that Tracker uses). That may just be out of personal self-interest in teasing out structure from the table itself. It might be useful, for example, to capture more structure about which properties are related in which ways — such as where they are nested within other properties, are duplicates, or are mutually exclusive. And a lot more for which Markdown isn’t remotely useful.

August 10, 2018

Introducing Blender Benchmark

Today we present the Blender Benchmark, a new platform to collect and display the results of hardware and software performance tests. With this benchmark we aim to enable optimal comparison between system hardware and installations, and to assist developers in tracking performance during Blender development.

The benchmark consists of two parts: a downloadable package, which runs Blender and renders several production files, and the Open Data portal on blender.org, where the results will be (optionally) uploaded.

We’ve built the Blender Benchmark platform with maximum focus on transparency and privacy. We only use free and open source software (GNU GPL), the testing content is public domain (CC0), and the test results are being shared anonymized as public domain data – free for anyone to download and to process further.

We believe this is the best way to invite the Blender community to contribute the results of their performance tests, and create a world-class Open Dataset for the entire CG industry.

How does it work?

Users download the Benchmark Client and run one of the two benchmarks (‘quick’ or ‘complete’). The benchmark gathers information about the system, such as the operating system, RAM, graphics cards, and CPU model, as well as information about the performance of the system during the execution of the benchmark. After that, the user can share the result online on the Blender Open Data platform, or save the data locally.

In order to provide control over the data that is shared online, the benchmark result is first associated with the Blender ID of the user and uploaded to mydata.blender.org, where the user can redact and anonymize the parts containing personal information (Blender ID username and hostname). Currently this information is removed by default. No other personal information is collected.

Blender Open Data Architecture

Blender Open Data portal

In order to visualize, share and explore the data, we’ve built opendata.blender.org. The data hosted on the website is available as Public Domain, is updated in near-realtime after every benchmark, and is easily processable and well documented.

While hosting Blender Benchmark results will be the initial purpose of the Open Data portal, we plan to host other data sets in the future. For example, information about Blender downloads, telemetry information, etc. Each data set published on the platform will adhere to our Open Data principles, and its collection will be clearly communicated.

Blender Open Data principles

As initial guideline for our definition of Blender Open Data, we were inspired by the Eight Principles of Open Data. We believe that Blender Open Data should be:

  • Complete. All public data is made available. Public data is data that is not subject to valid privacy, security or privilege limitations.
  • Primary. Data is as collected at the source, with the highest possible level of granularity, not in aggregate or modified forms.
  • Timely. Data is made available as quickly as necessary to preserve the value of the data.
  • Accessible. Data is available to the widest range of users for the widest range of purposes.
  • Machine processable. Data is reasonably structured to allow automated processing.
  • Non-discriminatory. Data is available to anyone, with no requirement of registration.
  • Non-proprietary. Data is available in a format over which no entity has exclusive control.
  • License-free. Data is not subject to any copyright, patent, trademark or trade secret regulation. Reasonable privacy, security and privilege restrictions may be allowed.
  • Online and Free. Information is not meaningfully public if it is not available on the Internet at no charge, or at least no more than the marginal cost of reproduction. It should also be findable.
  • Permanent. Data should be made available at a stable Internet location indefinitely and in a stable data format for as long as possible.

We take a privacy-conscious approach when handling Benchmark data.

Timeline

  • August 10th: public beta; a test run to verify if everything works. Send feedback to devtalk.blender.org.
  • September: first official release.

Credits

The project has been developed by the team at Blender: Brecht van Lommel, Dan MacGrath, Francesco Siddi, Markus Ritberger, Pablo Vazquez, Sybren Stüvel and Sergey Sharybin. The project was commissioned by Ton Roosendaal, chairman of the Blender Foundation.

August 09, 2018

Blender at SIGGRAPH 2018

SIGGRAPH is the largest annual conference for computer graphics. Since 1999, Blender has had a presence there every year. This year’s edition is in Vancouver – 11 to 16 August.

Blender will take part in the following events:

Foundation and Community Meeting

Ton Roosendaal. Sunday 2 PM, Room Pan Pacific Hotel, Pacific Rim 2

Blender Spotlight

David Andrade & Ton Roosendaal. Sunday 3.30 PM, Room Pan Pacific Hotel, Pacific Rim 2

AMD Booth Theater

Mike Pan. Tuesday, 3-4 PM, Cycles production rendering. Wednesday 2-3 PM, EEVEE real-time 3D in Blender 2.8

August Hailstorm

We're still not getting the regular thunderstorms one would normally expect in the New Mexico monsoon season, but at least we're getting a little relief from the drought.

Last Saturday we had a fairly impressive afternoon squall. It only lasted about ten minutes but it dumped over an inch of rain and hail in that time. ("Over an inch" means our irritating new weather station stopped recording at exactly 1.0 even though we got some more rain after that, making us suspect that it has some kind of built-in "that can't be right!" filter. It reads in hundredths of an inch and it's hard to believe that we didn't even get another .01 after that.)

{Pile of hailstones on our deck} It was typical New Mexico hail -- lentil-sized, not like the baseballs we heard about in Colorado Springs a few days later that killed some zoo animals. I hear this area does occasionally get big hailstones, but it's fortunately rare.

There was enough hail on the ground to make for wintry snow scenes, and we found an enormous pile of hailstones on our back deck that persisted through the next day (that deck is always shady). Of course, the hail out in the yard disappeared in under half an hour once the New Mexico sun came out.

{Pile of hailstones on our deck} But before that, as soon as the squall ended, we went out to walk the property and take a look at the "snow" and in particular at "La Cienega" or "the swamp", our fanciful name for an area down at the bottom of the hill where water collects and there's a little willow grove. There was indeed water there -- covered with a layer of floating hail -- but on the way down we also had a new "creek" with several tributaries, areas where the torrent carved out little streambeds.

It's fun to have our own creek ... even if it's only for part of a day.

More photos: August hailstorm.

August 08, 2018

GNOME Software and automatic updates

For GNOME 3.30 we’ve enabled something that people have been asking for since at least the birth of the gnome-software project: automatically installing updates.

This of course comes with some caveats. Since it’s still not safe to auto-update packages (trust me, I triaged the hundreds of bugs), we will restrict automatic updates to Flatpaks. Although we do automatically download things like firmware updates, ostree content, and package updates, by default they’re still deployed manually as before. I guess it’s important to say that the auto-update of Flatpaks is optional and can easily be turned off in the GUI, and that you’ll be notified when applications have been auto-updated and need restarting.

Another common complaint with gnome-software was that it didn’t show the same list of updates as command line tools like dnf. The internal refactoring required for auto-deploying updates also allows us to show updates that are available, but not yet downloaded. We’ll still try to auto-download them ahead of time if possible, but won’t hide them until then. This does mean that “new” updates could take some time to download in the updates panel before either the firmware update is performed or the offline update is scheduled.

This also means we can add some additional UI controlling whether updates should be downloaded and deployed automatically. This doesn’t override the existing logic regarding metered connections or available battery power, but does give the user some more control without resorting to gsettings invocations on the command line.

Also notable for GNOME 3.30 is that we’ve switched to the new libflatpak transaction API, which both simplifies the flatpak plugin considerably and means we install the same runtimes and extensions as the flatpak CLI. This was another common source of frustration, as anyone trying to install from a flatpakref with RuntimeRepo set will testify.

With these changes we’ve also bumped the plugin interface version, so if you have out-of-tree plugins they’ll need recompiling before they work again. After a little more polish, the new GNOME Software 3.29.90 will soon be available in Fedora Rawhide, and will thus be available in Fedora 29. If 3.30 is as popular as I think it might be, we might even backport gnome-software 3.30.1 into Fedora 28 like we did for 3.28 and Fedora 27 all those moons ago.

Comments welcome.

August 06, 2018

Please welcome Lenovo to the LVFS

I’d like to formally welcome Lenovo to the LVFS. For the last few months, Peter Jones and I have been working with partners of Lenovo and the ThinkPad, ThinkStation and ThinkCentre groups inside Lenovo to get automatic firmware updates working across a huge number of different models of hardware.

Obviously, this is a big deal. Tens of thousands of people are likely to be offered a firmware update in the next few weeks, and hundreds of thousands over the next few months. Understandably we’re not just flipping a switch and opening the floodgates, so if you’ve not seen anything appear in fwupdmgr update or in GNOME Software don’t panic. Over the next couple of weeks we’ll be moving a lot of different models from the various testing and embargoed remotes to the stable remote, and so the list of supported hardware will grow. That said, we’ll only be supporting UEFI hardware produced fairly recently, so there’s no point looking for updates on your beloved T61. I also can’t comment on what other Lenovo branded hardware is going to be supported in the future as I myself don’t know.

Bringing Lenovo to the LVFS has been a lot of work. It needed changes to the low level fwupdate library, fwupd, and even the LVFS admin portal itself for various vendor-defined reasons. We’ve been working in semi-secret for a long time, and I’m sure it’s been frustrating to all involved not being able to speak openly about the grand plan. I do think Lenovo should be applauded for the work done so far due to the enormity of the task, rather than chastised about coming to the party a little late. If anyone from HP is reading this, you’re now officially late.

We’re still debugging a few remaining issues, and also working on making the update metadata better quality, so please don’t judge Lenovo (or me!) too harshly if there are initial niggles with the update process. Updating the firmware is slightly odd in that it sometimes needs to reboot a few times with some scary-sounding beeps, and on some hardware the first UEFI update you do might look less than beautiful. If you want to do the firmware update on Lenovo hardware, you’ll have a lot more success with newer versions of fwupd and fwupdate, although we should do a fairly good job of not offering the update if it’s not going to work. All our testing has been done with a fully updated Fedora 28 workstation. It of course works with SecureBoot turned on, but if you’ve enabled the BootOrder lock manually you’ll need to turn that off first.

I’d like to personally thank all the Lenovo engineers and managers I’ve worked with over the last few months. All my time has been sponsored by Red Hat, and they rightfully deserve love too.

Interview with Serhii

Could you tell us something about yourself?

Hello everybody! My name is Serhii and I’m an artist from Ukraine. I like to draw illustrations and make videos about it. Lately my favourite theme has been fantasy, but I like other interesting themes. I also work with vector graphics, but that is another story. 🙂

Do you paint professionally, as a hobby artist, or both?

Both, but it’s a little more work than hobby. In other words, my hobby has turned into my work, and I think that isn’t so bad.

What genre(s) do you work in?

As I said earlier I prefer fantasy, it’s very exciting and you can create unreal worlds and characters. With your imagination, you can do whatever you want. You can turn any plot, even the simplest, into an interesting fantasy illustration. I think it’s cool!

Whose work inspires you most — who are your role models as an artist?

Hard to tell. I like a lot of artists, and all of them have different things to teach. Someone has wonderful graphics, someone has a great composition, and somebody else is an incredible colorist. Each one has their own amazing style and vision of art.

How and when did you get to try digital painting for the first time?

It was around 2000, when my friend bought a Wacom tablet. I remember that there was a small drawing program included with it. That day, I discovered digital painting for myself.

What makes you choose digital over traditional painting?

I like digital painting because it has a lot of possibilities; for example, you can use different tools and effects in one illustration. And the main thing: you have the magic combination of keys, Ctrl+Z 😀 It’s also important that you can quickly show your work to viewers from all over the world.

How did you find out about Krita?

One day I decided to change my operating system and I chose Linux. And then I needed a program for drawing, so I tried some programs and Krita was the best for me.

What was your first impression?

Before my experience with Krita I used some painting applications, so it was easy for me, because all tools and options were understandable, comfortable and useful for my art.

What do you love about Krita?

Firstly, I like that there are a lot of different brushes, with the opportunity to fine-tune them as you want. In the latest version of Krita all the brushes from the standard set became the best of the best. These brushes are all that I need for my artworks. Also there are many cool themes and interface settings.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I am satisfied with everything in Krita. I would like the big brushes to work a little faster, if that’s possible. Generally, the developers of Krita make this app better with each version, and I want them to continue their work in the same vein and not stop.

What sets Krita apart from the other tools that you use?

For me this is the only application for drawing and digital painting that I can use in Linux. Of course, there are GIMP and MyPaint, but they’re not really right for me, so Krita is the best app for digital painting in Linux.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?


Now I think that my favourite illustration is “Unstoppable”, because in this picture there is a funny plot, as it seems to me. Like many of my works it is in the fantasy genre, and I like that best.

What techniques and brushes did you use in it?

I used a technique that is similar to traditional painting – one canvas without any layers. As for brushes, I created my artwork with the help of “Pencil-2” for the sketch, “Texture Big” for the basic colors, and “Basic-2 Opacity” and “Bristles-2 Flat Rough” for the details.

Where can people see more of your work?

Youtube: https://www.youtube.com/user/grafikwork
Instagram: https://www.instagram.com/serg_grafikwork/
Artstation: https://www.artstation.com/serggrafik
Twitter: https://twitter.com/Serg_Grafik
Facebook: https://www.facebook.com/serg.grafik
DeviantArt: https://www.deviantart.com/grafikwork

Anything else you’d like to share?

I wish the team of developers to continue their fantastic work on this application, and I would like to see many more artists using Krita for their creativity and artworks.

August 01, 2018

FreeCAD BIM development news - July 2018

Hi folks, This is the monthly development post about the development of BIM functionalities of FreeCAD. This month we won't have a video here, because there is quite a lot of stuff to show already, but next month I'll resume making videos, and I'll try to begin to show real projects in them too (not sure...

July 30, 2018

Krita in the Windows Store: an update

We’ve been publishing Krita in the Windows Store for quite some time now. Not quite a year, but we’ve updated our Store listing almost twenty times. By far the majority of users get Krita from this website: about 30,000 downloads a week. Store downloads are only about 125 a week. Still, the income generated makes it possible for the Krita maintainer to work on Krita full-time, which would not have been possible otherwise.

That’s good, because combining a day job and working on Krita is a sure recipe for a burn-out. (Donations fund Dmitry’s work, but there aren’t enough donations to fund two people at the same time: we have about 2000 euros per month in donations.)

What’s not so good is that having an app in a Store basically means you’re a sharecropper. You do the work, and the Store allows you whatever it wants to allow you. You’re absolutely powerless. All the rules are made by the Store. And if there’s one particular rule that gets interpreted by the Store curators in a way that’s incompatible with the English language, well, we’ll have to submit to it and be obedient.

Originally, we would mention in the Store listing that people could also get Krita for free from krita.org, and where you’d be able to get Krita’s source code:


This, Microsoft contends, falls foul of its policy 10.8.5:

However, this is part of Policy 10.8:

Sounds clear, right? If your app includes those things, 10.8.5 applies. So, if your app doesn’t do that, it shouldn’t apply. At least, that was our naive interpretation. However, Microsoft disagrees. In a mail to the Krita maintainer they say:

And, of course, apart from the fact that 10.8 doesn’t apply (so the “if” clause isn’t triggered), it’s not the Krita application that “promotes or distributes software” outside the Microsoft Store but the Store listing, and 10.8.5 doesn’t say anything about that.

Now, Microsoft was certainly aware that Krita is open source software published under the GNU General Public License, and that Krita would be distributed outside the Windows Store. They actually helped us get Krita into the Windows Store to begin with, because, well, the Windows Store is still rather bare and doesn’t have that much good quality content.

In any case, since we’re absolutely powerless, we’ve had to change the Store listing…

Building Firefox: Changing the App Name

In my several recent articles about building Firefox from source, I omitted one minor change I made, which will probably sound a bit silly. A self-built Firefox thinks its name is "Nightly", so, for example, the Help menu includes About Nightly.

Somehow I found that unreasonably irritating. It's not a nightly build; in fact, I hope to build it as seldom as possible, ideally only after a git pull when new versions are released. Yet Firefox shows its name in quite a few places, so you're constantly faced with that "Nightly". After all the work to build Firefox, why put up with that?

To find where it was coming from, I used my recursive grep alias which skips the obj- directory plus things like object files and metadata. This is how I define it in my .zshrc (obviously, not all of these clauses are necessary for this Firefox search), and then how I called it to try to find instances of "Nightly" in the source:

gr() {
  # Search regular files, skipping binaries, images and translations,
  # and pruning build/VCS directories:
  find . \( -type f \
          -and -not -name '*.o' -and -not -name '*.so' -and -not -name '*.a' \
          -and -not -name '*.pyc' -and -not -name '*.jpg' -and -not -name '*.JPG' \
          -and -not -name '*.png' -and -not -name '*.xcf*' -and -not -name '*.gmo' \
          -and -not -name '.intltool*' -and -not -name '*.po' -and -not -name 'po' \
          -and -not -name '*.tar*' -and -not -name '*.zip' \
          -or -name '.metadata' -or -name 'build' -or -name 'obj-*' \
          -or -name '.git' -or -name '.svn' -prune \) \
       -print0 | xargs -0 grep $* /dev/null
}

gr Nightly | grep -v '//' | grep -v '#' | grep -v isNightly | grep -v test | grep -v task | fgrep -v .js | fgrep -v .cpp | grep -v mobile >grep.out

Even with all those exclusions, that still ends up printing an enormous list. But it turns out all the important hits are in the browser directory, so you can get away with running it from there rather than from the top level.

I found a bunch of likely files that all had very similar "Nightly" lines in them:

  • browser/branding/nightly/branding.nsi
  • browser/branding/nightly/configure.sh
  • browser/branding/nightly/locales/en-US/brand.dtd
  • browser/branding/nightly/locales/en-US/brand.ftl
  • browser/branding/nightly/locales/en-US/brand.properties
  • browser/branding/unofficial/configure.sh
  • browser/branding/unofficial/locales/en-US/brand.dtd
  • browser/branding/unofficial/locales/en-US/brand.properties
  • browser/branding/unofficial/locales/en-US/brand.ftl

Since I didn't know which one was relevant, I changed each of them to slightly different names, then rebuilt and checked to see which names I actually saw while running the browser.

It turned out that browser/branding/unofficial/locales/en-US/brand.dtd is the file that controls the application name in the Help menu and in Help->About -- though the title of the About window is still "Nightly" and I haven't found what controls that.

branding/unofficial/locales/en-US/brand.ftl controls the "Nightly" references in the Edit->Preferences window.

I don't know what all the others do. There may be other instances of "Nightly" that appear elsewhere in the app, controlled by the other files, but I haven't seen them yet.

Past Firefox building articles: Building Firefox Quantum; Building Firefox for ALSA (non PulseAudio) Sound; Firefox Quantum: Fixing Ctrl W (or other key bindings).

July 29, 2018

How many open fonts licenses are there?

Recently I’ve been stump-speeching at the various free-software conferences I haunt on the topic of some plumbing-layer issues that affect using fonts on Linux systems.

One interesting rabbit hole in this field of stumps is the subject of font licenses. Today, we live in a harmonious world where open and libre fonts are uniformly distributed under the SIL Open Font License, and all is simple in our garden. Or so people think.

Technically speaking, of course, there have been two versions of the SIL OFL, so it’s actually possible that some of the fonts you refer to as “OFL licensed” are yours under the terms of OFL 1.0, and others under the current version, 1.1. It’s also possible that some of those fonts were published with the Reserved Font Name clause activated and some were not. So there are four possibilities, if you count how that clause alters compliance tracking.

Nevertheless, that’s still a complexity-free environment in which to roam: four license variants, demarcating a known-and-knowable world like the pillars of Hercules & whatever was directly opposite the pillars of Hercules.

Five, if you count the fact that you may have fonts on your system that are published under the other major alternative, the GNU GPL with Font-Embedding Exception. Six if you include the MIT license, let’s say. Seven if you include Apache, eight or nine if you include BSD variants with a different number of clauses (ignoring for now how many, because I lose track; my point, after all, is that there are very few).

Still, that’s not that many. Ten if you include the IPA Font License. Eleven if you have X11 fonts. Twelve if you have an Artistic License font. Which exist.

That’s what we tell ourselves, anyhow. But although I didn’t mention it up front, we’re really just limiting the above list to the licenses currently included in the SPDX License List. Although SPDX is widely used as a machine-readable way to track licenses, it’s not the only place to look to see what licenses people are using for their open fonts.

If you run Debian or a Debian-based distribution, for example, then you don’t live in the small, confined world of ten-ish libre font licenses. You live, instead, in a funtastic world of broad historical context and twenty-plus-ish years of free software and, for much of that time period, there were no readymade, boiler-plate–like font licenses to select. Consequently, many people imbued with sudden inspiration rose up and boldly wrote their own licenses.

Since font packages tend to be things that reach a “completion point”, after which they get few-to-no further releases, many of these license-pioneering font packages persist to this day. As a result, their singleton licenses survive into the 21st Century as well.

A few such coelacanths belong to fonts well-known among Debian users, such as

  • The Bitstream Vera license (which also applies to DejaVu)
  • The Bitstream Charter license (which is not the same license as the Vera one, although it is comparable)
  • The Liberation Fonts License
  • The ParaType Free Font License
  • The Ubuntu Font License
  • The STIX Fonts License
  • The TUG Utopia License
  • The Larabie Fonts EULA

Others might not be as well-known, but are in fighting shape nonetheless:

  • The M+ Fonts Project License
  • The Arphic Public License
  • The GUST Font License
  • The Magenta Open License
  • The Mikachan Fonts License

If you have TeX installed, you have even more options to explore, including:

  • The Day-Roman Font License
  • The Luxi Fonts License
  • The Luxi Mono Font License
  • The Adobe Euro Font License
  • The Librerias Gandhi Font License
  • The Literat Font License
  • The IBM Courier License

That might seem like enough. But license authoring is a creative endeavor, and when the muse strikes, it can be hard to ignore. Yet not everyone so inspired is truly gifted with originality, so there are several licenses to be found that may or may not actually be doppelgangers of other licenses (apart from the name itself). If you’re a lawyer, you can probably read and decide for yourself. If you’re unwilling to set aside that much of your day and that much of your soul, you might just assume that these other licenses are all distinct:

  • The Baekmuk License
  • The Open Government Data License
  • The Hanazono Font License
  • The UmeFont License
  • The BaKoMa Fonts License
  • The Misaki Fonts License
  • The Oradano Mincho Fonts License
  • The SazanamiFont License
  • The Hershey Fonts License

But that’s about it. Although, to be honest, that’s just as far as I got digging into packages in Debian, and there are a few I never could sort out, such as the WINE fonts license and the Symbola fonts license. Maybe I should start looking again. Oh, and I guess there are LaTeX-licensed fonts.

July 24, 2018

Rain Song

We've been in the depths of a desperate drought. Last year's monsoon season never happened, and then the winter snow season didn't happen either.

Dave and I aren't believers in tropical garden foliage that requires a lot of water; but even piñons and junipers and other native plants need some water. You know it's bad when you find yourself carrying a watering can down to the cholla and prickly pear to keep them alive.

This year, the Forest Service closed all the trails for about a month -- too much risk of one careless cigarette-smoking hiker, or at least I think that was the reason (they never really explained it) -- and most of the other trail agencies followed suit. But then in early July, the forecasts started predicting the monsoon at last. We got some cloudy afternoons, some humid days (what qualifies as humid in New Mexico, anyway -- sometimes all the way up to 40%), and the various agencies opened their trails again. Which came as a surprise, because those clouds and muggy days didn't actually include any significant rain. Apparently mere air humidity is enough to mitigate a lot of the fire risk?

Tonight the skies finally let loose. When the thunder and lightning started in earnest, a little after dinner, Dave and I went out to the patio to soak in the suddenly cool and electric air and some spectacular lightning bolts while watching the hummingbirds squabble over territory. We could see rain to the southwest, toward Albuquerque, and more rain to the east, toward the Sangres, but nothing where we were.

Then a sound began -- a distant humming/roaring, like the tires of a big truck passing on the road. "Are we hearing rain approaching?" we both asked at the same time. Since moving to New Mexico we're familiar with being able to see rain a long way away; and of course everyone has heard rain as it falls around them, either as a light pitter-patter or the louder sound from a real storm; but we'd never been able to hear the movement of a rainstorm as it gradually moved toward us.

Sure enough, the sound got louder and louder, almost unbearably loud -- and then suddenly we were inundated with giant-sized drops, blowing way in past the patio roof to where we were sitting.

I've heard of rain dances, and songs sung to bring the rain, but I didn't know it could sing back.

We ran for the door, not soon enough. But that was okay; we didn't mind getting drenched. After a drought this long, water from the sky is cause only for celebration.

The squall dumped over a third of an inch in only a few minutes. (This according to our shiny new weather station with a sensitive tipping-bucket rain gauge that measures in hundredths of an inch.) Then it eased up to a light drizzle for a while, the lightning moved farther away, and we decided it was safe to run down the trail to "La Cienega" (Spanish for swamp) at the bottom of the property and see if any water had accumulated. Sure enough! Lake La Senda (our humorous moniker for a couple of little puddles that sometimes persist as long as a couple of days) was several inches deep. Across the road, we could hear a canyon tree frog starting to sing his ratchety song -- almost more welcome than the sound of the rain itself.

As I type this, we're reading a touch over half an inch and we're down to a light drizzle. The thunder has receded but there's still plenty of lightning.

More rain! Keep it coming!

July 23, 2018

From Russia with Love

An Interview with Photographer Ilya Varivchenko

Ilya Varivchenko is a fashion and portrait photographer from Ivanovo, Russian Federation. He’s a UNIX administrator with a long-time passion for photography that has now become a second part-time job for him. Working on location and in his studio, he’s been producing a wonderful body of work specializing in portraiture, model tests, and more.

He’s a member of the community here (@viv), and he was kind enough to spare some time and answer a few questions (plus it gives me a good excuse to showcase some of his great work!).

by Ilya Varivchenko

Much of your work feels very classical in posing and light, particularly your studio portraits. What would you say are your biggest influences?

I am influenced by several classical painters and great modern photographers. Some of them are: Patrick Demarchelier, Steven Meisel and Peter Lindbergh. The general mood is defined by what I see around me. Russia is a very neglected but beautiful country, and the women around me are an inexhaustible source of inspiration.

by Ilya Varivchenko

How would you describe your own style overall?

My style is certainly classic portraiture in a modern rendition.

What motivates you when deciding who/how you shoot?

I usually plan shooting in advance. The range of models is rather narrow and it’s not so easy to get there. However, I am constantly looking for new faces. I choose the style and direction of a particular shooting based on my vision of the model and the current mood.

Why portraits? What about portraiture draws you to it?

I shoot portraits because people interest me. For me, photography is an instrument of knowing people and a means of communication.

by Ilya Varivchenko

If you had to pick your own favorite 3 photographs of your work, which ones would you choose and why?

It’s difficult to choose only three photographs, but maybe these:

by Ilya Varivchenko This photo was chosen by Olympus as a logo for their series of photo events in Russia 2017.
by Ilya Varivchenko This is one of my most reproducible photos. ;)
by Ilya Varivchenko This photo has a perfect mood in my opinion.

If you had to pick 3 favorite images from someone else, which ones would you choose and why?

It is very difficult to choose only three photos. The choice in any case will be incomplete, but here are the first ones that come to mind:

  1. The portrait of Heather Stewart-Whyte by Friedemann Hauss
  2. The portrait of Monica Bellucci by Chico Bialas
  3. The portrait of Nicole Kidman by Patrick Demarchelier

How do you find your models usually?

Via social media, which is the best means of searching for models, but if I meet a girl I really like in the street, I can try and talk to her straight away. In fact, the problem is not finding a model, but choosing how to reject a request without offending a prospective model who is of no interest to me.

Do you pre-visualize and plan your shoots ahead of time usually, or is there a more organic interaction with the model and the space you’re shooting in?

It’s always good to have a plan. It is also very good to have a spare plan.

Usually I discuss some common points with the model and stylist before shooting. But these plans are more connected with the mood and the general idea of the session. So when the magic of shooting begins, usually all the plans fly to hell. ;)

by Ilya Varivchenko

Do you have a shooting assistant with you, or is normally just you and the model?

The preparatory stage of shooting often requires participation of many people: a makeup artist, a hair stylist, etc., but shooting itself goes better when only two persons are involved. This is a fairly intimate process. Just like sex. :)

On the other hand, if we do a fashion shoot on order, then the presence of the customer representatives is a must.

Many shots have a strong editorial fashion feel to them: are those works for magazine/editorial use - or were they personal works you were planning to be that way?

I take pictures for local magazines and advertising agencies sometimes. Maybe it somehow influenced my other work.

by Ilya Varivchenko by Ilya Varivchenko by Ilya Varivchenko

What do you do with the photos you shoot?

Most of my works are for personal use. However, I often print them in a large format and I’ve also had two solo exhibitions. Prints of my works are sold and can always be ordered. I also publish in photo magazines sometimes, but these magazines are Russian ones so they are hardly known to you.

By the way: I periodically take part in the events held by Olympus Russia, where I demonstrate my workflow.

This video shows that I use RawTherapee as a raw converter :)

You’re shooting on Olympus gear quite a bit, are you officially affiliated with Olympus in some way?

On occasion I hold workshops as part of the Olympus company's marketing activities. Sometimes the Olympus company provides me with their products for testing, and I am expected to follow up with a review.

Is your choice to use Free Software for pragmatic reasons, or more idealistic?

The choice was dictated by purely practical considerations. I found a tool whose results I am almost completely satisfied with. The detail, for example, is outstanding; working with color grading is comfortable; the black and white conversion is excellent; and there is much more.

The fact that the product is free and (which is more important to me) I have an opportunity to communicate with its developers is a huge plus!

For example, with the release of the Fuji X-T20, when a new DCP profile needed to be added to the converter, I simply contacted the developers, shot the test target and got what I wanted.

by Ilya Varivchenko

Would you describe your workflow a bit? Which projects do you use regularly?

My workflow is quite simple:

  1. Shooting. I try to shoot in a way which will not require heavy postprocessing at all. It is much easier to set up light properly than to fix it in Photoshop later.

  2. Raw development with RawTherapee. My goal is to develop the image in a way which makes it as close to final as possible. Sometimes this is the end of my workflow. ;)

  3. Color correction (if necessary) with 3DLutCreator. In rare cases, it is more convenient to make complex color correction with the help of LUTs.

  4. Retouching with Adobe Photoshop. Nothing special. Removal of skin and hair defects, etc. Dodge and burn technique with a Wacom Intuos Pro.

by Ilya Varivchenko

Speaking of gear, what are you shooting with currently?

I have two systems now: Micro Four Thirds system from Olympus and X Series from Fujifilm. Typical setups are:

Studio: Olympus PEN-F + Panasonic G 42.5/1.7
Plein air: Olympus PEN-F + M.Zuiko 75/1.8 or FujiFilm X-T20 + Fujinon 35/1.4

Many of your images appear to make great use of natural light. For your studio lighting setup, what type of lighting gear are you using?

My studio equipment is a mix of Aurora Codis and Bowens studio lights plus a lot of modifiers, from a large 2-meter parabolic octobox to narrow 40x150 strip boxes and so on.

by Ilya Varivchenko by Ilya Varivchenko

Is there something outside your comfort zone you wish you could try/shoot more of?

It is definitely landscape photography. And macro photography also attracts me - ants and snails are all great models in fact. :)

What is one piece of advice you would offer to another photographer?

Find in yourself what you want to share with others. Beauty is in the eye of the beholder. No beautiful models will help if you are empty inside.

by Ilya Varivchenko

I want to thank Ilya for taking the time to chat with me! Take some time to have a look through his blog and work (it’s chock full of wonderful work)!

All images copyright Ilya Varivchenko and used with permission.

Interview with Takiro Ryo

Could you tell us something about yourself?

Well, I’m not really sure what to tell about myself. My name is Takiro, I’m from Germany and I’ve liked drawing since I was a little kid, although it wasn’t very easy because I never had decent paper and pens. Art wasn’t seen as an actual thing in my family, I guess. I started to take art more seriously as a teenager, when I was 14 or 15, with traditional art. Everyone was into manga and anime back then and so was I. I started out with poorly drawn fanarts of my favorite anthropomorphic characters before doing my own thing, and my style was typical manga style until just a few years ago, when I transitioned to a more painterly style, trying to emulate the look of traditional paintings. I still do fanarts sometimes, but nowadays I mostly paint anthropomorphic and feral animals, often in magical fantasy settings inspired by video games and books.

Do you paint professionally, as a hobby artist, or both?

At the moment I’m more of a hobby artist, I sometimes do commissions for private clients but that rarely happens. I’d love to be a professional artist some day but I’m still not sure if I have what it takes because there is more to being a professional than just being good at art.

Timelapse of this painting

What genre(s) do you work in?

I love drawing pretty much anything animal and fantasy related. When I was a child I read a lot of books about Reynard the Fox and the other animals from the German fables. This influence has lasted, and today I almost exclusively draw paintings of two-legged and four-legged anthropomorphic animals, with foxes being my most recurring characters.

Whose work inspires you most — who are your role models as an artist?

To be honest, I never really thought about that. Definitely not one of the old masters. I had a pretty bad art education (maybe this is a German thing) and probably couldn’t tell a Picasso from a Da Vinci if my life depended on it. I think, when it comes to style, the artist who inspired me most in the early days was Michaela Frech with her realistic paintings.

Later, when I discovered that there is a whole community around anthropomorphic animals, I also found Tess Garman, who is part of Blotch, and Yee Chong, to name just a few. All of them had a big influence on my style as it is now.

I had the amazing opportunity to meet Michaela Frech at an art show we both attended a year ago, and later became friends with her. She helped me a lot with her professional advice and critique and enabled me to reach the skill level I now have.

As for my ideas, I get them mostly from fantasy movies and series, the novels and fables I read in my childhood, and from video games. I often like to replace the characters with animals and think about how their animal behavior and characteristics could influence the plot.

Timelapse of this painting

How and when did you get to try digital painting for the first time?

Gosh, I think it was when I was in vocational school, 16 or so years ago. We had a class in PaintShop Pro, where we learned basic photo editing. Drawing and painting weren't really part of the class, but after a few lessons I discovered that it is fun, bought my very first graphics tablet and made some baby steps in digital drawing and painting.

What makes you choose digital over traditional painting?

There are a few things. But I think what I like most about working digitally is that it’s very forgiving. You can just throw some paint onto the digital canvas, see what sticks, then work with it and improve on it. For a long time my workflow was more like that of a traditional painting; now I work more like a sculptor who adds or takes away some clay and approximates the final result.

Another thing that made me choose digital over traditional is that you don’t need much space. As a teenager I had only a small room and no real desk for a long time, and I also didn’t have much space to store all my painting stuff.

And colors. I love the bright colors you usually only can have on a screen, which unfortunately is also a drawback when you print a painting.

How did you find out about Krita?

I just checked, and it looks like I started using Krita in 2013. I’m not exactly sure how it happened, but I remember that I was frustrated with all the other tools I had tried by then: PaintShop, Corel, Photoshop, SAI, you name it. I had already almost completely transitioned to Linux, except for the painting part, and then I remembered that my computer science teacher once mentioned a painting tool for Linux that was not GIMP. Unfortunately I couldn’t remember the name, so I typed “good painting tool for Linux -gimp” into a search engine and BAM! Krita.

What was your first impression?

I vaguely remember that it was like: “OMG, I can just draw. And brushes, it has nice brushes by default. And the example artworks are not shit.” Seriously, I looked into some other open source tools, and the artworks they used to advertise the software often looked like they were made by someone who never held a pen, and I thought: “That doesn’t look like you can do amazing stuff in that program.” But Krita made an amazing first impression.

What do you love about Krita?

I loved and still love that you can just pick it up and start painting. Its default brushes already did a great job when I started to use it. I later found David Revoy’s brush pack, which was awesome, and I love what he did with the 4.0 brushes (although I had to bring two old friends back from 3.0). I think one of the greatest strengths of Krita today is the good choice of brushes that come with it. I still remember that it took me hours to configure some brushes in all the other tools I tried, before I could even start properly.

What do you think needs improvement in Krita? Is there anything that really annoys you?

If you had asked me back when I started using it, with version 2.7, I would have given you a page-long list. There is an occasional crash now and then, but most of the major bugs are fixed now, and often, when I find a bug and want to report it, it’s already marked as resolved and I just have to wait for the next release.

The palette docker still has some issues and is a bit dodgy. One of my favorite features is the assistants, especially the vanishing points. I would like to group them and deactivate/move/hide whole groups; it can be very tricky if you have a lot of vanishing points in a picture.

What sets Krita apart from the other tools that you use?

For me it is the feeling that Krita is made with the artist in mind. Often when I use a tool I think: “Yeah, that’s a cool tool with nice features, but how am I supposed to use it?” This is a thing that I often find in a lot of open source software. Programmers put a lot of amazing know-how into software that can do awesome stuff, but then the UI is terrible and the whole feel is off, and an otherwise cool piece of software is unnecessarily hard to use. I still find a few unpolished corners like that in Krita sometimes, but other than that I felt comfortable from the start. It gives you just what you need.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Hm, I have a few, but I think my latest is “About to Fail”, an artwork I finished recently. It shows my character in a typical situation where he is careless and about to mess up an experiment. I like it because not only is it a typical artwork that includes a lot of things I like (fantasy, magic, animals), but it was also a milestone in my personal development as an artist.

What techniques and brushes did you use in it?

First I did a rough sketch with the “Charcoal Rock Soft” brush, slowly working out the details until I had a grayscale image; then I kept refining details with the “Bristles Wet”, a brush that originally comes from the Krita 3.0 default brush pack. After that I used “Wet Bristles Rough” to add the fluffiness of the fur. The smear brushes are my favorites and I use them mostly like you would in traditional painting; I love how they mix colors directly on the canvas. After the whole grayscale image was done, I added textures, like the wood of the desk or the cracks in the wall, using the default texture brushes. I try to use the default brushes as much as I can, to concentrate on drawing and not waste too much time creating a brush that I may never use again; sometimes I tweak them a little, of course. Lastly I add color with some layers set to the Color blending mode, do some more rendering because some colored areas have to overlap, and then it’s usually done. I have some recordings that illustrate pretty well how I work, unfortunately not for this particular artwork.

Where can people see more of your work?

I have a website but it’s pretty much a work in progress and just a pet project I rarely find time for, but I plan to make it fully functional in the future. I also have other galleries, for example on FurAffinity, which unfortunately does not allow high resolution uploads – and on InkBunny. I only post my finished works to my galleries. For sketches, WIPs and other stuff I use my twitter @Takiro_Ryo. On my YouTube channel I upload timelapsed recordings of my paintings, and some day maybe tutorials but I haven’t decided yet.

Anything else you’d like to share?

Krita is already an amazing program and I sense a bright future for it in my twitchy tail. If you’re just starting out with digital painting, you should definitely start with Krita. Save the 300 bucks Photoshop would cost you and invest it in a decent tablet instead. Once you’ve spent some time in Krita, you probably won’t want to switch to anything else. Whereas Photoshop crushes you under a pile of functionality you never need as a painter, Krita gives you everything you need, and just that, without wanting anything in return.

I also want to thank the Krita team for being amazing and for this interview opportunity, which is my first interview as an artist ever. Keep up the good work. I definitely will keep spreading the word about Krita.

Lastly I want to thank Michaela Frech for her help, and also my boyfriend, who has had to bear a lot of my art-related frustrations over the years.

Best practices for diffing two online MySQL databases

We had to move our internal Red Hat Beaker instance to a new MySQL database version. We made the jump with a 5-minute downtime of Beaker. One of the things we wanted to make sure of was not to lose any data.

Setup and Motivation

A database dump is about 135 GB compressed with gzip. The main database was being served by a MySQL 5.1 master/slave setup.

We discussed two possible strategies for switching to MariaDB: either a dump and load, which meant a downtime of 16 hours, or an additional MariaDB slave which would be promoted to the new master. We chose the latter: a new MariaDB 10.2 slave promoted to be the new master.

We wanted to make sure that both slaves, the MySQL 5.1 one and the new MariaDB 10.2 one, were in sync, and that by promoting the MariaDB 10.2 slave to master we would not lose any data. To verify data consistency across the slaves, we diffed both databases.

Diffing

I went through a few iterations of dumping and diffing. Here are the approaches which worked best.

Ignore mysql-utils if you only have read access

MySQL comes with a bunch of utilities, among them two tools to compare databases: mysqldbcompare and mysqldiff. I tried mysqldiff first but, after studying the source code, decided against using it. The reason is that you have to grant it additional write privileges on the databases; these are arguably small, but still more than I was comfortable with.

Use the “at” utility to schedule mysqldump

The best way I found to kick off the database dumps at the same time is to use at. Scheduling the mysqldump runs manually on the two databases introduces way too many noisy differences. It goes without saying that the database hosts' clocks need to be synchronized (e.g. by the use of chronyd).
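
For example, something along these lines, run on each host for the same minute, queues the dumps to start together (the time and output path here are illustrative, and the mysqldump options per host are discussed below):

at 02:00 <<'EOF'
mysqldump --single-transaction --order-by-primary --skip-extended-insert beaker | gzip > /backup/dump.sql.gz
EOF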

Dump the entire database at once

The mysqldump tool can dump each table separately, but that is not what you want. The default options, which are geared towards a dump and load, are not what you want either.

Instead I dumped MySQL with:

mysqldump --single-transaction --order-by-primary --skip-extended-insert beaker | gzip > mysql.sql.gz;

while for MariaDB I used:

mysqldump --order-by-primary --skip-extended-insert beaker | gzip > mariadb.sql.gz;

The options used aid the later diff:

  • --order-by-primary orders every dumped table's rows consistently by primary key
  • --single-transaction keeps a transaction open until the dump has finished, so you get a comparable database snapshot across the two databases for the same starting point
  • --skip-extended-insert produces an INSERT statement for each row; otherwise rows are collapsed into multi-row insert statements which are harder to compare

Compression (GZip) and shell pipes are your friend

With big databases, like the Beaker production database, you want to avoid writing anything uncompressed. Linux ships additional gzip wrappers for cat (zcat), less (zless) and so on, which will help with creating shell pipes in order to process the data.

Cut up the dump

Once you have both database dumps, cut them up into their separate tables. The purpose of this is not to sift through the dumps with your own eyes, but rather to cater for diff. The diff tool loads the entire file into memory, and with large database dumps you will quickly see it run out of memory:

diff mysql-beaker.sql.gz mariadb-replica-beaker.sql.gz
diff: memory exhausted

While I did find a tool that can diff two large files, having unified diff output makes the data easier to compare.
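
Here is a rough sketch of the cutting step, relying on the "-- Table structure for table" comment markers that mysqldump emits before each table; directory names are illustrative, and the same needs to be done for the MariaDB dump:

mkdir -p mysql
zcat mysql.sql.gz | awk '
  /^-- Table structure for table/ {
    if (file) close(file)
    gsub(/`/, "", $NF)            # strip backticks from the table name
    file = "mysql/" $NF ".sql"
  }
  file { print > file }           # everything before the first table is skipped
'
gzip mysql/*.sql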

Example: Using gzip and a pipe from my point above:

diff -u <(zcat mysql/table1.sql.gz) <(zcat mariadb/table1.sql.gz) > diffed/table1.diff

Now you can use your shell foo to loop over all the cut-up tables and write the diff for each into a separate folder, which then lets you compare them easily.
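
A sketch of that loop (bash, since it uses process substitution like the example above; directory names as before):

mkdir -p diffed
for f in mysql/*.sql.gz; do
    table=$(basename "$f" .sql.gz)
    diff -u <(zcat "mysql/$table.sql.gz") <(zcat "mariadb/$table.sql.gz") > "diffed/$table.diff"
done
# Any non-empty file in diffed/ points at a table that differs.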

July 20, 2018

Pulseaudio: the more things change, the more they stay the same

Such a classic Linux story.

For a video I'll be showing during tonight's planetarium presentation (Sextants, Stars, and Satellites: Celestial Navigation Through the Ages, for anyone in the Los Alamos area), I wanted to get HDMI audio working from my laptop, running Debian Stretch. I'd done that once before on this laptop (HDMI Presentation Setup Part I and Part II) so I had some instructions to follow; but while aplay -l showed the HDMI audio device, aplay -D plughw:0,3 didn't play anything and alsamixer and alsamixergui only showed two devices, not the long list of devices I was used to seeing.

Web searches related to Linux HDMI audio all pointed to pulseaudio, which I don't use, and I was having trouble finding anything for plain ALSA without pulse. In the old days, removing pulseaudio used to be the cure for practically every Linux audio problem. But I thought to myself, It's been a couple years since I actually tried pulse, and people have told me it's better now. And it would be a relief to have pulseaudio working so things like Firefox would Just Work. Maybe I should try installing it and see what happens.

So I ran an aptitude search pulseaudio to find the package name I'd need to install. Imagine my surprise when it turned out that it was already installed!

So I did some more web searching to find out how to talk to pulse and figure out how to enable HDMI, or un-mute it, or whatever it was I needed. But to no avail: everything I found was stuff like "In the Ubuntu audio panel, do this". The few pages I found that listed commands to run didn't help -- the commands all gave errors.

Running short on time, I reverted to the old days: aptitude purge pulseaudio. Rebooted to make sure the audio system was reset, ran alsamixergui and sure enough, there were all my normal devices, including the IEC958 device for HDMI, which was indeed muted. I unmuted it, tried the video again -- and music blasted from my TV's speakers.
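
For reference, the command-line equivalent of that unmute step is something like the following; the card number and control name vary per machine, so treat them as placeholders and check the scontrols output first:

# List the simple mixer controls on card 0.
amixer -c 0 scontrols
# Unmute the digital output control, using the name from the list above.
amixer -c 0 sset 'IEC958' unmute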

I'm sure there are machines where pulseaudio works. There are even a few people who have audio setups complicated enough to need something like pulseaudio. But in 2018, just as in 2006, aptitude purge pulseaudio is the easiest solution to a Linux sound problem.

July 18, 2018

Detail Considered Harmful

Ever since the dawn of time, we’ve been crafting pixel perfect icons, specifically adhering to the target resolution. As we moved on, we’ve kept with the times and added the highly detailed, high resolution, 3D modelled app icons that WebOS and Mac OS X introduced.

Many moons have passed since GNOME 3, so it’s fair to stop and reconsider the aesthetic choices we made. We don’t actually present app icons at small resolutions anymore. Pixel perfection sounds like a great slogan, but maybe this is another area that dilutes our focus. Asking app authors to craft pixel precise variants that nobody actually sees? Complex size lookup infrastructure that prominent applications like Blender fail to utilize properly?

Blender: Linux is the only platform with a botched app icon.

For the platform to become viable, we need to cater to app developers. Just like Flatpak aims to make it easy to distribute apps, and does it in a completely decentralized manner, we should empathize with app developers and let them design and maintain their own identity.

Having clear and simple guidelines for other major platforms and then seeing our super complicated ones, with destructive mechanisms of theming in place, makes me really question why we have anyone actually targeting GNOME.

The irony of the previous blog post is not lost on me, as I’ve been seduced by the shading and detail of these highres artworks. But every day it’s more obvious that we need a dramatic redesign of the app icon style. Perhaps we could allow the unstable/nightlies style to be generated programmatically. Allow a faster turnaround for keeping the style contemporary and in sync with what other platforms are doing. Right now, the dated nature of our current guidelines shows.

Time to murder our darlings…

Krita 4.1.1 Released

Today we’re releasing Krita 4.1.1, the first bug fix release for Krita 4.1.0.

  • Fix loading PyKrita when using PyQt 5.11 (patch by Antonio Rojas, thanks!) (BUG:396381)
  • Fix possible crashes with vector objects (BUG:396145)
  • Fix an issue when resizing pixel brushes in the brush editor (BUG:396136)
  • Fix loading the system language on macOS if more than one language is enabled in macOS
  • Don’t show the unimplemented color picker button in the vector object tool properties docker (BUG:389525)
  • Fix activation of the autosave time after a modify, save, modify cycle (BUG:393266)
  • Fix out-of-range lookups in the cross-channel curve filter (BUG:396244)
  • Fix an assert when pressing PageUp into the reference images layer
  • Avoid a crash when merging layers in isolated mode (BUG:395981)
  • Fix loading files with a transformation mask that uses the box transformation filter (BUG:395979)
  • Fix activating the transform tool if the Box transformation filter was selected (BUG:395979)
  • Warn the user when using an unsupported version of Windows
  • Fix a crash when hiding the last visible channel (BUG:395301)
  • Make it possible to load non-conforming GPL palettes like https://lospec.com/palette-list/endesga-16
  • Simplify display of the warp transformation grid
  • Re-add the Invert Selection menu entry (BUG:395764)
  • Use KFormat to show formatted numbers (Patch by Pino Toscano, thanks!)
  • Hide the color sliders config page
  • Don’t pick colors from fully transparent reference images (BUG:396358)
  • Fix a crash when embedding a reference image
  • Fix some problems when saving and loading reference images (BUG:396143)
  • Fix the color picker tool not working on reference images (BUG:396144)
  • Extend the panning range to include any reference images

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.1.1 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).
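
Verifying a download is the usual GnuPG dance; the appimage filename below is illustrative:

gpg --import 0x58b9596c722ea3bd.asc
gpg --verify krita-4.1.1-x86_64.appimage.sig krita-4.1.1-x86_64.appimage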

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

July 16, 2018

LWV National Convention, 2018: Plenary Sessions

or: How Sausage is Made

I'm a big fan of the League of Women Voters. Really. State and local Leagues do amazing work. They publish and distribute those non-partisan Voter Guides you've probably seen before each election. They register new voters, and advocate for voting rights and better polling access for everybody, including minorities and poor people. They advocate on lots of other issues too, like redistricting, transparency, the influence of money in politics, and health care. I've only been involved with the League for a few years; although my grandmother was active in her local League as far back as I can remember, somehow it didn't occur to me to get involved until I moved to a small town where it was more obvious what a difference the local League made.

So, local and state Leagues are great. But after returning from my second LWV national convention, I find myself wondering how all this great work manages to come out of an organization that has got to be the most undemocratic, conniving political body I've ever been involved with.

I have separate write-ups of the caucuses and other program sessions I attended at this year's convention, for other LWV members wanting to know what they missed. But the Plenary sessions are where the national League's business is conducted, and I felt I should speak publicly about how they're run.

In case there's any confusion, this article describes my personal reactions to the convention's plenary sessions. I am speaking only for myself, not for any state or local league.

The 2018 National Convention Plenary Sessions

I didn't record details of every motion; check the Convention 2018 Daily Briefing if you care. (You might think there would be a published official record of the business conducted at the national convention; good luck on finding it.)

The theme of the convention, printed as a banner on many pages of the convention handbook, was Creating a More Perfect Democracy. It should have been: Democracy: For Everyone Else.

Friday Plenary

In case you're unfamiliar with the term (as I was), "Plenary" means full or complete, from the Latin plenus, full. A plenary session is a session of a conference which all members of all parties are to attend. It doesn't seem to imply voting, though that's how the LWVUS uses the term.

After the national anthem, the welcome by a designated local official, a talk, an opening address, acceptance of various committee reports, and so on, the tone of the convention was set with the adoption of the convention rules.

A gentleman from the Oregon state League (LWVOR) proposed a motion that would have required internal decisions to be able to be questioned as part of convention business. This would include the controversial new values statement. There had been discussion of the values statement before the convention, establishing that many people disagreed with it and wanted a vote.

LWVUS president Chris Carson wasn't having any of it. First, she insisted, the correct parliamentary way to do this was to vote to approve the rest of the rules, not including this one. That passed easily. Then she stated that the motion on the table would require a 2/3 vote, because it was an amendment to the rules which had just passed. (Never mind that she had told us we were voting to pass all the rules except that one).

The Oregon delegate who had made the motion protested that the first paragraph of the convention rules on page 27 of the handbook clearly stated that amendment of the rules only requires a simple majority. Carson responded that would have been true before the convention rules were adopted, but now that we'd voted to adopt them, it now required a 2/3 vote to amend them due to some other rule somewhere else, not in the handbook. She was adamant that the motion could not now pass with a simple majority.

The Oregon delegate was incredulous. "You mean that if I'd known you were going to do this, I should have protested voting on adopting the rules before voting on the motion?"

The room erupted in unrest. Many people wanted to speak, but after only a couple, Carson unilaterally cut off further discussion. But then, after a lot of muttering with her Parliamentarian, she announced that she would take a show-of-hands vote on whether to approve her ruling requiring the 2/3 vote. She allowed only three people to speak on that motion (the motion to accept her ruling) and then called the question herself.

The vote was fairly close but was ruled to be in favor of her ruling, meaning that the original motion would require a 2/3 vote. When we finally voted on the original motion it looked roughly equal, not 2/3 in favor -- so the motion to allow debate on the values statement failed.

(We never did find out what this mysterious other rule was that supposedly mandated the 2/3 vote. The national convention has an official Parliamentarian sitting on the podium, as well as parliamentary assistants sitting next to each microphone in the audience, but somehow there's nobody who does much of a job of keeping track of what's going on or can state the rules under which we're operating. Several times during the three days of plenary, Carson and her parliamentarian lost track of things, for instance, saying she'd hear two pro and two con comments but actually calling three pro and one con.)

I notice in the daily briefing, this whole fracas is summarized as, "The motion was defeated by a hand vote."

Officer "Elections"

With the rules adopted by railroad, we were next presented with the slate of candidates for national positions. That sounds like an election but it's not.

During discussion of the previous motion, one national board member speaking against the motion (or for Carson's 2/3 ruling, I can't remember which) said "You elected us, so you should trust us." That spawned some audience muttering, too. See, in case there's any confusion, delegates at the convention do not actually get to vote for candidates. We're presented with a complete slate of candidates chosen by the nominating committee (for whom we also do not vote), and the only option is to vote yes or no on the whole slate "by acclamation".

There is one moment where it is possible to make a nomination from the floor. If nominated, such a nominee has one minute to make her case to the delegates before the final vote. Since there's obviously no chance, there are seldom any floor nominees, and on the rare occasion someone tries, they invariably lose.

Now, I understand that it's not easy getting volunteers for leadership positions in nonprofit organizations. It's fairly common, in local organizations, that you can't fill out all the available positions and have to go begging for people to fill officer positions, so you'll very often see a slate of officers proposed all at once. But in the nationwide LWVUS? In the entire US, in the (hundreds of thousands? I can't seem to find any membership figures, though I found a history document that says there were 157,000 members in the 1960s) of LWV members nationwide, there are not enough people interested in being a national officer that there couldn't be a competitive election? Really?

Though, admittedly ... after watching the sausage being made, I'm not sure I'd want to be part of that.

Not Recommended Items

Of course, the slate of officers was approved. Then we moved on to "Not Recommended Items". How that works: in the run-up to the convention, local Leagues propose areas the National board should focus on during the upcoming two years. The National board decides what they care about, and marks the rest as "Not recommended". During the Friday plenary session, delegates can vote to reconsider these items.

I knew that because I'd gone to the Abolish the Electoral College caucus the previous evening, and that was the first of the not-recommended items proposed for consideration.

It turned out there were two similar motions: the "Abolish the Electoral College" proposal and the "Support the National Popular Vote Compact" proposal, two different approaches to eliminating the electoral college. The NPV is achievable -- quite a few states have already signed, totalling 172 electoral votes of the 270 that would be needed to bring the compact into effect. The "Abolish" side, on the other hand, would require a Constitutional amendment which would have to be ratified even by states that currently have a big advantage due to the electoral college. Not going to happen.

Both proposals got enough votes to move on to consideration at Saturday's plenary, though. Someone proposed that the two groups merge their proposals, and met with the groups after the session, but alas, we found out on Saturday that they never came to agreement.

One more proposal that won consideration was one to advocate for implementation of the Equal Rights Amendment should it be ratified. A nice sentiment that everyone agreed with, and harmless since it's not likely to happen.

Friday morning "Transformation Journey" Presentation and Budget Discussion

I didn't take many notes on this, except during the presentation of the new IT manager, who made noise about reduced administrative burden for local Leagues and improving access to data for Leagues at all levels. These are laudable goals and badly needed, though he didn't go into any detail about how any of it was going to work. Since it was all vague high-level hand waving I won't bother to write up my notes (ask me if you want to see them).

The only reason I have this section here is for the sharp-eyed person who asked during the budget discussion, "What's this line item about 'mailing list rental?'"

Carson dismissed that worry -- Oh, don't worry, there are no members on that list. That's just a list of donors who aren't members.

Say what? People who donate to the LWVUS, if they aren't members, get their names on a mailing list that the League then sells? Way to treat your donors with respect.

I wish nonprofits would get a clue. There are so many charities that I'd like to donate to if I could do so without resigning myself to a flood of paper in my mailbox every day for the rest of my life. If nonprofits had half a lick of sense, they would declare "We will never give your contact info to anyone else", and offer "check this box to be excluded even from our own pleas for money more than once or twice a year." I'd be so much more willing to donate.

Saturday Plenary

The credentials committee reported: delegates present represented 762 Leagues, with 867 voting delegates from 49 states plus the District of Columbia. That's out of 1709 eligible voting delegates -- about half. Not surprising given the expense of the convention. I'm told there have been proposals in past years to change the rules to make it possible to vote without attending convention, but no luck so far.

Consideration of not-recommended items: the abolition of the electoral college failed. Advocacy for the National Popular Vote Compact passed. So the delegates agreed with me on which of the two is achievable. Too bad the Electoral Abolition people weren't willing to compromise and merge their proposal with the NPV one.

The ERA proposal passed overwhelmingly.

Rosie Rios, 43rd Treasurer of the US, gave a terrific talk on, among other things, the visibility of women on currency, in public art and in other public places, and what that means for girls growing up. I say a little more about her talk in my Caucus Summary.

We had been scheduled to go over the bylaws before Rios' talk, but that plan had been revised because there was an immigration protest (regarding the separation of children from parents) scheduled some distance north of the venue, and a lot of delegates wanted to go. So the revised plan, we'd been told Friday, was to have Rios' talk and then adjourn and discuss the bylaws on Sunday.

Machinations

What actually happened: Carson asked for a show of hands of people who wanted to go to the protest, which looked like maybe 60% of the room. She dismissed those people with well wishes.

Then she looked over the people still in the room and said, "It looks like we might still have a quorum. Let's count."

I have no idea what method they used to count the people sitting in the room, or what count they arrived at: we weren't told, and none of this is mentioned in the daily summary linked at the top of this article. But somehow she decided we still had a quorum, and announced that we would begin discussion of the bylaws.

The room erupted in angry murmurs -- she had clearly stated before dismissing the other delegates that we were done for the day and would not be discussing the bylaws until Sunday.

"It's appalling", one of our delegation, a first-timer, murmured. Indeed.

But the plenary proceeded. We voted to pass the first bylaws proposal, an uncontroversial one that merely clarified some wording, and I'm sure the intent was to sneak the second proposal through as well -- a vague proposal making it easier to withdraw recognition from a state or local league -- but enough delegates remained who had actually read the proposals and weren't willing to let it by without discussion.

On the other hand, the discussion didn't come to anything. A rewording amendment that I'm told had been universally agreed to at the Bylaws caucus the previous evening failed to go through because too many of the people who understood the issue were away at the protest. The amendment failed, so even though we ran out of time and had to stop before voting on the proposal, the amended wording had already failed and couldn't be reconsidered on Sunday when the discussion was resumed.

(In case you're curious, this strategy is also how Pluto got demoted from being a planet. The IAU did almost exactly the same thing as the LWVUS, waiting until most of the voting members were out of the room before presenting the proposal to a small minority of delegates. Astronomers who were at the meeting but out of the room for the Pluto vote have spoken out, saying the decision was a bad one and makes little sense scientifically.)

Sunday Plenary

There's not much to say about Sunday. The bylaws proposal was still controversial, especially since half the delegation never had the chance to vote on the rewording proposal; the vote required a "card vote", meaning rather than counting hands or voices, delegates passed colored cards to the aisles to be counted. This was the only card vote of the convention.

Accessibility note: I was surprised to note that the voting cards were differentiated only by color; they didn't have anything like "yes" or "no" printed on them. I wonder how many colorblind delegates there were in that huge roomful of people who couldn't tell the cards apart.

The rest of Sunday's voting was on relatively unimportant, uncontroversial measures, ending with a bunch of proclamations that don't actually change anything. Those easily passed, rah, rah. We're against gun violence, for the ERA, against the electoral college, for pricing carbon emissions, for reproductive rights and privacy, and for climate change assessments that align with scientific principles. Nobody proposed anything about apple pie but I'm sure we would have been for that too.

And thus ended the conference and we all headed off to lunch or the airport. Feeling frustrated, a bit dirtied and not exactly fired up about Democracy.



July 11, 2018

Welcoming the gPhoto Project to the PIXLS.US community!

Helping the community one project at a time

A major goal of the PIXLS.US effort is to do whatever we can do to help developers of projects unburden themselves from administrating their project. We do this, in part, by providing forum hosting, participating in support, providing web design, and doing community outreach. With that in mind, we are excited to welcome the gPhoto Project to our discuss forum!

The Entangle interface, which makes use of libgphoto.

You may not have heard of gPhoto, but there is a high chance that you’ve used the project’s software. At the heart of the project is libgphoto2, a portable library that gives applications access to hundreds of digital cameras. On top of the foundational library is gphoto2, a command line interface to your camera that supports almost everything the library can do. The library is used in a bunch of awesome photography applications, such as digiKam, darktable, entangle, and GIMP. There is even a FUSE module, so you can mount your camera storage as a normal filesystem.
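
If you have never tried it, the command line tool is a one-liner away (both flags below are standard gphoto2 options):

# List the cameras gphoto2 can detect on USB.
gphoto2 --auto-detect
# Capture a frame and download it to the current directory.
gphoto2 --capture-image-and-download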

gPhoto was recruited to the PIXLS.US community when @darix was sitting next to gPhoto developer Marcus. Marcus was using darix’s Fuji camera to test integration into libgphoto, then the magic happened! Not only will some Fuji models be supported, but our community is growing larger. This is also a reminder that one person can make a huge difference. Thanks darix!

Welcome, gPhoto, and thank you for the years and years of development!

July 09, 2018

Interview with Andrea Buso

Could you tell us something about yourself?

“I am in the middle of the journey of our life ..” (50 years). I was born in Padua in Italy. Since I was a child I’ve always drawn; during primary school I created my first Japanese-robot-style cartoon (Mazinga). I attended art school, took computer graphics courses with Adobe products in 1995, and then a specialization course at the school of Illustration for Children of Sarmede in Treviso (IT).

I worked as a freelancer in advertising and comics, and I taught traditional and digital painting and drawing in my studio (now closed), La Casa Blu. Today I continue to draw as a hobby and for some work, since I teach painting in a center for disabled people.

Do you paint professionally, as a hobby artist, or both?

These days I paint both as a hobby and for work, though more as a hobby. Teaching in the special-needs center takes a lot of my time, but it also gives me a lot of satisfaction.

What genre(s) do you work in?

I do not have a specific genre; I like to change technique and style, to find new ways, even if my background is in comics. Generally, though, I prefer themes of fiction and fantasy. As you can guess, I love mixing everything I like.

Whose work inspires you most — who are your role models as an artist?

In addition to loving Michelangelo Buonarroti and Caravaggio, I have studied Richard Corben, Simon Bisley and Frank Frazetta. Currently I am following Mozart Couto, Ramon Miranda and David Revoy.

How and when did you get to try digital painting for the first time?

In 2000, my brother, a computer programmer, had me try openSUSE. I used GIMP, and I felt good because I could draw what I wanted and how I wanted. Since then I have abandoned Windows for Linux, and I have discovered a series of wonderful programs which allow me to work professionally, giving me the advantages of digital.

What makes you choose digital over traditional painting?

In my opinion digital painting is infinite. You can do whatever you want and retrace your steps whenever you want. It has an infinite number of techniques and tools to create with, techniques and tools that you can even create yourself. The limit is your own imagination.

How did you find out about Krita?

Watching YouTube videos by Mozart Couto, Ramon Miranda and David Revoy, I saw that they used Krita. I did not know what it was, so I did some research on the Internet and found the site. Voilà! Love was born! Today it is my favorite program (I’m not just saying that to make a good impression on you!).

What was your first impression?

I must say that at the beginning the approach to Krita was a bit difficult. I came from the experience with GIMP and MyPaint, software with a mild learning curve. But in the end I managed to “tame” Krita to my will, and now it’s my home.

What do you love about Krita?

Given that there are features of Krita that I don’t know and maybe never will know, because they’re not necessary for my painting technique, I love everything about Krita!

Above all the panel for creating brushes. It’s wonderful; sometimes I spend hours creating brushes which I’ll never use, because they don’t make sense, but I create them to see how far Krita can go. I love the possibility of combining raster and vector layers, the ability to change text as I want, and layer styles. Everything in Krita is perfect for my needs.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Improvements to Krita that would matter to me would be an implementation of effects such as clouds, plasma, etcetera (I mention those of GIMP to give an example). Moreover, because I have the habit of adjusting lights and shadows at the end of a work, I miss controls such as exposure and other typically photographic effects.

I have nothing negative to say about Krita.

What sets Krita apart from the other tools that you use?

The freedom to manage your work, the potential of the various tools and the stability of the software are the most salient features of Krita. When I use Krita, I feel free to create without technical limitations from the software. Also, the quality of the brushes is unparalleled; when you print your work you realize how much care went into them and how real they look.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

North (spirit), is my favorite work done with Krita, so far.

It contains the first brushes I (seriously) created. The texture of the paper was created with Krita. It also reflects my passion for the peoples of the north: my great-grandfather was Swedish. Therefore there is a large part of myself in the drawing, and the drawing represents me.

What techniques and brushes did you use in it?

I used a brush of my own creation and took inspiration from Mucha (an Art Nouveau painter). The coloring style is similar to cel shading, while the reflections and the glow of the moonlight were created with the standard FX brushes. The scene is set on a magical night.

Where can people see more of your work?

You can see my work on Deviantart: https://audector.deviantart.com/

Anything else you’d like to share?

Thank you for the interview, it was an honor for me. I would invite all creative people to use Krita, but above all Linux and KDE. The possibilities to work well and professionally are concrete; there is no longer a gap between open source and Windows. If anything, with Linux and KDE you can work better. Thanks to you!

July 07, 2018

Script to modify omni.ja for a custom Firefox

A quick followup to my article on Modifying Firefox Files Inside omni.ja:

The steps for modifying the file are fairly easy, but they have to be repeated often.

First there's the problem of Firefox updates: if a new omni.ja is part of the update, then your changes will be overwritten, so you'll have to make them again on the new omni.ja.

But, worse, even aside from updates the changes don't stay put. I've had Ctrl-W mysteriously revert to its old wired-in behavior in the middle of a Firefox session. I'm still not clear how this happens: I speculate that something in Firefox's update mechanism may allow parts of omni.ja to be overridden, even though Mike Kaply, the onetime master of overlays, told me they aren't recommended any more (at least for users, though that doesn't necessarily mean they're not used for updates).

But in any case, you can be browsing merrily along and suddenly one of your changes doesn't work any more, even though the change is still right there in browser/omni.ja. The only fix I've found so far is to download a new Firefox and re-apply the changes. Re-applying them to the current version doesn't work -- they're already there. And it doesn't help to keep the tarball you originally downloaded around so you can re-install that; Firefox updates every week or two, so that version is guaranteed to be out of date.

All this means that it's crazy not to script the omni changes so you can apply them easily with a single command. So here's a shell script that takes the path to the current Firefox, unpacks browser/omni.ja, makes a couple of simple changes and re-packs it. I called it kitfox-patch since I used to call my personally modified Firefox build "Kitfox".
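The original script is linked above; here is a minimal sketch of the same idea, with the scratch-directory handling filled in by me and the Ctrl-W sed command from the earlier article as the example change:

#!/bin/sh
# kitfox-patch (sketch): re-apply local omni.ja changes to a Firefox install.
# Usage: kitfox-patch /path/to/firefox
set -e

FIREFOX_DIR=${1:?usage: kitfox-patch /path/to/firefox}
OMNI="$FIREFOX_DIR/browser/omni.ja"

WORK=$(mktemp -d /tmp/omni-work-XXXXXX)
cd "$WORK"

# unzip exits non-zero because of omni.ja's odd zip headers, but extracts fine
unzip -q "$OMNI" || true

# The actual changes; edit these sed commands to match your own patches.
sed -i '/key_close/s/ reserved="true"//' chrome/browser/content/browser/browser.xul

# Repack with the flags discussed in the earlier article, keep a backup,
# and install over the original.
zip -qr9XD "$WORK/new-omni.ja" *
cp "$OMNI" "$OMNI.bak"
cp "$WORK/new-omni.ja" "$OMNI"

cd /
rm -rf "$WORK"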

Of course, if your changes are different from mine you'll want to edit the script to change the sed commands.

I hope eventually to figure out how it is that omni.ja changes stop working, and whether it's an overlay or something else, and whether there's a way to re-apply fixes without having to download a whole new Firefox. If I figure it out I'll report back.

July 03, 2018

GIMP 2.10.4 Released

The latest update of GIMP’s new stable series delivers bugfixes, simple horizon straightening, async fonts loading, fonts tagging, and more new features.

Simple Horizon Straightening

A common use case for the Measure tool is getting GIMP to calculate the angle of rotation when the horizon in a photo is tilted. GIMP now removes the extra step of rotating manually: after measuring the angle, just click the newly added Straighten button in the tool's settings dialog.

Straightening images in GIMP 2.10.4.

Asynchronous Fonts Loading

Loading all available fonts on start-up can take quite a while, because as soon as you add new fonts or remove existing ones, fontconfig (a third-party utility GIMP uses) has to rebuild the fonts cache. Windows and macOS users suffered the most from this.

Thanks to Jehan Pagès and Ell, GIMP now performs the loading of fonts in a parallel process, which dramatically improves startup time. The caveat is that if you need to use the Text tool immediately, you might have to wait until all fonts have finished loading. GIMP will notify you of that.

Fonts Tagging

Michael Natterer introduced some internal changes to make fonts taggable. The user interface is the same as for brushes, patterns, and gradients.

GIMP doesn’t yet automatically generate any tags from fonts metadata, but this is something we keep on our radar. Ideas and, better yet, patches are welcome!

Dashboard Updates

Ell added several new features to the Dashboard dockable dialog, which helps with debugging GIMP and GEGL or, for end users, with fine-tuning the use of cache and swap.

The new Memory group of widgets shows the currently used memory size, the available physical memory size, and the total physical memory size. It can also show the tile-cache size, for comparison against the other memory stats.

Updated Dashboard in GIMP 2.10.4.

Note that the upper-bound of the meter is the physical memory size, so the memory usage may be over 100% when GIMP uses the swap.

The Swap group now features “read” and “written” fields which report the total amount of data read-from/written-to the tile swap, respectively. Additionally, the swap busy indicator has been improved, so that it’s active whenever data has been read-from/written-to the swap during the last sampling interval, rather than at the point of sampling.

PSD Loader Improvements

While we cannot yet support PSD features such as adjustment layers, there is one thing we can do for users who just need the file to render correctly in GIMP. Thanks to Ell, GIMP can now load a “merged”, pre-composited version of the image, which is available when a PSD file was saved with the “Maximize Compatibility” option enabled in Photoshop.

This option is currently exposed as an additional file type (“Photoshop image (merged)”), which has to be explicitly selected from the filetype list when opening the image. GIMP will then render the file correctly, but will drop certain additional data from the file, such as channels, paths, and guides, while retaining metadata.

Builds for macOS Make a Comeback

Beta builds of GIMP 2.10 for macOS are available now. We haven’t eliminated all issues yet, and we appreciate your feedback.

GEGL and babl

Ell further improved the Recursive Transform operation, allowing multiple transformations to be applied simultaneously. He also fixed the trimming of the tile cache into the swap.

The new Selective Hue-Saturation operation by Miroslav Talasek is now available in the workshop. The idea is that you can choose a base hue, select the width of the hue range around it, then tweak the saturation of all affected pixels.

Øyvind Kolås applied various fixes to the Pixelize operation and added the “needs-alpha” meta-data to Color to Alpha and svg-luminancetoalpha operations. He also added a Threshold setting to the Unsharp Mask filter (now called Sharpen (Unsharp Mask)) to restore and improve the legacy Unsharp Mask implementation from GIMP prior to v2.10.

In babl, Ell introduced various improvements to the babl-palette code, including making the default palette initialization thread-safe. Øyvind Kolås added an R~G~B~ set of spaces (which for all BablSpaces mean use sRGB TRC), definitions of ACEScg and ACES2065-1 spaces, and made various clean-ups. Elle Stone contributed a fix for fixed-to-double conversions.

Ongoing Development

While we spend as much time on bugfixing in 2.10.x as we can, our main goal is to complete the GTK+3 port as soon as possible. There is a side effect of this work: we keep discovering old subpar solutions that frustrate us until we fix them. So there is both GTK+3 porting and refactoring going on, which means we can't predict when it'll be done.

Recently, we also revitalized an outdated subproject called ‘gimp-data-extras’ with the sole purpose of keeping the Alpha-to-Logo scripts that were removed from 2.10 due to poor graphics quality. Since some users miss those scripts, there is now a simple way to get them back: download gimp-data-extras v2.0.4, unpack the archive, and copy all ‘.scm’ files from the ‘scripts’ folder to your local GIMP ‘scripts’ folder.
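On Linux with GIMP 2.10, for example, the whole dance could look like this (the archive URL and the per-user scripts location are my assumptions; check Edit → Preferences → Folders → Scripts if your setup differs):

wget https://download.gimp.org/pub/gimp/extras/gimp-data-extras-2.0.4.tar.bz2
tar xf gimp-data-extras-2.0.4.tar.bz2
cp gimp-data-extras-2.0.4/scripts/*.scm ~/.config/GIMP/2.10/scripts/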

July 02, 2018

Affiliated Vendors on the LVFS

We’re just about to deploy another feature to the LVFS that might be interesting to some of you. First, some nomenclature:

OEM: Original Equipment Manufacturer, the user-known company name on the outside of the device, e.g. Sony, Panasonic, etc
ODM: Original Design Manufacturer, typically making parts for one or more OEMs, e.g. Foxconn, Compal

For some OEMs, the ODM is the entity responsible for uploading the firmware to the LVFS. The per-device QA is typically done by the OEM rather than the ODM, although it can be both. Before today we didn’t have a good story for handling this other than a “fake” oem_odm@oem.com user account shared by all users at the ODM. A shared fake account is bad design from both a security and a privacy point of view, so we needed something better.

The LVFS administrator can now mark a vendor as an “affiliate” of another vendor. This gives the ODM permission to upload firmware that is “owned” by the OEM on the LVFS and that appears in the OEM embargo metadata. The OEM QA team is also able to edit the update description, move the firmware to testing and stable (or delete it entirely) as required. The ODM vendor account also doesn’t have to appear in the search results or the vendor table, making it hidden to all users except OEMs.

This also means that if an ODM like Foxconn builds firmware for two different OEMs, they have to specify which vendor should “own” the firmware at upload time. This is achieved with a simple selection widget on the upload page, which is only shown if affiliations have been set up. The ODM is able to manage their user accounts directly, either using local accounts with passwords or ODM-specific OAuth; the latter is the preferred choice as it means there is only one place to manage credentials.

If anyone needs more information, please just email me or leave a comment below. Thanks!

fwupdate is {nearly} dead; long live fwupd

If the title confuses you, you’re not the only one who’s been confused by the fwupdate and fwupd project names. The latter used the shared library of the former to schedule UEFI updates, with the former also providing the fwup.efi secure-boot-signed binary that actually runs the capsule update for the latter.

In Fedora the only users of libfwupdate were fwupd and the fwupdate command line tool itself. It makes complete sense to absorb the redundant libfwupdate library interface into the uefi plugin in fwupd. Benefits I can see include:

  • fwupd and fwupdate are very similar names; a lot of ODMs and OEMs have been confused, especially the ones not so Linux-savvy.
  • fwupd already depends on efivar for other things, so there are no additional deps in fwupd.
  • Removal of an artificial library interface, with all the soname and package-induced pain. No matter how small, maintaining any project is a significant use of resources.
  • The CI and translation hooks are already in place for fwupd, and we can use the merging of projects as a chance to write lots of low-level tests for all the various hooks into the system.
  • We don’t need to check for features or versions in fwupd, we can just develop the feature (e.g. the BGRT localised background image) all in one branch without #ifdefs everywhere.
  • We can do cleverer things whilst running as a daemon, for instance uploading the fwup.efi to the ESP as required rather than installing it as part of the distro package.
    • The last point is important; several distros don’t allow packages to install files on the ESP and this was blocking fwupdate being used by them. Also, 95% of the failures reported to the LVFS are from Arch Linux users who didn’t set up the ESP correctly as the wiki says. With this new code we can likely reduce the reported error rate by several orders of magnitude.

Note, fwupd doesn’t actually obsolete fwupdate, as the latter might still be useful if you’re testing capsule updates on something super-embedded that doesn’t ship GLib or D-Bus. We do ship a D-Bus-less fwupdate-compatible command line in /usr/libexec/fwupd/fwupdate if you’re using the old CLI from a shell script. We’re all planning to work on the new integrated fwupd version, but I’m sure there’ll be some sharing of fixes between the projects, as libfwupdate is shipped in a lot of LTS releases like RHEL 7.
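For shell scripts that need to keep working across the transition, a hedged sketch (assuming the shim keeps the old option names) could be:

#!/bin/sh
# Prefer the original fwupdate binary if present, otherwise fall back
# to fwupd's compatibility shim.
if command -v fwupdate >/dev/null 2>&1; then
    FWUPDATE=fwupdate
else
    FWUPDATE=/usr/libexec/fwupd/fwupdate
fi
"$FWUPDATE" --list    # assumption: the shim accepts the old --list flag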

All of this new goodness is available in fwupd git master, which will become the new 1.1.0 release, probably available next week. The 1_0_X branch (which depends on libfwupdate) will be maintained for a long time, and is probably the better choice to ship in LTS releases at the moment. Any distros that ship the new 1.1.x fwupd versions will need to ensure that the fwup.efi files are signed properly if they want SecureBoot to work; in most cases just copying over the commands from the fwupdate package is all that is required. I’ll be updating Fedora Rawhide with the new package as soon as it’s released.

Comments welcome.

FreeCAD BIM development news - June 2018

Hi all, time for a new update on the development of BIM tools for FreeCAD. There is some exciting new stuff; most of it is things I've been working on for some time that are now ready. As always, a big thank you to everybody who helped me this month through Patreon or Liberapay! We are...

June 27, 2018

Krita 4.1.0 Released

Three months after the release of Krita 4.0, we’re releasing Krita 4.1!

This release includes the following major new features:


  • A new reference images tool that replaces the old reference images docker.
  • You can now save and load sessions: the set of images and views on images you were working on
  • You can create multi-monitor workspace layouts
  • An improved workflow for working with animation frames
  • An improved animation timeline display
  • Krita can now handle larger animations by buffering rendered frames to disk
  • The color picker now has a mixing option
  • Improved vanishing point assistant — and assistants can be painted with custom colors
  • Krita’s scripting module can now be built with Python 2
  • The first part of Ivan Yossi’s Google Summer of Code work on improving the performance of brush masks through vectorization is included as well!

And there is a host of bug fixes, of course, plus improvements to rendering performance and other features. Read the full release notes to discover what’s new in Krita 4.1!

Image by RJ Quiralta

Note!

We found a bug where activating the transform tool causes a crash if you had previously selected the Box filter. If you experience a crash when enabling the transform tool in Krita 4.1.0, open your kritarc file and remove the line that says “filterId=Box” in the [KisToolTransform] section. Sorry for the inconvenience. We will bring out a bugfix release as soon as possible.
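On Linux, assuming the default kritarc location (it differs per platform), the one-liner equivalent would be something like:

sed -i '/^filterId=Box/d' ~/.config/kritarc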

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.1.0 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

June 25, 2018

Interview with Natasa

Could you tell us something about yourself?

Hey, my name is Natasa, I’m a Greek illustrator from Athens currently living in Portugal. My nick is Anastasia_Arjuk. I get all of my inspiration from nature, mythology and people.

Do you paint professionally, as a hobby artist, or both?

I’ve been working on and off professionally; I did some book covers, children’s book illustrations and a bit of jewelry design back home. But life happened and now I’m starting fresh, trying to build something that’s all mine. I’ve never stopped drawing though, and I’m very happy about that.

What genre(s) do you work in?

The picture has to tell a story, that’s all I really look into. Other than that I just pick what feels right each time.

Whose work inspires you most — who are your role models as an artist?

There are so many! Among contemporary ones I’d say Gennady Spirin, Lisbeth Zwerger and Andrew Hem. In digital art, Apterus is excellent in my opinion. I also love Byzantine art, Islamic art and a huge number of old painters, way too many to mention here. Don’t ignore the history of art, folks, you won’t believe the difference it will make to your work.

How and when did you get to try digital painting for the first time?

I actually started in early 2017; I had worked only traditionally before that. I’m still not completely comfortable with it, but getting there.

What makes you choose digital over traditional painting?

For practical reasons, really; it’s so much easier to work professionally in digital art. From having more room, to mailing, to everything. I still prefer traditional art for my personal projects though.

How did you find out about Krita?

I was looking on YouTube for Photoshop lessons at the time, and ran into the channel of an artist who was using Krita. The brushwork seemed so creamy and rich, I had to try it out.

What was your first impression?

I loved the minimal UI and it felt very intuitive. Easy to pick up and go.

What do you love about Krita?

First of all, it has an animation studio included; I hadn’t done 2D animation in years and now I can do it at home, on my PC. Yay! The brush engine is second to none quite frankly, and yes, I tried more than Krita before reaching that conclusion. I love the mirror tools, the eraser system and that little colour pick-up docker where you can attach your favorite brushes as well. Love that little bugger, so practical. Oh, and the pattern tool.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I’d like to be able to lock the entire UI in place, not just the dockers, if possible. To be able to zoom in and out like in Photoshop, with the Z key in combination with the pen. An improved Text tool. Also probably a stronger engine, to handle larger files. Just nitpicking really.

What sets Krita apart from the other tools that you use?

It’s a very professional freeware program. I very much support what that stands for and, like I said, an amazing, amazing brush engine. Coming from traditional media, textures are extremely important to me. Also the animation possibilities.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I don’t like to dwell on older pieces – you can see all their mistakes after they’re done – but I’d say Anansi, the Spider. I learned a lot working on that piece.

What techniques and brushes did you use in it?

I just painted it the same way as traditional art: layers of colour on top of each other, new layer – paint – erase at certain spots, rinse and repeat, a bit like a watercolour technique. I wanted the underpainting to be visible in parts. I don’t remember the brushes now, but they were all default brushes from the Paint tab, which I also used as erasers. A little bit of overlay filters and color maps, and voilà. Like I said, Krita is very intuitive.

Where can people see more of your work?

Artstation: https://www.artstation.com/anastasia_arjuk
Behance: https://www.behance.net/Anastasia_Arjuk
Instagram: https://www.instagram.com/anastasia_arjuk/
Twitter: https://twitter.com/Anastasia_Arjuk
Deviant Art: https://anastasia-arjuk.deviantart.com/
YouTube: https://www.youtube.com/channel/UCAy9Hg8ZaV87wqT6GO4kVnw

Anything else you’d like to share?

Thanks for having me, first of all, and keep up the good work. In all honesty, Krita makes a huge difference to people who want to get involved with art but can’t afford (or don’t want to use) the industry standards. Such a professional open source program is a vital help.

June 24, 2018

Modifying Firefox Files Inside Omni.ja

My article on Fixing key bindings in Firefox Quantum by modifying the source tree got attention from several people who offered helpful suggestions via Twitter and email on how to accomplish the same thing using just files in omni.ja, so it could be done without rebuilding the Firefox source. That would be vastly better, especially for people who need to change something like key bindings or browser messages but don't have a souped-up development machine to build the whole browser.

Brian Carpenter had several suggestions and eventually pointed me to an old post by Mike Kaply, Don’t Unpack and Repack omni.ja[r] that said there were better ways to override specific files.

Unfortunately, Mike Kaply responded that that article was written for XUL extensions, which are now obsolete, so the article ought to be removed. That's too bad, because it did sound like a much nicer solution. I looked into trying it anyway, but the instructions it points to for overriding specific files are woefully short on detail about how to map a path inside omni.ja, like chrome://package/type/original-uri.whatever, to a URL, and the single example I could find was so old that the file it referenced no longer exists at the same location. After a fruitless half hour or so, I took Mike's warning to heart and decided it wasn't worth wasting more time chasing something that wasn't expected to work anyway. (If someone knows otherwise, please let me know!)

But then Paul Wise offered a solution that actually worked, as an easy-to-follow sequence of shell commands. (I've changed some of them very slightly.)

$ tar xf ~/Tarballs/firefox-60.0.2.tar.bz2
  # (This creates a "firefox" directory inside the current one.)

$ mkdir omni
$ cd omni

$ unzip -q ../firefox/browser/omni.ja
warning [../firefox/browser/omni.ja]:  34187320 extra bytes at beginning or within zipfile
  (attempting to process anyway)
error [../firefox/browser/omni.ja]:  reported length of central directory is
  -34187320 bytes too long (Atari STZip zipfile?  J.H.Holm ZIPSPLIT 1.1
  zipfile?).  Compensating...
zsh: exit 2     unzip -q ../firefox/browser/omni.ja

$ sed -i 's/or enter address/or just twiddle your thumbs/' chrome/en-US/locale/browser/browser.dtd chrome/en-US/locale/browser/browser.properties

I was a little put off by all the warnings unzip gave, but kept going.

Of course, you can just edit those two files rather than using sed; but the sed command was Paul's way of being very specific about the changes he was suggesting, which I appreciated.

Use these flags to repackage omni.ja:

$ zip -qr9XD ../omni.ja *
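  # -q quiet, -r recurse into directories, -9 best compression,
  # -X omit extra file attributes, -D omit directory entries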

I had tried that before (without the q since I like to see what zip and tar commands are doing) and hadn't succeeded. And indeed, when I listed the two files, the new omni.ja I'd just packaged was about a third the size of the original:

$ ls -l ../omni.ja ../firefox/browser/omni.ja
-rw-r--r-- 1 akkana akkana 34469045 Jun  5 12:14 ../firefox/browser/omni.ja
-rw-r--r-- 1 akkana akkana 11828315 Jun 17 10:37 ../omni.ja

But still, it's worth a try:

$ cp ../omni.ja ../firefox/browser/omni.ja

Then run the new Firefox. I have a spare profile I keep around for testing, but Paul's instructions included a nifty way of running with a brand new profile and it's definitely worth knowing:

$ cd ../firefox

$ MOZILLA_DISABLE_PLUGINS=1 ./firefox -safe-mode -no-remote -profile $(mktemp -d tmp-firefox-profile-XXXXXXXXXX) -offline about:blank

Also note the flags like safe-mode and no-remote, plus disabling plugins -- all good ideas when testing something new.

And it worked! When I started up, I got the new message, "Search or just twiddle your thumbs", in the URL bar.

Fixing Ctrl-W

Of course, now I had to test it with my real change. Since I like Paul's way of using sed to specify exactly what changes to make, here's a sed version of my Ctrl-W fix:

$ sed -i '/key_close/s/ reserved="true"//' chrome/browser/content/browser/browser.xul

Then run it. To test Ctrl-W, you need a website that includes a text field you can type in, so -offline isn't an option unless you happen to have a local web page that includes some text fields. Google is an easy way to test ... and you might as well re-use that firefox profile you just made rather than making another one:

$ MOZILLA_DISABLE_PLUGINS=1 ./firefox -safe-mode -no-remote -profile tmp-firefox-profile-* https://google.com

I typed a few words in the google search field that came up, deleted them with Ctrl-W -- all was good! Thanks, Paul! And Brian, and everybody else who sent suggestions.

Why are the sizes so different?

I was still puzzled by that threefold difference in size between the omni.ja I repacked and the original that comes with Firefox. Was something missing? Paul had the key to that too: use zipinfo on both versions of the file to see what differed. It turned out that Mozilla's version, after a long file listing, ends with

2650 files, 33947999 bytes uncompressed, 33947999 bytes compressed:  0.0%

while my re-packaged version ends with

2650 files, 33947969 bytes uncompressed, 11307294 bytes compressed:  66.7%
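Those totals are just the last line of zipinfo's listing, so something like this gets you straight to them:

$ zipinfo ../firefox/browser/omni.ja | tail -1
$ zipinfo ../omni.ja | tail -1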

So apparently Mozilla's omni.ja is using no compression at all. It may be that that makes it start up a little faster; but Quantum takes so long to start up that any slight difference in uncompressing omni.ja isn't noticable to me.

I was able to run through this whole procedure on my poor slow netbook, the one where building Firefox took something like 15 hours ... and in a few minutes I had a working modified Firefox. And with the sed command, this is all scriptable, so it'll be easy to re-do whenever Firefox has a security update. Win!

Update: I have a simple shell script to do this: Script to modify omni.ja for a custom Firefox.

June 22, 2018

Thomson 8-bit computers, a history

In March 1986, my dad was in the market for a Thomson TO7/70. I have the circled classified ads in “Téo” issue 1 to prove that :)



TO7/70 with its chiclet keyboard and optical pen, courtesy of MO5.com

The “Plan Informatique pour Tous” was in full swing, and Thomson were supplying schools with micro-computers. My dad, as a primary school teacher, needed to know how to operate those computers, and eventually teach them to kids.

The first thing he showed us when he got the computer, on the living room TV, was a game called “Panic” or “Panique” where you controlled a missile, protecting a town from flying saucers that flew across the screen from either side, faster and faster as the game went on. I still haven't been able to locate this game again.

A couple of years later, the TO7/70 was replaced by a TO9, with a floppy drive, and my dad used that computer to write educational software about top-down additions, as part of a training program run by the teachers' schools (“Écoles Normales”, renamed “IUFM” in 1990).

After months of nagging, and some spring cleaning, he found the listings of his educational software, which I've liberated, with his permission. I'm currently still working out how to generate floppy disks that are usable directly in emulators. But here's an early screenshot.


Later on, my dad got an IBM PC compatible, an Olivetti PC/1, on which I'd play a clone of Asteroids for hours, but that's another story. The TO9 got passed down to me, and after I spent a full summer planning my hot-dog and chips van business (I was 10 or 11; I had weird hobbies already) and entering every game from the “102 Programmes pour...” series of books, it got put to the side at Christmas, replaced by a Sega Master System using that same handy SCART connector on the Thomson monitor.

But how does this concern you? Well, I worked with RetroManCave on a Minitel episode not too long ago, and he agreed to do a history of the Thomson micro-computers. I did a fair bit of the research and fact-checking, as well as some needed repairs to the (prototype!) hardware I managed to find for the occasion. The result is this first look at the history of Thomson.



Finally, if you fancy diving into the Thomson computers, there will be an episode coming shortly about the MO5E hardware, and some games worth running on it, on the same YouTube channel.

I'm currently working on bringing the TeoTO8D emulator to Flathub, for Linux users. When that's ready, grab some games from the DCMOTO archival site and have some fun!

I'll also be posting some nitty-gritty details about Thomson repairs on my Micro Repairs Twitter feed for the more technically inclined among you.

June 21, 2018

First Beta Release of Krita 4.1

Three months after the release of Krita 4.0, we’re releasing the first (and probably only) beta of Krita 4.1, a new feature release! This release includes the following major new features:

  • A new reference images tool that replaces the old reference images docker.
  • You can now save and load sessions: the set of images and views on images you were working on
  • You can create multi-monitor workspace layouts
  • An improved workflow for working with animation frames
  • An improved animation timeline display
  • Krita can now handle larger animations by buffering rendered frames to disk
  • The color picker now has a mixing option
  • Improved vanishing point assistant — and assistants can be painted with custom colors
  • Krita’s scripting module can now be built with Python 2
  • The first part of Ivan Yossi’s Google Summer of Code work on improving the performance of brush masks through vectorization is included as well!

And there’s more. Read the full release notes to discover what’s new in Krita 4.1! With this beta release, the release notes are still a work in progress, though.

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.1.0-beta.2 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

June 18, 2018

Practical Printer Profiling with Gutenprint

Some time ago I purchased an Epson Stylus Photo R3000 printer, as I wanted to be able to print at A3 size and get good-quality monochrome prints. For a while I struggled to get good-quality color photo output from the R3000 using Gutenprint, as it took me a while to figure out which settings worked best for generating and applying ICC profiles.

As a side note, if you happen to have an R3000 as well and want to get good results using Gutenprint, you can get some of my profiles here; not all of them have been practically tested, so obviously your mileage may vary.

Gutenprint’s documentation clearly indicates that you should use the “Uncorrected” Color Correction mode, which is very good advice, as we need deterministic output to be able to generate and apply our ICC profiles in a consistent manner. What threw me off is that the “Uncorrected” Color Correction mode produces linear gamma output, which in practice means very dark output, which the ICC profile will need to correct for. While this is a valid approach, it generally means you need to generate a profile using more color patches, which means using more ink and paper for each profile you generate. A more practical approach is to set Composite Gamma to a value of 1.0, which gamma-corrects the output to look more perceptually natural. Consequently the ICC profile has less to correct for, and can thus be generated from fewer color patches, using less ink and paper.

Keep in mind that a printer profile is only valid for a particular combination of printer, ink set, paper, driver and settings. Therefore you should document all these facets when generating a profile. This can be as simple as including a similarly named plain text file with each profile you create, for example:

Filename ............: epson_r3000_tecco_photo_matte_230.icc
MD5 Sum .............: 056d6c22ea51104b5e52de8632bd77d4

Paper Type ..........: Tecco Photo Matte 230

Printer Model .......: Epson Stylus Photo R3000
Printer Ink .........: Epson UltraChrome K3 with Vivid Magenta
Printer Firmware ....: AS25E3 (09/11/14)
Printer Driver ......: Gutenprint 5.2.13 (Ubuntu 18.04 LTS)

Color Model .........: RGB
Color Precision .....: Normal
Media Type ..........: Archival Matte Paper
Print Quality .......: Super Photo
Resolution ..........: 1440x1440 DPI
Ink Set .............: Matte Black
Ink Type ............: Standard
Quality Enhancement .: None
Color Correction ....: Uncorrected
Image Type ..........: Photograph
Dither Algorithm ....: EvenTone
Composite Gamma .....: 1.0

You’ll note I’m not using the maximum 5760×2880 resolution Gutenprint supports for this printer: the quality increase seems almost negligible, while it slows down printing immensely and may also increase ink consumption with little to show for it.

While the Matte Black (MK) ink set and the Archival Matte Paper media type work very well for matte papers, you should probably use the Photo Black (PK) ink set with the Premium Glossy Photo Paper media type for glossy papers, or Premium Semigloss Photo Paper for pearl, satin and lustre media types.

The following profiling procedure uses only a single sheet of A4 paper, with very decent results. You can use multiple pages by increasing the patch count, though the increase in effective output quality will likely be underwhelming; your mileage may vary, of course.

To proceed you’ll need a spectrophotometer (a colorimeter won’t suffice) supported by ArgyllCMS, for example the X-Rite ColorMunki Photo.

To install ArgyllCMS and other relevant tools on Debian (or one of its derivatives like Ubuntu):

apt-get install argyll liblcms2-utils imagemagick

First we’ll need to generate a set of color patches (we include a neutral grey axis, so the profile can more effectively neutralize Epson’s warm-tone grey inks):

targen -v -d 3 -G -g 14 -f 210 myprofile
printtarg -v -i CM -h -R 42 -t 360 -M 6 -p A4 myprofile

This results in a TIF file, which you need to print at whatever settings you want to use the profile with. Make sure you let the print dry (and outgas) for at least an hour. After the print has dried, we can start measuring the patches with our spectrophotometer:

chartread -v -H myprofile

Once all the patches have been read, we’re ready to generate the actual profile.

colprof -v -D "Tecco Photo Matte 230 for Epson R3000" \
           -C "Copyright 2018 Your Name Here" \
           -Zm -Zr -qm -nc \
           -S /usr/share/color/argyll/ref/sRGB.icm \
           -cmt -dpp myprofile

Note: if you’re generating a profile for a glossy or lustre paper type, remove -Zm from the colprof command line.

Evaluating Your Profile

After generating a custom print profile we can evaluate the profile using xicclu:

xicclu -g -fb -ir myprofile.icc

Looking at the graph above, there are a few things of note. You’ll notice the graph doesn’t touch the lower right corner, which represents the profile’s black point; keep in mind that the blackest black any printer can print still reflects some light, and thus isn’t perfectly black, i.e. 0.

Another point of interest is the curvature of the lines. If the graph bows significantly toward the upper right, the media type you have chosen is causing Gutenprint to put down more ink than the paper you’re using can take. Conversely, if the graph bows significantly toward the lower left, the chosen media type is causing Gutenprint to put down less ink than the paper can take. While a profile will compensate for either, a profile that has to compensate too strongly may cause banding artifacts in rare cases, especially with an 8-bit workflow. While I haven’t yet had a case where I needed to, you can use the Density control to adjust the amount of ink put on paper.

Visualizing Printer Gamut

To visualize the effective gamut of your profile you can generate a 3D Lab colorspace graph using iccgamut, which you can view with any modern web browser:

iccgamut -v -w -n myprofile.icc
xdg-open myprofile.x3d.htm

Comparing Gamuts

To compare the gamut of our new custom print profile against a standard working colorspace like sRGB follow these steps:

cp /usr/share/color/argyll/ref/sRGB.icm .
iccgamut -v sRGB.icm
iccgamut -v myprofile.icc
viewgam -i -n myprofile.gam sRGB.gam srgb_myprofile
Intersecting volume = 406219.5 cubic units
'myprofile.gam' volume = 464977.8 cubic units, intersect = 87.36%
'sRGB.gam' volume = 899097.5 cubic units, intersect = 45.18%
xdg-open srgb_myprofile.x3d.htm

From the above output we can conclude that our custom print profile covers about 45% of sRGB, meaning the printer’s gamut is much smaller than sRGB. However, we can also see that sRGB in turn covers about 87% of our custom print profile, which means that 13% of our profile’s gamut actually lies beyond the gamut of sRGB.
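(The percentages come straight from the volumes above: 406219.5 / 899097.5 ≈ 45.2%, and 406219.5 / 464977.8 ≈ 87.4%.)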

This is where gamut mapping comes in: the declared rendering intent determines how colors outside the shared gamut are handled.

While a Relative Colorimetric rendering intent limits your prints to the shared area, effectively giving you the smallest practical gamut, it offers the best color accuracy.

A Perceptual rendering intent will scale down colors from an area where a working space profile has a larger gamut (the other 55% of sRGB) into a smaller gamut.

A Saturation rendering intent will also scale up colors from an area where a working space profile has a smaller gamut into a larger gamut (the 13% of our custom print profile).

Manually Preparing Prints using liblcms2-utils

To test your profile, I suggest getting a good test image, for example from SmugMug, and applying your new profile, using either Perceptual gamut mapping or Relative Colorimetric gamut mapping with Black Point Compensation, respectively:

jpgicc -v -o printer.icc -t 0    -q 95 original.jpg print.jpg
jpgicc -v -o printer.icc -t 1 -b -q 95 original.jpg print.jpg

When you open either of the print-corrected images, you’ll most likely find they both look awful on your computer’s display; keep in mind this is because the images are correcting for printer, driver, ink and paper behavior. If you actually print either image, it should look fairly close to the original on your display (presuming your display is set up properly and calibrated as well).

Manually Preparing Prints using ImageMagick

A more sophisticated way to prepare real images for printing is to use (for example) ImageMagick. The examples below illustrate how you can use ImageMagick to scale an image to a fixed resolution (360 DPI) for a given paper size, add print sharpening (this is why having a known static resolution is important; otherwise the sharpening would give inconsistent results across different images), add a thin black border and a larger but equidistant (presuming a 3:2 image) white border, and finally convert the image to our custom print profile:

A4 paper

convert -profile /usr/share/color/argyll/ref/sRGB.icm \
        -resize 2466^ -density 360 -unsharp 2x2+1+0 \
        -bordercolor black -border 28x28 -bordercolor white -border 227x227 \
        -black-point-compensation -intent relative -profile myprofile.icc \
        -strip -sampling-factor 1x1 -quality 95 original.jpg print.jpg
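In case you’re wondering where the resize value comes from: the image dimension plus both borders has to equal the paper’s short edge at 360 DPI. For A4 that’s 2466 + 2 × (28 + 227) = 2976 px, and 2976 / 360 ≈ 8.27 in, i.e. A4’s 210 mm short edge. The A3 and A3+ values below work out the same way.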

A3 paper

convert -profile /usr/share/color/argyll/ref/sRGB.icm \
        -resize 3487^ -density 360 -unsharp 2x2+1+0 \
        -bordercolor black -border 28x28 -bordercolor white -border 333x333 \
        -black-point-compensation -intent relative -profile myprofile.icc \
        -strip -sampling-factor 1x1 -quality 95 original.jpg print.jpg

A3+ paper

convert -profile /usr/share/color/argyll/ref/sRGB.icm \
        -resize 4320^ -density 360 -unsharp 2x2+1+0 \
        -bordercolor black -border 28x28 -bordercolor white -border 152x152 \
        -black-point-compensation -intent relative -profile myprofile.icc \
        -strip -sampling-factor 1x1 -quality 95 original.jpg print.jpg

Automatically Preparing Prints via colord

While the above method gives you a lot of control over how images are prepared for printing, you may also want to use a profile for printing on plain paper, where the input is the output of any random application, as opposed to a raster image file that can easily be preprocessed.

Via colord you can assign a printer an ICC profile that will be automatically applied through cups-filters (pdftoraster). Keep in mind that this profile can only be changed through colormgr (or another colord frontend, like GNOME Control Center) and, sadly, not through an application’s print dialog. To avoid messing with driver settings too much, I would suggest duplicating your printer in CUPS (see the sketch after this list), for example:

  • a printer instance for plain paper prints (with an ICC profile assigned through colord)
  • a printer instance for matte color photographic prints (without a profile assigned through colord)
  • a printer instance for (semi)glossy color photographic prints
  • a printer instance for matte black and white photographic prints (likely without a need for a profile at all)
  • a printer instance for (semi)glossy black and white photographic prints (likely without a need for a profile at all)

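As a sketch of the duplication itself (the queue name, device URI and model string below are illustrative; lpinfo -m lists the Gutenprint models actually available on your system):

lpstat -v    # find the device URI of the existing queue
lpadmin -p r3000-plain -E \
        -v usb://EPSON/Epson%20Stylus%20Photo%20R3000 \
        -m gutenprint.5.2://escp2-r3000/expert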
One caveat of duplicating a printer in CUPS is that it also creates multiple print queues. If you send prints to multiple separate queues, you have a race condition where it’s anybody’s guess which queue delivers the next print to your single physical printer, so prints may come out in a different order than you sent them. My guess is that this disadvantage will hardly be noticeable for most people, and very tolerable to most who would notice it.

One thing to keep in mind is that pdftoraster applies an ICC profile by default using a Perceptual rendering intent, which means that out-of-gamut colors in a source image are scaled to fit inside the print profile’s gamut. Fundamentally, the Perceptual rendering intent trades color accuracy for keeping gradients intact, which is most often a fairly sensible thing to do. Given this tidbit of information, and the fact that pdftoraster assumes sRGB input (unless explicitly told otherwise), I’d like to emphasize the importance of passing the -S parameter with an sRGB profile to colprof when generating print profiles for use on Linux.

To assign an ICC profile to be applied automatically by cups-filters:

sudo cp navigator_colour_documents_120.icc /var/lib/colord/icc/navigator_colour_documents_120.icc
colormgr import-profile /var/lib/colord/icc/navigator_colour_documents_120.icc
colormgr find-profile-by-filename /var/lib/colord/icc/navigator_colour_documents_120.icc
colormgr get-devices-by-kind printer
colormgr device-add-profile \
         /org/freedesktop/ColorManager/devices/cups_EPSON_Epson_Stylus_Photo_R3000 \
         /org/freedesktop/ColorManager/profiles/icc_c43e7ce085212ba8f85ae634085ecfd3

More on Gutenprint media types

In contrast to commercial printer drivers, Gutenprint gives us the opportunity to peek under the covers and find out more about the different media types Gutenprint supports for your printer. First, look up your printer’s model number:

$ grep 'R3000' /usr/share/gutenprint/5.2/xml/printers.xml 
<printer ... name="Epson Stylus Photo R3000" driver="escp2-r3000" ... model="115" ...

Then find the relevant media definition file:

$ grep 'media src' /usr/share/gutenprint/5.2/xml/escp2/model/model_115.xml 
<media src="escp2/media/f360_ultrachrome_k3v.xml"/>

Finally you can dig through the relevant media type definitions, where the Density parameter is of particular interest:

$ less /usr/share/gutenprint/5.2/xml/escp2/media/f360_ultrachrome_k3v.xml
<paper ... text="Plain Paper" ... PreferredInkset="ultra3matte">
  <ink ... name="ultra3matte" text="UltraChrome Matte Black">
    <parameter type="float" name="Density">0.720000</parameter>
<paper ... text="Archival Matte Paper" ... PreferredInkset="ultra3matte">
  <ink ... name="ultra3matte" text="UltraChrome Matte Black">
    <parameter type="float" name="Density">0.920000</parameter>
<paper ... text="Premium Glossy Photo Paper" ... PreferredInkset="ultra3photo">
  <ink ... name="ultra3photo" text="UltraChrome Photo Black">
    <parameter type="float" name="Density">0.720000</parameter>
<paper ... text="Premium Semigloss Photo Paper" ... PreferredInkset="ultra3photo">
  <ink ... name="ultra3photo" text="UltraChrome Photo Black">
    <parameter type="float" name="Density">0.720000</parameter>
<paper ... text="Photo Paper" ... PreferredInkset="ultra3photo">
  <ink ... name="ultra3photo" text="UltraChrome Photo Black">
    <parameter type="float" name="Density">1.000000</parameter> 

Dedicated Grey Neutralization Profile

As mentioned earlier, the Epson R3000 uses warm tone grey inks, which results in very pleasant true black & white images, without any color inks used, at least when Gutenprint is told to print in Greyscale mode.

If, unlike me, you don’t like the warm-tone effect, applying the ICC profile we generated should mostly neutralize it, though possibly not perfectly. That is fine for neutral areas in color prints, but may be less satisfactory for proper black & white prints.

While I haven’t done any particular testing on this issue, you may want to consider creating a second profile dedicated to and tuned for grey neutralization. Just follow the normal profiling procedure with the following target generation command instead:

targen -v -d 3 -G -g 64 -f 210 -c previous_color_profile.icc -A 1.0 -N 1.0 grey_neutral_profile

Obviously you’ll need to use this profile in RGB color mode, even though your end goal may be monochrome, given that the profile needs to use color inks to compensate for the warm-tone grey inks.

YouTube Blocks Blender Videos Worldwide

Thursday June 21 2018, by Ton Roosendaal

Last night all videos came back (except the one from Andrew Price, which is still blocked in the USA). According to another person at YouTube, we now *do* have to sign the other agreement as well. You can read it here.

I’m not sure if we should accept this. We will study it.

Wednesday 17h, June 20 2018, by Ton Roosendaal

Our videos still don’t play.

Wednesday 10.30h, June 20 2018, by Ton Roosendaal

Last night the YouTube support team contacted Francesco Siddi by phone. As we understand it now, it’s a mix of coincidences, bad UIs, wrong error messages, ignorant support desks and our non-standard decision not to monetize a popular YouTube channel.

The coincidence is that YouTube is rolling out their subscription system in Europe (and the Netherlands). This subscription system will allow users to stream music and enjoy YouTube ad-free. They updated the terms and conditions for it and need monetized channel owners to approve them. Coincidentally, our channel was set to allow monetization.

The bad UI was that the ‘please accept the new terms’ button was only visible if you go to the new YouTube “Content Manager” account, which I was not aware of and which is not active when you log in to YouTube using the Foundation account to manage videos. The channel was also set to monetization mode, which has no option to reset it. To make us even more confused, yesterday the system generated the wrong agreement to be signed.

(Image: after logging in to the Foundation account, the “Switch Accounts” menu shows the option to log in as “Content Manager”.)

Because we had not accepted the new terms, the wrong error message was to put all videos in “Not available in your country” mode, which usually signals a copyright issue. Something similar happened to Andrew Price’s video last year, which (according to our new contact) was because of a trademark dispute, but that was never made explicit to us.

All the support desk people we contacted (since December last year) couldn’t find out what was wrong. They didn’t know that not accepting the ‘terms and conditions’ could be causing this. Until yesterday they thought there was a technical error.

After reviewing the new terms and conditions (which basically amount to accepting the subscription system), I decided to accept them. According to our new YouTube contact, our channel would then be back in a few hours.

Just while writing this, the video thumbnails appeared to be back! They don’t play yet.

Tuesday (afternoon) 19 June 2018, by Ton Roosendaal

We are doing a PeerTube test on video.blender.org. It is running on one of our own servers, in a European datacenter. Just click around and have some fun. We’re curious to see how it holds up!

Tuesday 19 June 2018, by Ton Roosendaal

Last night we received a contract from Google. You can read it here. It’s six pages of legal talk, but the gist of the agreement appears to be about Blender Foundation accepting to monetize content on its Youtube channel.

However, BF has had an ad-free YouTube account since 2008. We have monetization disabled, but it looks like Google is going to change this policy. For example, we now see a new section on our channel settings page: “Monetization enabled”.

However, the actual advertisement option is disabled in the advanced settings:

Now there’s another issue. Last year we were notified by US YouTube visitors that a very popular Blender Conference talk wasn’t visible for them – the talk Andrew Price gave in 2016, The 7 Habits of Highly Effective Artists. It had over a million views already.

With our channel reaching over 100k subscribers, we have special priority support. So we contacted them to ask what was wrong. After a couple of mails back and forth, the reply was as follows (22 Dec 2017):

Thanks for your continued support and patience.

I’ve received an update from our experts stating that you need to enable ads for your video. Once you enable, your video will be available in the USA.

If there’s anything else you’d need help with, please feel free to write back to us anytime as we are available 24/7 to take care of every partner’s concerns.

Appreciate your understanding and thanks for being our valuable partner. Have an amazing day!

That was quite a surprising statement for us. My reply therefore was (22 Dec 2017):

I’m chairman of the Blender Foundation. We choose to use a 100% ad-free channel for our work, to emphasize our public benefit and non-profit goals.

According to your answer we are being forced to enable advertising now.
I would like to know where this new Youtube policy has been published and made official.

We then got a reply like this every other month:

Please allow me some time to work with specialists on your issue. I’ll investigate further and will reach back to you with an update at the earliest possible.

Appreciate your patience and understanding in the interim.

Just last week, June 12, I mailed them again to ask for the status of this issue. The reply was:

I completely understand your predicament. Apologies for the unusual delay in hearing back from the Policy team. I’ve escalated this issue for further investigation and assistance. Kindly bear with us while we get this fixed.

Appreciate your understanding in this regard.

And then on June 15th the entire channel went black.
To us it is still unclear what is going on. It could be related to YouTube’s new “subscription” system. It could also just be a human error or a bug; our refusal to monetize videos on a massively popular channel isn’t common.
However, it remains a fair and relevant question to Google: do you allow ad-free channels without monetization? Stay tuned!

Monday 18 June 2018, by Francesco Siddi

For a few days now, all Blender videos on the OFFICIAL BLENDER CHANNEL have been blocked worldwide without explanation. We are working with YouTube to resolve the issue, but the support has been less than stellar. In the meantime you can find most of the videos on cloud.blender.org.

June 17, 2018

Blender at Annecy 2018

The Blender team is back from the Annecy International Animation Film Festival 2018 and MIFA, the industry marketplace which takes place during the festival. Annecy is a major international event for over 11,000 animation industry professionals, and having a Blender presence there was an extremely rewarding experience.


The MIFA 2018

The entrance of the MIFA, at the Hotel Imperial

Hundreds of people stopped by the Blender booth and were amazed by the upcoming Blender 2.8 feature videos, the Blender Open Movie reels, the Hero showcase and the live Grease Pencil demos prepared by Daniel M. Lara. Breaking down production files step by step was a crowd pleaser and drew an impressive number of compliments, good feedback and follow-up requests.

Demo setup

Daniel M. Lara showcasing Grease Pencil

While two years ago our presence was more focused on the upcoming Agent 327 film project, having a clearer focus on the software led to active outreach from studios currently using Blender in their production pipelines. In France alone, there are dozens of small and medium studios using Blender to produce film and TV series. These companies are often looking for artists and professional trainers, and expressed positive remarks about the Blender Network and BFCT initiatives.

Café des Arts

Café des Arts is where the festival happens at night

Overall, this experience confirmed the growing appreciation and adoption of Blender as an integral part of production pipelines. This is made possible by the Blender development team and the Blender community, which is often seen as one of the main reasons for switching tools.

A shout out to Pablo, Hjalti and Daniel for the great work at the booth. Keeping the show running 10 hours a day for 4 consecutive days was no joke :)

Until next year!
Francesco

The Annecy 2018 Team

June 14, 2018

security things in Linux v4.17

Previously: v4.16.

Linux kernel v4.17 was released last week, and here are some of the security things I think are interesting:

Jailhouse hypervisor

Jan Kiszka landed Jailhouse hypervisor support, which uses static partitioning (i.e. no resource over-committing), where the root “cell” spawns new jails by shrinking its own CPU/memory/etc resources and hands them over to the new jail. There’s a nice write-up of the hypervisor on LWN from 2014.

Sparc ADI

Khalid Aziz landed the userspace support for Sparc Application Data Integrity (ADI or SSM: Silicon Secured Memory), which is the hardware memory coloring (tagging) feature in Sparc M7. I’d love to see this extended into the kernel itself, as it would kill linear overflows between allocations, since the base pointer being used is tagged to belong to only a certain allocation (sized to a multiple of cache lines). Any attempt to increment beyond, into memory with a different tag, raises an exception. Enrico Perla has some great write-ups on using ADI in allocators and a comparison of ADI to Intel’s MPX.

new kernel stacks cleared on fork

It was possible that old memory contents would live in a new process’s kernel stack. While normally not visible, “uninitialized” memory read flaws or read overflows could expose these contents (especially stuff “deeper” in the stack that may never get overwritten for the life of the process). To avoid this, I made sure that new stacks were always zeroed. Oddly, this “priming” of the cache appeared to actually improve performance, though it was mostly in the noise.

MAP_FIXED_NOREPLACE

As part of further defense in depth against attacks like Stack Clash, Michal Hocko created MAP_FIXED_NOREPLACE. The regular MAP_FIXED has a subtle behavior not normally noticed (but used by some, so it couldn’t just be fixed): it will replace any overlapping portion of a pre-existing mapping. This means the kernel would silently overlap the stack into mmap or text regions, since MAP_FIXED was being used to build a new process’s memory layout. Instead, MAP_FIXED_NOREPLACE has all the features of MAP_FIXED without the replacement behavior: it will fail if a pre-existing mapping overlaps with the newly requested one. The ELF loader has been switched to use MAP_FIXED_NOREPLACE, and it’s available to userspace too, for similar use-cases.
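
Here is a minimal demonstration of the new flag's semantics, assuming a v4.17+ kernel and headers that define MAP_FIXED_NOREPLACE (on older glibc you may need to pull the flag from <linux/mman.h>):

#define _GNU_SOURCE
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;

    /* Create an ordinary anonymous mapping somewhere. */
    void *base = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* MAP_FIXED at this address would silently replace the existing
     * mapping; MAP_FIXED_NOREPLACE refuses and fails with EEXIST. */
    void *p = mmap(base, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE,
                   -1, 0);
    if (p == MAP_FAILED)
        printf("overlap refused: %s\n", strerror(errno));

    return 0;
}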

pin stack limit during exec

I used a big hammer and pinned the RLIMIT_STACK values during exec. There were multiple methods to change the limit (through at least setrlimit() and prlimit()), and there were multiple places the limit got used to make decisions, so it seemed best to just pin the values for the life of the exec so no games could get played with them. Too much assumed the value wasn’t changing, so better to make that assumption actually true. Hopefully this is the last of the fixes for these bad interactions between stack limits and memory layouts during exec (which have all been defensive measures against flaws like Stack Clash).
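
For reference, a quick sketch of the userspace interfaces involved; this is plain getrlimit()/setrlimit() usage, nothing kernel-internal. The v4.17 change just means that whatever values the kernel sampled at the start of exec are the ones it keeps using for its layout decisions:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Read the current stack limits. */
    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("stack soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* Lower the soft limit; setrlimit() (and prlimit(), which can
     * also target another process) are the interfaces mentioned above. */
    rl.rlim_cur = 8UL * 1024 * 1024;   /* 8 MiB */
    if (setrlimit(RLIMIT_STACK, &rl) != 0)
        perror("setrlimit");

    return 0;
}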

Variable Length Array removals start

Following some discussion over Alexander Popov’s ongoing port of the stackleak GCC plugin, Linus declared that Variable Length Arrays (VLAs) should be eliminated from the kernel entirely. This is great because it kills several stack exhaustion attacks, including weird stuff like stepping over guard pages with giant stack allocations. However, with several hundred uses in the kernel, this wasn’t going to be an easy job. Thankfully, a whole bunch of people stepped up to help out: Gustavo A. R. Silva, Himanshu Jha, Joern Engel, Kyle Spiers, Laura Abbott, Lorenzo Bianconi, Nikolay Borisov, Salvatore Mesoraca, Stephen Kitt, Takashi Iwai, Tobin C. Harding, and Tycho Andersen. With Linus Torvalds and Martin Uecker, I also helped rewrite the max() macro to eliminate false positives seen by the -Wvla compiler option. Overall, about 1/3rd of the VLA instances were solved for v4.17, with many more coming for v4.18. I’m hoping we’ll have entirely eliminated VLAs by the time v4.19 ships.
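
The typical removal pattern looks like this; a made-up example rather than any specific kernel commit. The runtime-sized stack array becomes a fixed worst-case buffer plus an explicit bound check, so the maximum stack usage is known at compile time:

/* Hypothetical helper, for illustration only. */
#define BUF_MAX 64

int process(const unsigned char *data, unsigned int n)
{
    /* Before: unsigned char buf[n];  <- stack usage controlled by 'n' */
    unsigned char buf[BUF_MAX];        /* after: bounded at compile time */
    unsigned int i;

    if (n > BUF_MAX)
        return -1;                     /* kernel code would return -EINVAL */

    for (i = 0; i < n; i++)
        buf[i] = data[i] ^ 0x5a;       /* stand-in for real work */

    return n ? buf[0] : 0;
}

This also plays well with -Wvla: once no legitimate VLAs remain, the warning can flag any regressions.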

That’s it for now! Please let me know if you think I missed anything. Stay tuned for v4.18; the merge window is open. :)

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

The Font BoF at Libre Graphics Meeting 2018

(Call it What’s New in Open Fonts, № 003 if you must. At least this one didn’t take as long as the LibrePlanet one to get pushed out into the street.)

Libre Graphics Meeting (LGM) is the annual shindig for all FOSS creative-graphics-and-design projects and users. This year’s incarnation was held in Sevilla, Spain at the end of April. I barely got there in time, having been en route from Typo Labs in Berlin a week earlier.

I put a “typography BoF” onto the schedule, and a dozen-ish people turned up. BoFs vary wildly across instances; this one featured a lot of developers…

Help

…which is good. I got some feedback to a question I posed in my talk: where is the “right” place for a font package to integrate into a Linux/FOSS help system? I could see that working within an application (i.e., GIMP ingests any font-help files and makes them available via the GIMP help tool) or through the standard desktop “help.”

Currently, font packages don’t hook into the help system. At best, some file with information might get stuffed somewhere in /usr/share/doc/, but you’ll never get told about that. But they certainly could hook in properly, to provide quick info on the languages, styles, and features they support. Or, for newer formats like variable fonts or multi-layer color fonts, to provide info on the axes, layers, and color palettes. I doubt anyone could use Bungee without consulting its documentation first.

But a big question here (particularly for feature support) is what’s the distinction between this “help” documentation and the UI demo strings that Matthias is working on showing in the GTK+ font explorer (as discussed in the LibrePlanet post).

The short answer is that “demo strings” show you “what you will get” (and do so at a glance); help documentation tells you the “why this is implemented” … and, by extension, makes it possible to search for a use case. For example: SIL Awami implements a set of features that do Persian stylistic swashes etc.

You could show someone the affected character strings, but knowing the intent behind them is key, and that’s a help issue. E.g., you might open the help box and search for “Persian fonts” and you’d find the Awami help entry related to it. That same query wouldn’t work by just searching for the Arabic letters that happen to get the swash-variant treatment.

Anyway, GIMP already has hooks in place to register additions to its built-in help system; that’s how GIMP plug-ins get supported by the help framework. So, in theory, the same hooks could be used for a font package to let GIMP know that help docs are available. Inkscape doesn’t currently have this, but Tavmjong Bah noted that it wouldn’t be difficult to add. Scribus does not seem to have an entry point.

In any case, after just a couple of minutes it seemed clear that putting such help documentation into every application is the wrong approach (in addition to not currently being possible). It ought to be system wide. For desktop users, that likely means hooking into the GNOME or KDE help frameworks.

Styles and other human-centric metadata

The group (or is it birds?) also talked about font management. One of the goals of the “low hanging fruit” improvements to font packaging that I’ve been cheerleading for the past few months is that better package-level features will make it possible for *many* application types and/or utilities to query, explore, and help the user make font decisions. So you don’t get just the full-blown FontMatrix-style manager, but you might also get richer font exploration built into translation tools, and you might get a nice “help me find a replacement for the missing font in this document” extension for LibreOffice, etc. Maybe you could even get font identification.

Someone brought up the popular idea that users want to be able to search their font library via stylistic attributes. This is definitely true; the tricky part is that, historically, it’s proven to be virtually impossible to devise a stylistic-classification scheme that (a) works for more than just one narrow slice of the world’s typefaces and (b) works for more than just a small subset of users. PANOSE is one such system that hasn’t taken over the world yet; another guy on Medium threw machine learning at the Google Fonts library and came up with his own classification set … although it’s not clear that he intends to make that data set accessible to anybody else.

And, even with a schema, you’d still have to go classify and tag all of the actual fonts. It’d be a lot of additional metadata to track; one person suggested that Fontbakery could include a check for it — someone else commented that you’d really want to track whether or not a human being had given those tags a QA/sanity check.

Next, you’d have to figure out where to store that stylistic metadata. Someone asked whether or not fontconfig ought to offer an interface to request fonts by style. Putting that a little more abstractly, let’s say that you have stylistic tags for all of your fonts (however those tags are generated); should they get stored in the font binary? In fonts.conf? Interestingly enough, a lot of old FontMatrix users reportedly still have their FontMatrix tags on their systems, so supporting those is a kind of de-facto “tagging solution” for new development projects. Style tags (however they’re structured) are just a subset of user-defined tags anyway: people are certainly going to want to amend, adjust, overwrite, and edit tags until they’re useful at a personal level.

Of course, the question of where to store font metadata is a recurring issue. It’s also come up in Matthias Clasen’s work on implementing smart-font–feature previews for GTK+ and in the AppStream project <font>-object discussion. Storing everything in the binary is nice ‘n’ compact, but it means that info is only accessible where the binary is — not, for example, when you’re scrolling through a package manager trying to find a font you want to install. Storing everything in a sidecar file helps that problem, but it means you’ve got two files to lose instead of one. And, of course, we all die a little on the inside whenever we see “XML”.

Dave Crossland pointed out one really interesting tidbit here: the OpenType STAT table, which was ostensibly created in conjunction with the variable-fonts features, can be used in every single-master, non-variable font, too. There, it can store valuable metadata about where each individual font file sits on the standard axes of variation within a font family (like the weight, width, and optical size in relation to other font files). I wrote a brief article about STAT in 2016 for LWN if you want more detail. It would be a good thing to add to existing open fonts, certainly. Subsequently, Marc Foley at Google has started adding STAT tables to Google Fonts families; it started off as a manual process, but the hope is that getting the workflow worked out will lead to proper tooling down the line.

Last but not least, the Inkscape team indicated that they’re interested in expanding their font chooser with an above-the-family–level selector; something that lets the user narrow down the fonts to choose from in high-level categories like “text”, “handwriting”, “web”, and so on. That, too, requires tracking some stylistic information for each font.

The unanswered question remains: whose job should it be to fix or extend this sort of metadata in existing open fonts, particularly those that haven’t been updated in a while? Does this work rise to the level of taking over maintainership of the upstream source? That could be controversial, or at least complicated.

Specimens, collections, and more

We also revisited the question of specimen creation. We discussed several existing tools for creating specimens; when considering the problem of specimen creation for the larger libre font universe, however, the problem is more one of time × people-power. Some ideas were floated, including volunteer sprints, drives conducted in conjunction with Open Source Design, and design teachers encouraging/assigning/forcing(?)/cajoling students to work on specimens as a project. It would also certainly help matters to have several specimen templates of different varieties to help interested contributors get started.

Other topics included curated collections of open fonts. This is one way to get over the information-overload problem, and it has come up before; this time it was Brendan Howell who broached the subject. Similar ideas do seem to work for Adobe Typekit. Curated collections could be done at the Open Font Library, but it would require reworking the site. That might be possible, but it’s a bit unclear at the moment how interested Fabricatorz (who maintains the site) would be in revisiting the project in such a disruptive way — much less devoting a big chunk of “company time” to it. More discussion needed.

I also raised the question of reverse-engineering the binary, proprietary VFB font-source format (from older versions of FontLab). Quite a few OFL-licensed fonts have source available in VFB format only. Even if the design of the outlines is never touched again, this makes them hard to debug or rebuild. It’s worse for binary-only fonts, of course, but extending a VFB font is not particularly doable in free software.

The VFB format is deprecated, now replaced by VFC (which has not been widely adopted by OFL projects, so far). FontLab has released a freeware CLI utility that converts VFB smoothly to UFO, an open format. While VFB could perhaps be reverse-engineered by (say) the Document Liberation Project, that might be unnecessary work: batch converting all OFL-VFB fonts once and publishing the UFO source may suffice. It would be a static resource, but could be helpful to future contributors.

It probably goes without saying, but, just in case, the BoF attendees did find more than a few potential uses for shoehorning blockchain technology into fonts. Track versioning of font releases! Track the exact permission set of each font license sold! Solve all your problems! None of these may be practical, but don’t let that stand in the way of raking in heaps of VC money before the Bitcoin bubble crashes.

That about wraps it up. There is an Etherpad with notes from the session, but I’m not quite sure I’ve finished cleaning it up yet (for formatting and to reflect what was actually said versus what may have been added by later edits). I’ll append that once it’s done. For my part, I’m looking forward to GUADEC in a few weeks, where there will inevitably be even more excitement to report back on. Stay tuned.

Firefox Quantum: Fixing Ctrl W (or other key bindings)

When I first tried switching to Firefox Quantum, the regression that bothered me most was Ctrl-W, which I use everywhere as word erase (try it -- you'll get addicted, like I am). Ctrl-W deletes words in the URL bar; but if you type Ctrl-W in a text field on a website, like when editing a bug report or a "Contact" form, it closes the current tab, losing everything you've just typed. It's always worked in Firefox in the past; this is a new problem with Quantum, and after losing a page of typing for about the 20th time, I was ready to give up and find another browser.

A web search found plenty of people online asking about key bindings like Ctrl-W, but apparently since the deprecation of XUL and XBL extensions, Quantum no longer offers any way to change or even just to disable its built-in key bindings.

I wasted a few days chasing a solution inspired by this clever way of remapping keys only for certain windows using xdotool getactivewindow; I even went so far as to write a Python script that intercepts keystrokes, determines the application for the window where the key was typed, and remaps it if the application and keystroke match a list of keys to be remapped. So if Ctrl-W is typed in a Firefox window, Firefox will instead receive Alt-Backspace. (Why not just type Alt-Backspace, you ask? Because it's much harder to type, can't be typed from the home position, and isn't in the same place on every keyboard the way W is.)

But sadly, that approach didn't work because it turned out my window manager, Openbox, acts on programmatically-generated key bindings as well as ones that are actually typed. If I type a Ctrl-W and it's in Firefox, that's fine: my Python program sees it, generates an Alt-Backspace and everything is groovy. But if I type a Ctrl-W in any other application, the program doesn't need to change it, so it generates a Ctrl-W, which Openbox sees and calls the program again, and you have an infinite loop. I couldn't find any way around this. And admittedly, it's a horrible hack having a program intercept every keystroke. So I needed to fix Firefox somehow.

But after spending days searching for a way to customize Firefox's keys, to no avail, I came to the conclusion that the only way was to modify the source code and rebuild Firefox from source.

Ironically, one of the snags I hit in building it was that I'd named my key remapper "pykey.py", and it was still in my PYTHONPATH; it turns out the Firefox build also has a module called pykey.py and mine was interfering. But eventually I got the build working.

Firefox Key Bindings

I was lucky: building was the only hard part, because a very helpful person on Mozilla's #introduction IRC channel pointed me toward the solution, saving me hours of debugging. Edit browser/base/content/browser-sets.inc around line 240 and remove reserved="true" from key_closeWindow. It turned out I needed to remove reserved="true" from the adjacent key_close line as well.

Another file that's related, but more general, is nsXBLWindowKeyHandler.cpp around line 832; but I didn't need that since the simpler fix worked.

Transferring omni.ja -- or Not

In theory, since browser-sets.inc isn't compiled C++, it seems like you should be able to make this fix without building the whole source tree. In an actual Firefox release, browser-sets.inc is part of omni.ja, and indeed if you unpack omni.ja you'll see the key_closeWindow and key_close lines. So it seems like you ought to be able to regenerate omni.ja without rebuilding all the C++ code.

Unfortunately, in practice omni.ja is more complicated than that. You can unzip it and edit the files, but if you zip it back up, Firefox doesn't see it as valid. I guess that's why they renamed it .ja: long ago it used to be omni.jar and, like other .jar files, was a standard zip archive that you could edit. But the new .ja file isn't documented anywhere I could find, and all the web discussions I found on how to re-create it amounted to "it's complicated, you probably don't want to try".

And you'd think that I could take the omni.ja file from my desktop machine, where I built Firefox, and copy it to my laptop, replacing the omni.ja file from a released copy of Firefox. But no -- somehow, it isn't seen, and the old key bindings are still active. They must be duplicated somewhere else, and I haven't figured out where.

It sure would be nice to have a way to transfer an omni.ja. Building Firefox on my laptop takes nearly a full day (though hopefully rebuilding after pulling minor security updates won't be quite so bad). If anyone knows of a way, please let me know!

June 13, 2018

Krita 4.0.4 released!

Today the Krita team releases Krita 4.0.4, a bug fix release of Krita 4.0.0. This is the last bugfix release for Krita 4.0.

Here is the list of bug fixes in Krita 4.0.4:

  • OpenColorIO now works on macOS
  • Fix artefacts when painting with a pixel brush on a transparency mask (BUG:394438)
  • Fix a race condition when using generator layers
  • Fix a crash when editing a transform mask (BUG:395224)
  • Add preset memory to the Ten Brushes Script, to make switching back and forth between brush presets smoother.
  • Improve the performance of the stroke layer style (BUG:361130, BUG:390985)
  • Do not allow nesting of .kra files: using a .kra file with embedded file layers as a file layer would break on loading.
  • Keep the alpha channel when applying the threshold filter (BUG:394235)
  • Do not use the name of the bundle file as a tag automatically (BUG:394345)
  • Fix selecting colors when using the python palette docker script (BUG:394705)
  • Restore the last used colors on starting Krita, not when creating a new view (BUG:394816)
  • Allow creating a layer group if the currently selected node is a mask (BUG:394832)
  • Show the correct opacity in the segment gradient editor (BUG:394887)
  • Remove the obsolete shortcuts for the old text and artistic text tool (BUG:393508)
  • Allow setting the multibrush angle in fractions
  • Improve performance of the OpenGL canvas, especially on macOS
  • Fix painting of pass-through group layers in isolated mode (BUG:394437)
  • Improve performance of loading OpenEXR files (patch by Jeroen Hoolmans)
  • Autosaving will now happen even if Krita is kept very busy
  • Improve loading of the default language
  • Fix color picking when double-clicking (BUG:394396)
  • Fix inconsistent frame numbering when calling FFmpeg (BUG:389045)
  • Fix channel swizzling problem on macOS, where in 16- and 32-bit floating point channel depths red and blue would be swapped
  • Fix accepting touch events with recent Qt versions
  • Fix integration with the Breeze theme: Krita no longer tries to create widgets in threads (BUG:392190)
  • Fix the batch mode flag when loading images from Python
  • Load the system color profiles on Windows and macOS.
  • Fix a crash on macOS (BUG:394068)

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.0.4 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

June 12, 2018

Fingerprint reader support, the second coming

Fingerprint readers are more and more common on Windows laptops, and hardware makers would really like to not have to make a separate SKU without the fingerprint reader just for Linux, if that fingerprint reader is unsupported there.

The original makers of those fingerprint readers just need to send patches to the libfprint Bugzilla, I hear you say, and the problem's solved!

But it turns out it's pretty difficult to write those new drivers and patches without insight into how the internals of libfprint work, and into what all those internal, undocumented APIs mean.

Most of the drivers already present in libfprint are the result of reverse engineering, which means that none of them is a best-of-breed example of a driver; they are full of unknown values and magic numbers.

Let's try to fix all this!

Step 1: fail faster

When you're writing a driver, the last thing you want is to have to wait for your compilation to fail. We ported libfprint to meson and shaved off a significant amount of time from a successful compilation. We also reduced the number of places where new drivers need to be declared to be added to the compilation.

Step 2: make it clearer

While doxygen is nice because it requires very little scaffolding to generate API documentation, the output is also not up to the level we expect. We ported the documentation to gtk-doc, which has a more readable page layout, easy support for cross-references, and gives us more control over how introductory paragraphs are laid out. See the before and after for yourselves.

Step 3: fail elsewhere

You created your patch locally, tested it out, and it's ready to go! But you don't know about git-bz, and you ended up attaching a patch file which you uploaded. Except you uploaded the wrong patch. Or the patch with the right name but from the wrong directory. Or you know git-bz but used the wrong commit id and uploaded another unrelated patch. This is all a bit too much.

We migrated our bugs and repositories for both libfprint and fprintd to Freedesktop.org's GitLab. Merge Requests are built automatically, and discussions are easier to follow!

Step 4: show it to me

Now that we have spiffy documentation and unified bugs, patches, and sources under one roof, we need to modernise our website. We used GitLab's CI/CD integration to generate our website from sources, including creating API documentation and listing supported devices from git master, to reduce the need to search the sources for that information.

Step 5: simplify

This process has started, but isn't finished yet. We're slowly splitting up the internal API between "internal internal" (what the library uses to work internally) and "internal for drivers" which we eventually hope to document to make writing drivers easier. This is partially done, but will need a lot more work in the coming months.

TL;DR: We migrated libfprint to meson, gtk-doc, GitLab, added a CI, and are writing docs for driver authors, everything's on the website!

What’s Worse?

  1. Getting your glasses smushed against your face.
  2. Having your earbuds ripped out of your ear when the cord catches on a doorknob.

June 11, 2018

Interview with Zoe Badini

Could you tell us something about yourself?

Hi, I’m Zoe and I live in Italy. Aside from painting I love cooking and spending my time outdoors, preferably snorkeling in the sea.

Do you paint professionally, as a hobby artist, or both?

I’m just now starting to take my first steps professionally after many years of painting as a hobby.

What genre(s) do you work in?

I love to imagine worlds and stories for my paintings, so most of what I’ve done is related to fantasy illustration and some concept art. I also do portraiture occasionally.

Whose work inspires you most — who are your role models as an artist?

There are way too many to mention; I try to learn as much as I can from other artists, so there are a lot of people I look up to. There are a few I often watch on YouTube, Twitch, or other platforms, and I learned a lot from their videos: Clint Cearley, Marco Bucci, Suzanne Helmigh, David Revoy.

How and when did you get to try digital painting for the first time?

I was used to traditional drawing; then a few years ago I saw some beautiful digital illustrations and was curious to try my hand at it. There was this old graphics tablet at my parents’ house, so I tried it. What I made was atrocious, but it didn’t discourage me!

What makes you choose digital over traditional painting?

Working digitally I feel like a wizard; with a touch of my wand I have a huge array of tools at my disposal: different techniques, effects, trying out ideas and discarding them freely if they don’t work out. It’s also a big space saver!

How did you find out about Krita?

I had heard it mentioned a couple of times; then I posted a painting on reddit and a user recommended Krita to me. I was a bit uncertain because I was used to my setup, my brushes and so on… But the seed was planted: in the span of a few months I was using Krita exclusively, and I never went back.

What was your first impression?

I was understandably a bit lost and watched a few tutorials, but I found the program intuitive and easy to navigate.

What do you love about Krita?

Its accessibility and completeness: there’s everything I may need to paint at a professional level and it’s easy to find and figure out. Krita also comes with a very nice selection of brushes right out of the box.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Nothing really annoys me, as for improvements I wanted to say the text tool, but I know you’re working on it and it was already improved in 4.0.

What sets Krita apart from the other tools that you use?

As I said it’s professional and easy to use, I feel like it’s made for me. It’s also free, which is great for people just starting out.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

My favourite is always one of the latest I did, just because I get better over time. In this case it’s “Big Game Hunt”.

What techniques and brushes did you use in it?

Nothing particular in terms of technique, my brushes come from the Krita presets, my own experiments and a lot of bundles I gathered from the internet over time.

Where can people see more of your work?

Artstation: https://www.artstation.com/zoebadini
Twitter: https://twitter.com/ZoeBadini

Anything else you’d like to share?

I want to thank the Krita team for making great software, and I encourage people to try it: you won’t be disappointed. If you use it and like it, consider donating to help fund the project!

June 09, 2018

Building Firefox for ALSA (non PulseAudio) Sound

I did the work to build my own Firefox primarily to fix a couple of serious regressions that couldn't be fixed any other way. I'll start with the one that's probably more common (at least, there are many people complaining about it in many different web forums): the fact that Firefox won't play sound on Linux machines that don't use PulseAudio.

There's a bug with a long discussion of the problem, Bug 1345661 - PulseAudio requirement breaks Firefox on ALSA-only systems, and the discussion in the bug links to another discussion of the Firefox/PulseAudio problem. Some comments in those discussions suggest that some near-future version of Firefox may restore ALSA sound for non-Pulse systems; but most of those comments are six months old, yet it's still not fixed in the version Mozilla is distributing now.

In theory, ALSA sound is easy to enable. Build options in Firefox are controlled through a file called mozconfig. Create that file at the top level of your build directory, then add to it:

ac_add_options --enable-alsa
ac_add_options --disable-pulseaudio

You can see other options with ./configure --help

Of course, like everything else in the computer world, there were complications. When I typed mach build, I got:

Assertion failed in _parse_loader_output:
Traceback (most recent call last):
  File "/home/akkana/outsrc/gecko-dev/python/mozbuild/mozbuild/mozconfig.py", line 260, in read_mozconfig
    parsed = self._parse_loader_output(output)
  File "/home/akkana/outsrc/gecko-dev/python/mozbuild/mozbuild/mozconfig.py", line 375, in _parse_loader_output
    assert not in_variable
AssertionError
Error loading mozconfig: /home/akkana/outsrc/gecko-dev/mozconfig

Evaluation of your mozconfig produced unexpected output.  This could be
triggered by a command inside your mozconfig failing or producing some warnings
or error messages. Please change your mozconfig to not error and/or to catch
errors in executed commands.

mozconfig output:

------BEGIN_ENV_BEFORE_SOURCE
... followed by a many-page dump of all my environment variables, twice.

It turned out that was coming from line 449 of python/mozbuild/mozbuild/mozconfig.py:

    # Lines with a quote not ending in a quote are multi-line.
    if has_quote and not value.endswith("'"):
        in_variable = name
        current.append(value)
        continue
    else:
        value = value[:-1] if has_quote else value

I'm guessing this was added because some Mozilla developer sets a multi-line environment variable that has a quote in it but doesn't end with a quote. Or something. Anyway, some fairly specific case. I, on the other hand, have a different specific case: a short environment variable that includes one or more single quotes, and the test for their specific case breaks my build.

(In case you're curious why I have quotes in an environment variable: The prompt-setting code in my .zshrc includes a variable called PRIMES. In a login shell, this is set to the empty string, but in subshells, I add ' for each level of shell under the login shell. So my regular prompt might be (hostname)-, but if I run a subshell to test something, the prompt will be (hostname')-, a subshell inside that will be (hostname'')-, and so on. It's a reminder that I'm still in a subshell and need to exit when I'm done testing. In theory, I could do that with SHLVL, but SHLVL doesn't care about login shells, so my normal shells inside X are all SHLVL=2 while shells on a console or from an ssh are SHLVL=1, so if I used SHLVL I'd have to have some special case code to deal with that.

Also, of course I could use a character other than a single-quote. But in the thirty or so years I've used this, Firefox is the first program that's ever had a problem with it. And apparently I'm not the first one to have a problem with this: bug 1455065 was apparently someone else with the same problem. Maybe that will show up in the release branch eventually.)

Anyway, disabling that line fixed the problem:

    # Lines with a quote not ending in a quote are multi-line.
    if False and has_quote and not value.endswith("'"):

and after that, mach build succeeded, I built a new Firefox, and lo and behold! I can play sound in YouTube videos and on Xeno-Canto again, without needing an additional browser.

June 06, 2018

Updating Wacom Firmware In Linux

I’ve been working with Wacom engineers for a few months now, adding support for the custom update protocol used in various tablet devices they build. The new wacomhid plugin will be included in the soon-to-be-released fwupd 1.0.8 and will allow you to safely update the bluetooth, touch and main firmware of devices that support the HID protocol. Wacom is planning a new device that will ship with LVFS support out-of-the-box.

My retail device now has a 0.05″ SWI debugging header installed…

In other news, we now build both flatpak and snap versions of the standalone fwupdtool tool that can be used to update all kinds of hardware on distributions that won’t (or can’t) easily update the system version of fwupd. This lets you easily, for example, install the Logitech unifying security updates when running older versions of RHEL using flatpak and update the Dell Thunderbolt controller on Ubuntu 16.04 using snapd. Neither bundle installs the daemon or fwupdmgr by design, and both require running as root (and outside the sandbox) for obvious reasons. I’ll upload the flatpak to flathub when fwupd and all the deps have had stable releases. Until then, my flatpak bundle is available here.

Working with the Wacom engineers has been a pleasure, and the hardware is designed really well. The next graphics tablet you buy can now be 100% supported in Linux. More announcements soon.