July 25, 2016

I hate deals

One of my favourite tech-writers, Paul Miller from The Verge, has articulated something I've always felt, but have never been able to express well: I hate deals.

From Why I'm a Prime Day Grinch: I hate deals by Paul Miller:

Deals aren't about you. They're about improving profits for the store, and the businesses who distribute products through that store. Amazon's Prime Day isn't about giving back to the community. It's about unloading stale inventory and making a killing.

But what about when you decide you really do want / need something, and it just happens to be on sale? Well, lucky you. I guess I've grown too bitter and skeptical. I just assume automatically that if something's on sale AND I want to buy it, I must've messed up in my decision making process somewhere along the way.

I also hate parties and fun.

July 24, 2016

Preparation for the release of version 0.15.0

Greetings all!

We plan to release Stellarium 0.15.0 at the end of next week (31 July).

This is another major release, which has many changes in the code and a few new sky cultures. If you can assist with translating Stellarium into any of the 136 languages it supports, please go to Launchpad Translations and help us out: https://translations.launchpad.net/stellarium

Thank you!

July 19, 2016

GUADEC Flatpak contest

I will be presenting a lightning talk during this year's GUADEC, and running a contest related to what I will be presenting.

Contest

To enter the contest, you will need to create a Flatpak for a piece of software that hasn't been flatpak'ed up to now (application, runtime or extension), hosted in a public repository.

You will then need to send me an email with the location of that repository.

I will choose a winner amongst the participants on the eve of the lightning talks, based on criteria including, but not limited to, the difficulty of the packaging, the popularity of the software packaged, and its redistributability potential.

You can find plenty of examples (and a list of already packaged applications and runtimes) on this Wiki page.

Prize

A piece of hardware that you can use to replicate my presentation (or to replicate my attempts at a presentation, depending ;). You will need to be present during my presentation at GUADEC to claim your prize.

Good luck to one and all!

July 18, 2016

Breeze everywhere

During the first half of this year, I had the chance to work on icons and design for two big free-software projects.

First, I was hired to work on Mageia: I had to refresh the look for Mageia 6, which mostly meant making new icons for the Mageia Control Center and all the internal tools.

mageia-MCC

I proposed to replace the oxygen-like icons with some breeze-like icons.
This way it integrates much better with modern desktops, and of course it looks especially good with plasma.

mageia-MCC01

The result: around 1/3 of the icons are directly imported from breeze, 1/3 are modified versions, and 1/3 were created from scratch. I tried to follow the breeze guidelines as much as possible, but had to adapt some rules to the context.

mageia-MCC02

I also made a wallpaper to go with it, which will be in the extra wallpapers package and so is not used by default:

Mageia-Default-1920x1200
available in different sizes on this link.

And another funny wallpaper for people who are both mageia users and Pepper & Carrot fans:

Extra-Background-01-PepperAndCarrot-1080
available in different sizes on this link
(but I’m not sure yet if this one will be packaged at all…)

Note that we still have some visual issues with the applets.
It seems to be a problem with how gtkcreate_pixbuf is used. But more importantly, those applets don't even react to clicks in plasma (while this seems to work fine in all other desktops).
Since no one seems to have an easy fix or workaround yet, any ideas to help would be welcome…

Soon after I finished my work on Mageia, I was hired to work on fusiondirectory.
I had to create a new theme for the web interface, and again I proposed to base it on breeze, similar to what I did for Mageia but in yet another context. I also modified the CSS to look like the breeze-light interface theme. The resulting theme is called breezy, and has been used by default since the last release.

FD-Breezy01
FD-Breezy02

I had a lot of positive feedback on this new theme, people seem to really like it.

Before finishing, a special side note for the breeze team: thank you so much for all the great work! It has been a pleasure to start from it. Feel free to look at the mageia and fusiondirectory git repositories to see if there are icons that would be worth pushing upstream to the breeze icon set.

July 15, 2016

Fri 2016/Jul/15

  • Update from La Mapería

    La Mapería is working reasonably well for now. Here are some example maps for your perusal. All of these images link to a rather large PDF that you can print on a medium-format plotter — all of these are printable on a 61 cm wide roll of paper (or one that can put out US Arch D sheets).

    Valladolid
    Valladolid, Yucatán, México, 1:10,000

    Ciudad de México
    Centro de la Ciudad de México, 1:10,000

    Ajusco
    Ajusco y Sur de la Ciudad de México, 1:50,000

    Victoria, BC
    Victoria, British Columbia, Canada, 1:50,000

    Boston
    Boston, Massachusetts, USA, 1:10,000

    Walnut Creek
    Walnut Creek, California, USA, 1:50,000

    Butano State Park
    Butano State Park and Pescadero, California, USA, 1:20,000

    Provo
    Provo, Utah, USA, 1:50,000

    Nürnberg
    Nürnberg, Germany, 1:10,000

    Karlsruhe
    Karlsruhe, Germany, 1:10,000

    That last one, for Karlsruhe, is where GUADEC will happen this year, so enjoy!

    Next steps

    La Mapería exists right now as a Python program that downloads raster tiles from Mapbox Studio. This is great in that I don't have to worry about setting up an OpenStreetMap stack, and I can just worry about the map stylesheet itself (this is the important part!) and a little code to render the map's scale and frame with arc-minute markings.

    I would prefer to have a client-side renderer, though. Vector tiles are the hot new thing; in theory I should be able to download vector tiles and render them with Memphis, a Cairo-based renderer. I haven't investigated how to move my Mapbox Studio stylesheet to something that Memphis can use (... or that any other map renderer can use, for that matter).

    Also, right now making each map with La Mapería involves extracting geographical coordinates by hand, and rendering the map several times while tweaking it to obtain just the right area I want. I'd prefer a graphical version where one can just mouse around.
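
    In the meantime, the arithmetic for going from a center point, a scale, and a paper size to a geographic bounding box is simple enough. Here is a rough Python sketch (the function names and the 91 cm paper height are just illustrative, not La Mapería's actual code):

    import math

    # Ground distance covered by the paper at a given scale:
    # 61 cm of paper at 1:50,000 spans 61 * 50,000 cm = 30.5 km.
    def paper_extent_km(scale_denom, paper_cm):
        return paper_cm * scale_denom / 100.0 / 1000.0

    # Approximate lat/lon bounding box for a map centered at (lat, lon),
    # using ~111.32 km per degree of latitude and cos(lat) for longitude.
    def bbox(lat, lon, scale_denom, paper_w_cm=61.0, paper_h_cm=91.0):
        half_w = paper_extent_km(scale_denom, paper_w_cm) / 2.0
        half_h = paper_extent_km(scale_denom, paper_h_cm) / 2.0
        dlat = half_h / 111.32
        dlon = half_w / (111.32 * math.cos(math.radians(lat)))
        return (lon - dlon, lat - dlat, lon + dlon, lat + dlat)

    print(paper_extent_km(50000, 61.0))  # 30.5 km across a 61 cm roll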

    Finally, the map style itself needs improvements. It works reasonably well for 1:10,000 and 1:50,000 right now; 1:20,000 is a bit broken but easy to fix. It needs tweaks to map elements that are not very common, like tunnels. I want to make it work for 1:100,000 for full-day or multi-day bike trips, and possibly even smaller scales for motorists and just for general completeness.

    So far two of my friends in Mexico have provided pull requests for La Mapería — to fix my not-quite-Pythonic code, and to make the program easier to use the first time. Thanks to them! Contributions are appreciated.

July 13, 2016

Too much of a good thing

So the last couple of months, after our return from Italy, were nicely busy. At the day job, we were getting ready to create an image to send to the production facility for the QML-based embedded application we had been developing, and besides, there were four reorganizations in one month, ending with the teams being reshuffled in the last week before said image had to be ready. It was enough pressure that I decided to take last week off from the day job, just to decompress a bit and focus on Krita stuff that was heaping up.

Then, since April, Krita-wise, there was the Kickstarter, the kick-off for the artbook, the Krita 3.0 release... The 3.0 release doubled the flow of bugs, donations, comments, mails to the foundation, questions on irc, reddit, forum and everywhere else. (There's this guy who has sent me over fifty mails asking for Krita to be released for Windows XP, OSX 10.5 and Ubuntu 12.02, for example). And Google Summer of Code kicked off, with three students working on Krita.

And, of course, daily life didn't stop, though more and more non-work, non-krita things got postponed or cut out. There were moments when I really wanted to cancel our bi-weekly RPG session just to have another Monday evening free for Krita-related work.

I don't mind being busy, and I like being productive, and I especially like shipping: at too many day jobs we never shipped, which was extremely frustrating.

But then last Wednesday evening, a week ago, I suddenly felt queer after dinner, just before we'd start the RPG session. A pressing, heavy pain on my chest, painful upper arms, sweating, nausea, dizziness... I spent the next day in hospital getting checked for heart problems. The conclusion was that it wasn't a heart attack, just all the symptoms of one. No damage done, in any case, that the tests could figure out, and I am assured they are very accurate.

Still, I'm tired and slow and have a hard time focusing, so I didn't have time to prepare Krita 3.0.1. I didn't manage to finish the video-export refactoring (that will also make it possible to pass file export configurations to Krita on the command line). I also didn't get through all the new bugs, though I managed to fix over a dozen. The final bugs in the spriter export plugin are also waiting to be squashed. Setting up builds for the master branch for three operating systems and two architectures was another thing I had to postpone. And there are now so many donations waiting for a personal thank-you mail that I have decided to just stop sending them. One thing I couldn't postpone or drop was creating a new WBSO application for an income tax rebate for the hours spent on the research for Krita's scripting plugin.

I'm going forward with a somewhat reduced todo list, so, in short, if you're waiting for me to do something for you, be aware that you might have to wait a bit longer or that I won't be able to do it. If you want your Krita bug fixed with priority, don't tell me to fix it NOW, because any kind of pressure will be answered with a firm nolle prosequi.

July 12, 2016

HD Photo Slideshow with Blender



Because who doesn't love a challenge?

While I was out at Texas Linux Fest this past weekend I got to watch a fun presentation from the one and only Brian Beck. He walked through an introduction to Blender, including an overview of creating his great The Lady in the Roses image that was a part of the 2015 Libre Calendar project.

Coincidentally, during my trip home community member @Fotonut asked about software to create an HD slideshow with images. The first answer that jumped into my mind was to consider using Blender (a very close second was OpenShot because I had just spent some time talking with Jon Thomas about it).

Brian Beck Roses The Lady in the Roses by Brian Beck (CC BY-SA)

I figured that with this much Blender talk going around, the topic deserved at least a post to answer @Fotonut's question in greater detail. I know that many community members likely abuse Blender in various ways as well – so please let me know if I get something way off!

Enter Blender

The reason that Blender was the first thing that popped into many folks' minds when the question was posed is likely because it has been a go-to swiss-army knife of image and video creation for a long, long time. For some it was the only viable video editing application for heavy use (not that there weren't other projects out there as well). This is partly due to the fact that it integrates so much capability into a single project.

The part that we’re interested in for the context of Fotonut’s original question is the Video Sequence Editor (VSE). This is a very powerful (though often neglected) part of Blender that lets you arrange audio and video (and image!) assets along a timeline for rendering and some simple effects. Which is actually perfect for creating a simple HD slideshow of images, as we’ll see.

The Plan

Blender's interface is likely to take some getting used to for newcomers (right-click!), but we'll be focusing on a very small subset of the overall program—so hopefully nobody gets lost. The overall plan will be:

  1. Set up the environment for video sequence editing
  2. Add assets (images) and manipulate them on the timeline
  3. Add effects such as cross-fades between images
  4. Set up export options

There’s also an option of using a very helpful add-on for automatically resizing images to the correct size to maintain their aspect ratios. Luckily, Blender’s add-on system makes it trivially easy to set up.

Setup

On opening Blender for the first time we’re presented with the comforting view of the default cube in 3D space. Don’t get too cozy, though. We’re about to switch up to a different screen layout that’s already been created for us by default for Video Editing.

Blender default main window The main blender default view.

The developers were nice enough to include various default “Screen Layout” options for different tasks, and one of them happens to be for Video Editing. We can click on the screen layout option on the top menu bar and choose the one we want from the list (Video Editing):

Blender screen layout options Choosing a new Screen Layout option.

Our screen will then change to the new layout where the top left pane is the F-curve window, the top right is the video preview, the large center section is the sequencer, and the very bottom is a timeline. Blender will let you arrange, combine, and collapse all the various panes into just about any layout that you might want, including changing what each of them are showing. For our example we will mostly leave it all as-is with the exception of the F-curve pane, which we won’t be using and don’t need.

Blender video editing layout The Video Editing default layout.

What we can do now is to define what the resolution and framerate of our project should be. This is done in the Properties pane, which isn’t shown right now. So we will change the F-Curve pane into the Properties pane by clicking on the button shown in red above to change the panel type. We want to choose Properties from the options in the list:

Blender change pane to properties

Which will turn the old F-Curve pane into the Properties pane:

Blender properties

You’ll want to set the appropriate X and Y resolution for your intended output (don’t forget to set the scaling from the default 50% to 100% now as well) as well as your intended framerate. Common rates might be 23.976 (23.98), 25, 30, or even 60 frames per second. If your intended target is something like YouTube or an HD television you can probably safely use 30 or 60 (just remember that a higher frame rate means a longer render time!).

For our example I’m going to set the output resolution to 1920 × 1080 at 30fps.
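
If you prefer scripting, the same settings can be made from Blender's Python console. A minimal sketch (property names per the Blender 2.7x Python API):

import bpy

scene = bpy.context.scene

# Output resolution 1920x1080 at 100% (the default scale is 50%).
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.render.resolution_percentage = 100

# Frame rate: 30 frames per second.
scene.render.fps = 30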

One Extra Thing

Blender does need a little bit of help when it comes to using images on the sequence editor. It has a habit of scaling images to whatever the output resolution is set to (ignoring the original aspect ratios). This can be fixed by simply applying a transform to the images but normally requires us to manually compute and enter the correct scaling factors to get the images back to their original aspect ratios.

I did find a nice small add-on on this thread at blenderartists.org that binds some handy shortcuts onto the VSE for us. The author kgeogeo has the add-on hosted on Github, and you can download the Python file directly from here: VSE Transform Tool (you can Right-Click and save the link). Save the .py file somewhere easy to find.

To load the add-on manually we’re going to change the Properties panel to User Preferences:

Blender change to preferences

Click on the Add-ons tab to open that window and at the bottom of the panel is an option to “Install from File…”. Click that and navigate to the VSE_Transform_Tool.py file that you downloaded previously.

Blender add-ons

Once loaded, you’ll still need to Activate the plugin by clicking on the box:

Blender adding add-ons

That’s it! You’re now all set up to begin adding images and creating a slideshow. You can set the User Preferences pane back to Properties if you want to.

Adding Images

Let’s have a look at adding images onto the sequencer.

You can add images either by choosing Add → Image from the VSE menu, navigating to your images' location, and choosing them:

Blender VSE add image

Or by drag-and-dropping your images onto the sequencer timeline from Nautilus, Finder, Explorer, etc…

When you do, you’ll find that a strip now appears on the VSE window (purple in my case) that represents your image. You should also see a preview of your video in the top-right preview window (sorry for the subject).

Blender VSE add image

At this point we can use the handy add-on we installed previously by Right-Clicking on the purple strip to make sure it’s activated and then hitting the “T” key on the keyboard. This will automatically add a transform to the image that scales it to the correct aspect ratio for you. A small green Transform strip will appear above your purple image strip now:

Blender VSE add transform strip

Your image should now also be scaled to fit at the correct aspect ratio.

Adjusting the Image

If you scroll your mouse wheel in the VSE window, you will zoom in and out of the sequencer along the time axis (the x-axis in the sequencer window). You'll notice that time compresses or expands as you scroll the mouse wheel.

The middle-mouse button will let you pan around the sequencer.

The right-mouse button will select things. You can try this now by extending how long your image is displayed in the video. Right-Click on the small arrow on the end of the purple strip to activate it. A small number will appear above it indicating which frame it is currently on (26 in my example):

Blender VSE

With the right handle active you can now either press “G” on the keyboard and drag the mouse to re-position the end of the strip, or Right-Click and drag to do the same thing. The timeline in seconds is shown along the bottom of the window for reference. If we wanted to let the image be visible for 5 seconds total, we could drag the end to the 5+00 mark on the sequencer window.

Since I set the framerate to 30 frames per second, I can also drag the end to frame 150 (30fps * 5s = 150 frames).

Blender VSE five seconds

When you drag the image strip, the transform strip will automatically adjust to fit (so you don’t have to worry about it).

If you had selected the center of the image strip instead of the handle on one end and tried to move it, you would find that you can move the entire strip around instead of one end. This is how you can re-position image strips, which you may want to do when you add a second image to your sequencer.

Add a new image to your sequencer now following the same steps as above.

When I do, it adds a new strip back at the beginning of the timeline (basically where the current time is set):

Blender VSE second image

I want to move this new strip so that it overlaps my first image by about half a second (or 15 frames). Then I will pull the right handle to resize the display time to about 5 seconds also.

Click on the new strip (center, not the ends), and press the “G” key to move it. Drag it right until the left side overlaps the previous image strip by a little bit:

Blender VSE drag strip

When you click on the strip's right handle to modify its length, notice the window on the far right of the VSE. The Edit Strip window should also show the strip "Length" parameter in case you want to change it by manually inputting a value (like 150):

Blender VSE adjust strip
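
For the curious, the strip arithmetic above can also be scripted rather than done with the mouse. A small sketch from the Python console (the file paths are placeholders; API names per Blender 2.7x):

import bpy

scene = bpy.context.scene
scene.sequence_editor_create()  # no-op if a sequence editor already exists
seqs = scene.sequence_editor.sequences

fps = scene.render.fps    # 30 in this example
length = 5 * fps          # 5 seconds -> 150 frames
overlap = fps // 2        # half a second -> 15 frames

# Two image strips on separate channels so they can overlap.
s1 = seqs.new_image("img1", "/path/to/image1.jpg", channel=1, frame_start=1)
s1.frame_final_duration = length

s2 = seqs.new_image("img2", "/path/to/image2.jpg", channel=2,
                    frame_start=s1.frame_final_end - overlap)
s2.frame_final_duration = length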

I forgot to use the add-on to automatically fix the aspect ratio. With the strip selected I can press “T” at any time to invoke the add-on and fix the aspect ratio.

Adding a Transition Effect

With the two image strips slightly overlapping, we now want to define a simple cross fade between the two images as a transition effect. This is actually something already built into the Blender VSE for us, and is easy to add. We do need to be careful to select the right things to get the transition working correctly, though.

Once you’ve added a transform effect to a strip, you’ll need to make sure that subsequent operations use the transform strip as opposed to the original image strip.

For instance, to add a cross fade transition between these two images, click the first image strip transform (green), then Shift-Click on the second image transform strip (green). Now they are both selected, so add a Gamma Cross by using the Add menu in the VSE (Add → Effect Strip… → Gamma Cross):

Blender VSE add gamma cross

This will add a Gamma Cross effect as a new strip that is locked to the two images' overlap. It will do a cross-fade between the two images for the duration of the overlap. You can Left-Click now and scrub over the cross-fade strip to see it rendered in the preview window if you'd like:

Blender Gamma Cross
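
The same effect can be scripted with new_effect(). A small sketch continuing the one above (for brevity it targets the image strips directly, skipping the transform strips the add-on creates):

import bpy

seqs = bpy.context.scene.sequence_editor.sequences
s1, s2 = seqs["img1"], seqs["img2"]  # the strips from the earlier sketch

# A Gamma Cross on a channel above both strips; its duration is
# locked to the overlap between its two inputs.
cross = seqs.new_effect("fade", type='GAMMA_CROSS', channel=3,
                        frame_start=s2.frame_final_start,
                        frame_end=s1.frame_final_end,
                        seq1=s1, seq2=s2)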

At any time you can also use the hotkey “Alt-A” to view a render preview. This may run slow if your machine is not super-fast, but it should run enough to give you a general sense of what you’ll get.

If you want to modify the transition effect by changing its length, you can just increase the overlap between the strips as desired (using the original image strip — if you try to drag the transform strip you’ll find it locked to the original image strip and won’t move).

Repeat Repeat

You can basically follow these same steps for as many images as you’d like to include.

Exporting

To generate your output you’ll still need to change a couple of things to get what you want…

Render Length

You may notice on the VSE that there are vertical lines outside of which things will appear slightly grayed out. This is a visual indicator of the total start/end of the output. This is controlled via the Start and End frame settings on the timeline (bottom pane):

Blender VSE start and end

You’ll need to set the End value to match your last output frame from your video sequence. You can find this value by selecting the last strip in your sequence and pressing the “G” key: the start/end frame numbers of that last strip will be visible (you’ll want the last frame value, of course).

Blender VSE end frame Current last frame of my video is 284

In my example above, my anticipated last frame should be 284, but the last render frame is currently set to 250. I would need to update that End frame to match my video to get output as expected.

Render Format

Back on the Properties panel (assuming you set the top-left panel back to Properties earlier—if not do so now), if we scroll down a bit we should see a section dedicated to Output.

Blender Properties Output Options

You can change the various output options here to do frame-by-frame dumps or to encode everything into a video container of some sort. You can set the output directory to be something different if you don’t want it rendered into /tmp here.

For my example I will encode the video with H.264:

Blender output h264

By choosing this option, Blender will then expose a new section of the Properties panel for setting the Encoding options:

Blender output encoding options

I will often use the H264 preset and will enable the Lossless Output checkbox option. If I don’t have the disk space to spare I can also set different options to shrink the resulting filesize down further. The Bitrate option will have the largest effect on final file size and image quality.

When everything is ready (or you just want to test it out), you can render your output by scrolling back to the top of the Properties window and pressing the Animation button, or by hitting Ctrl-F12.

Blender Render Button
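
And once more for the script-minded, the render range and output settings can be set from Python as well. A sketch matching the example above (enum and property names per the Blender 2.7x API; the frame numbers are from my example):

import bpy

scene = bpy.context.scene

# The render range must cover the last strip (frame 284 in my example).
scene.frame_start = 1
scene.frame_end = 284

# Encode H.264, mirroring the settings described above.
scene.render.image_settings.file_format = 'H264'
scene.render.ffmpeg.format = 'MPEG4'
scene.render.ffmpeg.codec = 'H264'
scene.render.ffmpeg.use_lossless_output = True
scene.render.filepath = '/tmp/slideshow'

# Equivalent to pressing the Animation button (Ctrl-F12).
bpy.ops.render.render(animation=True)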

The Results

After adding portraits of all of the GIMP team from LGM London and adding gamma cross fade transitions, here are my results:


In Summary

This may seem overly complicated, but in reality much of what I covered here is the setup to get started and the settings for output. Once you’ve done this successfully it becomes pretty quick to use. One thing you can do is set up the environment the way you like it and then save the .blend file to use as a template for further work like this in the future. The next time you need to generate a slideshow you’ll have everything all ready to go and will only need to start adding images to the editor.

While looking for information on some VSE shortcuts I did run across a really interesting looking set of functions that I want to try out: the Blender Velvets. I’m going to go off and give it a good look when I get a chance as there’s quite a few interesting additions available.

For Blender users: did I miss anything?

July 10, 2016

How GNOME Software uses libflatpak

It seems people are interested in adding support for flatpaks into other software centers, and I thought it might be useful to explain how I did this in gnome-software. I'm lucky enough to have a plugin architecture to make all the flatpak code self-contained in one file, but that's certainly not a requirement.

Flatpak generates AppStream metadata when you build desktop applications. This means it’s possible to use appstream-glib and a few tricks to just load all the enabled remotes into an existing system store. This makes searching the new applications using the (optionally stemmed) token cache trivial. Once per day gnome-software checks the age of the AppStream cache, and if required downloads a new copy using flatpak_installation_update_appstream_sync(). As if by magic, appstream-glib notices the file modification/creation and updates the internal AsStore with the new applications.

When listing the installed applications, a simple call to flatpak_installation_list_installed_refs() returns us the list we need, on which we can easily set other flatpak-specific data like the runtime. This is matched against the AppStream data, which gives us a localized and beautiful application to display in the listview.

At this point we also call flatpak_installation_list_installed_refs_for_update() and then do flatpak_installation_update() with the NO_DEPLOY flag set. This just downloads the data we need, and can be cancelled without anything bad happening. When populating the updates panel I can just call flatpak_installation_list_installed_refs() again to find installed applications that have downloaded updates ready to apply without network access.
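
The same calls are available from other languages via GObject introspection. Here's a minimal Python sketch of the pattern described above (assuming the introspected Flatpak bindings are installed; this is just an illustration, not the gnome-software code):

import gi
gi.require_version('Flatpak', '1.0')
from gi.repository import Flatpak

# The default system-wide installation; new_user() gives the per-user one.
inst = Flatpak.Installation.new_system(None)

# Installed refs, which get matched against the AppStream data.
for ref in inst.list_installed_refs(None):
    print(ref.get_name(), ref.get_branch())

# Refs with updates available; gnome-software then downloads the data
# using flatpak_installation_update() with the NO_DEPLOY flag set.
for ref in inst.list_installed_refs_for_update(None):
    print('update ready to download:', ref.get_name())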

For the sources list I’m calling flatpak_installation_list_remotes() then ignoring any set as disabled or noenumerate. Most remotes have a name and title, and this makes the UI feature complete. When collecting information to show in the ui like the size we have the metadata already, but we also add the size of the runtime if it’s not already installed. This is the same idea as flatpak_installation_install(), where we also install any required runtime when installing the main application. There is a slight impedance mismatch between the flatpak many-installed-versions and the AppStream only-one-version model, but it seems to work well enough in the current code. Flatpak splits the deployment into a runtime containing common libraries that can be shared between apps (for instance, GNOME 3.20 or KDE5) and the application itself, so the software center always needs to install the runtime for the application to launch successfully. This is something that is not enforced by the CLI tool. Rather than installing everything for each app, we can also install other so-called extensions. These are typically non-essential like the various translations and any debug information, but are not strictly limited to those things. libflatpak automatically keeps the extensions up to date when updating, so gnome-software doesn’t have to do anything special at all.

Updating single applications is trivial with flatpak_installation_update() and launching applications is just as easy with flatpak_installation_launch(), although we only support launching the newest installed version of an application at the moment. Reading local bundles works well with flatpak_bundle_ref_new(), although we do have to load the gzipped AppStream metadata and the icon ourselves. Reading a .flatpakrepo file is slightly more work, but the data is in keyfile format and trivial to parse with GKeyFile.

Overall I've found libflatpak to be surprisingly easy to work with, requiring none of the kludges of all the different package-based systems I've worked with while developing PackageKit. Full marks to Alex et al.

July 08, 2016

Railway gauges

Episode 3 in a series “Things that are the way they are because of constraints that no longer apply” (or: why we don’t change processes we have invested in that don’t make sense any more)

The standard railway gauge (that is, the distance between train rails) for over half of the world's railways (including the USA and UK) is 4′ 8.5″, or 1.435m. A few other railway gauges are in common use, including, to my surprise, in Ireland, where the gauge is 5′ 3″, or 1.6m. If you're like me, you've wondered where these strange numbers came from.

Your first guess might be that, similar to the QWERTY keyboard, it comes from the inventor of the first train, or the first successful commercial railway, and that there was simply no good reason to change it once the investment had been made in that first venture, in the interests of interoperability. There is some truth to this, as railways were first used in coal mines to extract coal by horse-drawn carriages, and in the English coal mines of the North East, the “standard” gauge of 4′ 8″ was used. When George Stephenson started his seminal work on the development of the first commercial railway and the invention of the Stephenson Rocket steam locomotive, his experience from the English coal mines led him to adopt this gauge of 4′ 8″. To allow for some wiggle room so that the train and carriages could more easily go around bends, he increased the gauge to 4′ 8.5″.

But why was the standard gauge for horse-drawn carriages 4′ 8″? The first horse-drawn trams used the same gauge, and all of their tools were calibrated for that width. That’s because most wagons, built with the same tools, had that gauge at the time. But where did it come from in the first place? One popular theory, which I like even if Snopes says it’s probably false, is that the gauge was the standard width of horse-drawn carriages all the way back to Roman times. The 4′ 8.5″ gauge roughly matches the width required to comfortably accommodate a horse pulling a carriage, and has persisted well beyond the end of that constraint.


July 07, 2016

QWERTY keyboards

Episode 2 in a series “Things that are the way they are because of constraints that no longer apply” (or: why we don’t change processes we have invested in that don’t make sense any more)

American or English computer users are familiar with the QWERTY keyboard layout – which takes its name from the layout of letters on the first row of the traditional us and en_gb keyboard layouts. There are other common layouts in other countries, mostly tweaks to this format like AZERTY (in France) or QWERTZ (in Germany). There are also non-QWERTY related keyboard layouts like Dvorak, designed to allow increased typing speed, but which have never really gained widespread adoption. But where does the QWERTY layout come from?

The layout was first introduced with the Remington no. 1 typewriter (AKA the Scholes and Glidden typewriter) in 1874. The typewriter had a set of typebars which would strike the page with a single character, and these were arranged around a circular “basket”. The page was then moved laterally by one letter-width, ready for the next keystrike. The first attempt laid out the keys in alphabetical order, in two rows, like a piano keyboard. Unfortunately, this mechanical system had some issues – if two typebars situated close together were struck in rapid succession, they would occasionally jam the mechanism. To avoid this issue, common bigrams were distributed around the circle, to minimise the risk of jams.

The keyboard layout was directly related to the layout of typebars around the basket, since the keyboard was purely mechanical – pushing a key activated a lever system to swing out the correct typebar. As a result, the keyboard layout the company settled on, after much trial and error, had the familiar QWERTY layout we use today. At this point, too much is invested in everything from touch-type lessons and sunk costs of the population who have already learned to type for any other keyboard format to become viable, even though the original constraint which led to this format obviously no longer applies.

Edit: A commenter pointed me to an article on The Atlantic called “The Lies You’ve Been Told About the QWERTY Keyboard” which suggests an alternate theory. The layout changed to better serve the earliest users of the new typewriter, morse code transcribing telegraph operators. A fascinating lesson in listening to your early users, for sure, but also perhaps a warning on imposing early-user requirements on later adopters?

Cosmos Laundromat wins SIGGRAPH 2016 Computer Animation Festival Jury’s Choice Award

A few days ago we wrote about three Blender-made films being selected for the SIGGRAPH 43rd annual Computer Animation Festival. Today we are happy to announce that Cosmos Laundromat Open Movie (by Blender Institute) has won the Jury’s Choice Award!

Producer Ton Roosendaal says:

SIGGRAPH always brings the best content together for the Computer Animation Festival from the most talented artists and we are honoured to be acknowledged in this way for all our hard work and dedication.

et_16_winner

Get ready to see more and more pictures of Victor and Frank as Cosmos Laundromat takes over SIGGRAPH 2016!



Google Expeditions – Education in VR

By: Mike Pan, Lead Artist at Vida Systems

The concept of virtual reality has been around for many decades now. However, it is only in the last few years that technology has matured enough for VR to really take off. At Vida Systems, we have been at the forefront of this VR resurgence every step of the way.

vida_16_Vida

Vida Systems had the amazing opportunity to work with Google on their Expeditions project. Google Expeditions is a VR learning experience designed for classrooms. With a simple smartphone and a Cardboard viewer, students can journey to far-away places and feel completely immersed in the environment. This level of immersion not only delights the students, it actually helps learning as they are able to experience places in a much more tangible way.

vida_16_Landscape

To fulfill the challenge of creating stunning visuals, we rely on Blender and the Cycles rendering engine. First, each topic is carefully researched. Then the 3D artists work to create a scene based on the layout set by the designer. With Cycles, it is incredibly easy to create photorealistic artwork in a short period of time. Lighting, shading and effects can all be done with realtime preview.

vida_16_Blender

With the built-in VR rendering features including stereo camera support and equirectangular panoramic camera, we can render the entire scene with one click and deliver the image without stitching or resampling, saving us valuable time.

vida_16_Historical

For VR, the image needs to be noise-free, in stereo, and high resolution. Combining all 3 factors means our rendering time for a 4K by 4K frame is 8 times longer than a traditional 1080p frame. With two consumer-grade GPUs working together (980Ti and 780), Cycles was able to crunch through most of our scenes in under 3 hours per frame.

Working in VR has some limitations. The layout has to follow real-world scales, otherwise it would look odd in 3D. It is also more demanding to create the scene, as everything has to look good from every angle. We also spent a lot of time on the details, since the images had to stand up to scrutiny. Any imperfection would be readily visible due to the level of immersion offered by VR.

vida_16_Zoom

For this project, we tackled a huge variety of topics, ranging from geography to anatomy. This was only possible thanks to the four spectacular artists we have: Felipe Torrents, Jonathan Sousa de Jesus, Diego Gangl and Greg Zaal.

vida_16_Bones

vida_16_Others

Our work can be seen in the Google Expeditions app available for Android.

On blender.org we are always looking for inspiring user stories! Share yours with foundation@blender.org.

Follow us on Twitter or Facebook to get the latest user stories!

July 06, 2016

GIMP at Texas LinuxFest

I'll be at Texas LinuxFest in Austin, Texas this weekend. Friday, July 8 is the big day for open source imaging: first a morning Photo Walk led by Pat David, from 9-11, after which Pat, an active GIMP contributor and the driving force behind the PIXLS.US website and discussion forums, gives a talk on "Open Source Photography Tools". Then after lunch I'll give a GIMP tutorial. We may also have a Graphics Hackathon/Q&A session to discuss all the open-source graphics tools in the last slot of the day, but that part is still tentative. I'm hoping we can get some good discussion especially among the people who go on the photo walk.

Lots of interesting looking talks on Saturday, too. I've never been to Texas LinuxFest before: it's a short conference, just two days, but they're packing a lot into those two days and it looks like it'll be a lot of fun.

July 05, 2016

Flatpak and GNOME Software

I wanted to write a little about how Flatpak apps are treated differently to packages in GNOME Software. We’ve now got two plugins in master, one called flatpak-user and another called flatpak-system. They both share 99% of the same code, only differing in how they are initialised. As you might expect, -user does per-user installation and updating, and the latter does it per-system for all users. Per-user applications that are specific to just a single user account are an amazingly useful concept, as most developers found when using tools like jhbuild. At the moment we default to installing software for all users, but there is actually an org.gnome.software.install-bundles-system-wide dconf key that can be used to reverse this on specific systems.

We go to great lengths to interoperate with the flatpak command line tool, so if you install the nightly GTK3 build of GIMP per-user you can install the normal version system-wide and they both show in the installed and updates panel without conflicting. We’ve also got file notifications set up so GNOME Software shows the correct application state straight away if you add a remote or install a flatpak app on the command line. At the moment we show both packages and flatpaks in the search results, but when we suggest apps on the overview page we automatically prefer the flatpak version if both are available. In Ubuntu, snappy results are sorted above package results unconditionally, but I don’t know if this is a good thing to do for flatpaks upstream, comments welcome. I’m sure whatever defaults I choose will mortally offend someone.
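
For example, reading that key from Python via GSettings looks like this (a trivial sketch, assuming the org.gnome.software schema is installed on the system):

from gi.repository import Gio

# Read the dconf key mentioned above; gnome-software ships the schema.
settings = Gio.Settings.new('org.gnome.software')
print(settings.get_boolean('install-bundles-system-wide'))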

Screenshot from 2016-07-05 14-45-35

GNOME Software also supports single-file flatpak bundles like gimp.flatpak – just double click and you’re good to install. These files are somewhat like a package in that all the required files are included and you can install without internet access. These bundles can also install a remote (i.e. a reference to a flatpak repository) too, which allows them to be kept up to date. Such per-application remotes are only used for the specific application and not the others potentially in the same tree (for the curious, this is called a “noenumerate” remote). We also support the more rarely seen dummy.flatpakrepo files too; these allow a user to install a remote which could contain a number of applications and makes it very easy to set up an add-on remote that allows you to browse a different set of apps than shipped, for instance the Endless-specific apps. Each of these files contains all the metadata we need in AppStream format, with translations, icons and all the things you expect from a modern software center. It’s a shame snappy decided not to use AppStream and AppData for application metadata, as this kind of extra data really makes the UI beautiful.
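
Parsing the keyfile really is trivial; here is a small Python sketch using GLib’s KeyFile through introspection (the file name is hypothetical; the “Flatpak Repo” group with Url and Title keys follows the flatpakrepo format):

from gi.repository import GLib

# Load a hypothetical example.flatpakrepo and read two of its keys.
kf = GLib.KeyFile.new()
kf.load_from_file('example.flatpakrepo', GLib.KeyFileFlags.NONE)
print(kf.get_string('Flatpak Repo', 'Title'))
print(kf.get_string('Flatpak Repo', 'Url'))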

Screenshot from 2016-07-05 14-54-18

With the latest version of flatpak we also do a much better job of installing the additional extensions the application needs, for instance locales or debug data. Sharing the same code between the upstream command line tool and gnome-software means we always agree on what needs installing and updating. Just like the CLI, gnome-software can update flatpaks safely live (even when the application is running), although we do a little bit extra compared to the CLI and download the data we need to do the update when the session is idle and on suitable unmetered network access. This means you can typically just click the ‘Update’ button in the updates panel for a near-instant live-update. This is what people have wanted for years, and I’ve told each and every bug-report that live updates using packages only works 99.99% of the time, exploding in a huge fireball 0.01% of the time. Once all desktop apps are packaged as flatpaks we will only need to reboot for atomic offline updates for core platform updates like a new glibc or the kernel. That future is very nearly now.

Screenshot from 2016-07-05 14-54-59

darktable 2.0.5 released

we're proud to announce the fifth bugfix release for the 2.0 series of darktable, 2.0.5!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.0.5.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.0.5.tar.xz
898b71b94e7ef540eb1c87c829daadc8d8d025b1705d4a9471b1b9ed91b90a02 darktable-2.0.5.tar.xz
$ sha256sum darktable-2.0.5.dmg
e0ae0e5e19771810a80d6851e022ad5e51fb7da75dcbb98d96ab5120b38955fd  darktable-2.0.5.dmg

and the changelog as compared to 2.0.4 can be found below.

New Features

  • Add geolocation to watermark variables

Bugfixes

  • Mac: bugfix + build fix
  • Lua: fixed dt.collection not working
  • Fix softproofing with some internal profiles
  • Fix non-working libsecret pwstorage backend
  • Fixed a few issues within (rudimentary) lightroom import
  • Some fixes related to handling of duplicates and/or tags

Base Support

  • Canon EOS 80D (no mRAW/sRAW support!)

White Balance Presets

  • Canon EOS 80D

Noise Profiles

  • Canon EOS 80D

Translations Updates

  • Danish
  • German
  • Slovak

July 04, 2016

Texas Linux Fest 2016



Everything's Bigger in Texas!

While in London this past April I got a chance to hang out a bit with LWN.net editor and fellow countryman Nathan Willis. (It sounds like the setup for a bad joke: “An Alabamian and a Texan meet in a London pub…”). Which was awesome because even though we were both at LGM2014, we never got a chance to sit down and chat.

So it was super-exciting for me to hear from Nate about possibly doing a photowalk and Free Software photo workshop at the 2016 Texas Linux Fest, and as soon as I cleared it with my boss, I agreed!

Dot at LGM 2014 My Boss

So… mosey on down to Austin, Texas on July 8-9 for Texas Linux Fest and join Akkana Peck and myself for a photowalk first thing in the morning on Friday (July 8), to be immediately followed by workshops from both of us. I’ll be talking about Free Software photography workflows and projects, and Akkana will be leading a GIMP workshop.

This is part of a larger “Open Graphics” track on the entire first day that also includes Ted Gould creating technical diagrams using Inkscape, Brian Beck doing a Blender tutorial, and Jonathon Thomas showing off OpenShot 2.0. You can find the full schedule on their website.

I hope to see some of you there!

July 03, 2016

Midsummer Nature Notes from Traveling

A few unusual nature observations noticed over the last few weeks ...

First, on a trip to Washington DC a week ago (my first time there). For me, the big highlight of the trip was my first view of fireflies -- bright green ones, lighting once or twice then flying away, congregating over every park, lawn or patch of damp grass. What fun!

Predatory grackle

[grackle]

But the unusual observation was around mid-day, on the lawn near the Lincoln Memorial. A grackle caught my attention as it flashed by me -- a male common grackle, I think (at least, it was glossy black, relatively small and with only a moderately long tail).

It turned out it was chasing a sparrow, which was dodging and trying to evade, but unsuccessfully. The grackle made contact, and the sparrow faltered, started to flutter to the ground. But the sparrow recovered and took off in another direction, the grackle still hot on its tail. The grackle made contact again, and again the sparrow recovered and kept flying. But the third hit was harder than the other two, and the sparrow went down maybe fifteen or twenty feet away from me, with the grackle on top of it.

The grackle mantled over its prey like a hawk and looked like it was ready to begin eating. I still couldn't quite believe what I'd seen, so I stepped out toward the spot, figuring I'd scare the grackle away and I'd see if the sparrow was really dead. But the grackle had its eye on me, and before I'd taken three steps, it picked up the sparrow in its bill and flew off with it.

I never knew grackles were predatory, much less capable of killing other birds on the wing and flying off with them. But a web search on grackles killing birds got quite a few hits about grackles killing and eating house sparrows, so apparently it's not uncommon.

Daytime swarm of nighthawks

Then, on a road trip to visit friends in Colorado, we had to drive carefully past the eastern slope of San Antonio Mountain as a flock of birds wheeled and dove across the road. From a distance it looked like a flock of swallows, but as we got closer we realized they were far larger. They turned out to be nighthawks -- at least fifty of them, probably considerably more. I've heard of flocks of nighthawks swarming around the bugs attracted to parking lot streetlights. And I've seen a single nighthawk, or occasionally two, hawking in the evenings from my window at home. But I've never seen a flock of nighthawks during the day like this. An amazing sight as they swoop past, just feet from the car's windshield.

Flying ants

[Flying ant courtesy of Jen Macke]

Finally, the flying ants. The stuff of a bad science fiction movie! Well, maybe if the ants were 100 times larger. For now, just an interesting view of the natural world.

Just a few days ago, Jennifer Macke wrote a fascinating article in the PEEC Blog, "Ants Take Wing!" letting everyone know that this is the time of year for ants to grow wings and fly. (Jen also showed me some winged lawn ants in the PEEC ant colony when I was there the day before the article came out.) Both males and females grow wings; they mate in the air, and then the newly impregnated females fly off, find a location, shed their wings (leaving a wing scar you can see if you have a strong enough magnifying glass) and become the queen of a new ant colony.

And yesterday morning, as Dave and I looked out the window, we saw something swarming right below the garden. I grabbed a magnifying lens and rushed out to take a look at the ones emerging from the ground, and sure enough, they were ants. I saw only black ants. Our native harvester ants -- which I know to be common in our yard, since I've seen the telltale anthills surrounded by a large bare area where they clear out all vegetation -- have sexes of different colors (at least when they're flying): females are red, males are black. These flying ants were about the size of harvester ants but all the ants I saw were black. I retreated to the house and watched the flights with binoculars, hoping to see mating, but all the flyers I saw seemed intent on dispersing. Either these were not harvester ants, or the females come out at a different time from the males. Alas, we had an appointment and had to leave so I wasn't able to monitor them to check for red ants. But in a few days I'll be watching for ants that have lost their wings ... and if I find any, I'll try to identify queens.

June 29, 2016

Color Manipulation with the Colour Checker LUT Module



hanatos tinkering in darktable again...

I was lucky to get to spend some time in London with the darktable crew. Being the wonderful nerds they are, they were constantly working on something while we were there. One of the things that Johannes was working on was the colour checker module for darktable.

Having recently acquired a Fuji camera, he was working on matching color styles from the built-in rendering on the camera. Here he presents some of the results of what he was working on.

This was originally published on the darktable blog, and is being republished here with permission. —Pat


motivation

for raw photography there exist great presets for nice colour rendition:

unfortunately these are eat-it-or-die canned styles or icc lut profiles. you have to apply them and be happy or tweak them with other tools. but can we extract meaning from these presets? can we have understandable and tweakable styles like these?

in a first attempt, i used a non-linear optimiser to control the parameters of the modules in darktable’s processing pipeline and try to match the output of such styles. while this worked reasonably well for some of pat’s film luts, it failed completely on canon’s picture styles. it was very hard to reproduce generic colour-mapping styles in darktable without parametric blending.

that is, we require a generic colour to colour mapping function. this should be equally powerful as colour look up tables, but enable us to inspect it and change small aspects of it (for instance only the way blue tones are treated).

overview

in git master, there is a new module to implement generic colour mappings: the colour checker lut module (lut: look up table). the following will be a description how it works internally, how you can use it, and what this is good for.

in short, it is a colour lut that remains understandable and editable. that is, it is not a black-box look up table, but you get to see what it actually does and change the bits that you don’t like about it.

the main use cases are precise control over source colour to target colour mapping, as well as matching in-camera styles that process raws to jpg in a certain way to achieve a particular look. an example of this are the fuji film emulation modes. to this end, we will fit a colour checker lut to achieve their colour rendition, as well as a tone curve to achieve the tonal contrast.

target

to create the colour lut, it is currently necessary to take a picture of an it8 target (well, technically we support any similar target, but didn’t try them yet so i won’t really comment on it). this gives us a raw picture with colour values for a few colour patches, as well as an in-camera jpg reference (in the raw thumbnail..), and measured reference values (what we know it should look like).

to map all the other colours (that fell in between the patches on the chart) to meaningful output colours, too, we will need to interpolate this measured mapping.

theory

we want to express a smooth mapping from input colours \(\mathbf{s}\) to target colours \(\mathbf{t}\), defined by a couple of sample points (which will in our case be the 288 patches of an it8 chart).

the following is a quick summary of what we implemented and much better described in JP’s siggraph course [0].

radial basis functions

radial basis functions are a means of interpolating between sample points via

$$f(x) = \sum_i c_i\cdot\phi(| x - s_i|),$$

with some appropriate kernel \(\phi(r)\) (we’ll get to that later) and a set of coefficients \(c_i\) chosen to make the mapping \(f(x)\) behave like we want it at and in between the source colour positions \(s_i\). now to make sure the function actually passes through the target colours, i.e. \(f(s_i) = t_i\), we need to solve a linear system. because we want the function to take on a simple form for simple problems, we also add a polynomial part to it. this makes sure that black and white profiles turn out to be black and white and don’t oscillate around zero saturation colours wildly. the system is

$$ \left(\begin{array}{cc}A &P\\P^t & 0\end{array}\right) \cdot \left(\begin{array}{c}\mathbf{c}\\\mathbf{d}\end{array}\right) = \left(\begin{array}{c}\mathbf{t}\\0\end{array}\right)$$

where

$$ A=\left(\begin{array}{ccc} \phi(r_{00})& \phi(r_{10})& \cdots \\ \phi(r_{01})& \phi(r_{11})& \cdots \\ \phi(r_{02})& \phi(r_{12})& \cdots \\ \cdots & & \cdots \end{array}\right),$$

and \(r_{ij} = | s_i - s_j |\) is the distance between source colours \(s_i\) and \(s_j\) (CIE 76 \(\Delta\)E between two Lab colours \(s\) and \(t\): \(\sqrt{(L_s - L_t)^2 + (a_s - a_t)^2 + (b_s - b_t)^2}\) ), in our case

$$P=\left(\begin{array}{cccc} L_{s_0}& a_{s_0}& b_{s_0}& 1\\ L_{s_1}& a_{s_1}& b_{s_1}& 1\\ \cdots \end{array}\right)$$

is the polynomial part, and \(\mathbf{d}\) are the coefficients to the polynomial part. these are here so we can for instance easily reproduce \(t = s\) by setting \(\mathbf{d} = (1, 1, 1, 0)\) in the respective row. we will need to solve this system for the coefficients \(\mathbf{c}=(c_0,c_1,\cdots)^t\) and \(\mathbf{d}\).

many options will do the trick and solve the system here. we use singular value decomposition in our implementation. one advantage is that it is robust against singular matrices as input (accidentally map the same source colour to different target colours for instance).

thin plate splines

we didn’t yet define the radial basis function kernel. it turns out so-called thin plate splines have very good behaviour in terms of low oscillation/low curvature of the resulting function. the associated kernel is

$$\phi(r) = r^2 \log r.$$

note that there is a similar functionality in gimp as a gegl colour mapping operation (which i believe is using a shepard-interpolation-like scheme).
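
for illustration, here is a small numpy sketch of the fit described above (not darktable’s actual implementation, which is in c; plain euclidean distance in Lab, i.e. CIE 76 ΔE, is used for the radii):

import numpy as np

def phi(r):
    # thin plate spline kernel phi(r) = r^2 log r, with phi(0) = 0
    return np.where(r > 0.0, r * r * np.log(np.maximum(r, 1e-12)), 0.0)

def fit(source, target):
    # source, target: (n, 3) arrays of Lab colours s_i and t_i
    n = source.shape[0]
    r = np.linalg.norm(source[:, None, :] - source[None, :, :], axis=-1)
    A = phi(r)                                    # (n, n) rbf part
    P = np.hstack([source, np.ones((n, 1))])      # (n, 4) polynomial part
    M = np.block([[A, P], [P.T, np.zeros((4, 4))]])
    rhs = np.vstack([target, np.zeros((4, 3))])
    # lstsq solves via an svd, so singular input stays harmless
    sol = np.linalg.lstsq(M, rhs, rcond=None)[0]
    return sol[:n], sol[n:]                       # coefficients c and d

def evaluate(c, d, source, x):
    # f(x) = sum_i c_i phi(|x - s_i|) + polynomial part
    r = np.linalg.norm(x[:, None, :] - source[None, :, :], axis=-1)
    return phi(r) @ c + np.hstack([x, np.ones((len(x), 1))]) @ d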

creating a sparse solution

we will feed this system with 288 patches of an it8 colour chart. that means, with the added four polynomial coefficients, we have a total of 292 source/target colour pairs to manage here. apart from performance issues when executing the interpolation, we didn’t want that to show up in the gui like this, so we were looking to reduce this number without introducing large error.

indeed this is possible, and literature provides a nice algorithm to do so, which is called orthogonal matching pursuit [1].

this algorithm will select the most important handful of coefficients \(\in \mathbf{c},\mathbf{d}\), to keep the overall error low. in practice we run it up to a predefined number of patches (\(24=6\times 4\) or \(49=7\times 7\)), to make best use of gui real estate.

the colour checker lut module

clut-iop

gui elements

when you select the module in darkroom mode, it should look something like the image above (configurations with more than 24 patches are shown in a 7\(\times\)7 grid instead). by default, it will load the 24 patches of a colour checker classic and initialise the mapping to identity (no change to the image).

  • the grid shows a list of coloured patches. the colours of the patches are the source points \(\mathbf{s}\).
  • the target colour \(t_i\) of the selected patch \(i\) is shown as offset controlled by sliders in the ui under the grid of patches.
  • an outline is drawn around patches that have been altered, i.e. the source and target colours differ.
  • the selected patch is marked with a white square, and the number shows in the combo box below.

interaction

to interact with the colour mapping, you can change both source and target colours. the main use case is to change the target colours however, and start with an appropriate palette (see the presets menu, or download a style somewhere).

  • you can change lightness (L), green-red (a), blue-yellow (b), or saturation (C) of the target colour via sliders.
  • select a patch by left clicking on it, or using the combo box, or using the colour picker
  • to change source colour, select a new colour from your image by using the colour picker, and shift-left-click on the patch you want to replace.
  • to reset a patch, double-click it.
  • right-click a patch to delete it.
  • shift-left-click on empty space to add a new patch (with the currently picked colour as source colour).

example use cases

example 1: dodging and burning with the skin tones preset

to process the following image i took of pat in the overground, i started with the skin tones preset in the colour checker module (right click on nothing in the gui or click on the icon with the three horizontal lines in the header and select the preset).

then, i used the colour picker (the little icon to the right of the patch# combo box) to select two skin tones: very bright highlights and dark shadow tones. via the lightness (L) slider, i dragged the brightness of the former down a bit and brightened the latter up a bit. this is the result:

original vs. dialed down contrast in skin tones

example 2: skin tones and eyes

in this image, i started with the fuji classic chrome-like style (see below for a download link), to achieve the subdued look in the skin tones. then, i picked the iris colour and saturated this tone via the saturation slider.

as a side note, the flash didn't fire in this image (iso 800), so i needed to stop it up by 2.5ev, and the rest is all natural lighting.

original vs. +2.5ev classic chrome with saturated eyes

use darktable-chart to create a style

as a starting point, i matched a colour checker lut interpolation function to the in-camera processing of fuji cameras. these styles are named after old film stocks and generally do a good job of creating pleasant colours. this was done using the darktable-chart utility, by matching raw colours to the jpg output (both in Lab space in the darktable pipeline).

here is the link to the fuji styles, and how to use them. i should be doing pat’s film emulation presets with this, too, and maybe styles from other cameras (canon picture styles?). darktable-chart will output a dtstyle file, with the mapping split into tone curve and colour checker module. this allows us to tweak the contrast (tone curve) in isolation from the colours (lut module).

these styles were created with the X100T model, and reportedly they work so-so with different camera models. the idea is to create a Lab-space mapping which is well configured for all cameras. but apparently there may be sufficient differences between the output of different cameras after applying their colour matrices (after all, these matrices are just an approximation of the real camera-to-XYZ mapping).

so if you’re really after maximum precision, you may have to create the styles yourself for your camera model. here’s how:

step-by-step tutorial to match the in-camera jpg engine

note that this is essentially similar to pascal’s colormatch script, but will result in an editable style for darktable instead of a fixed icc lut.

  • you need an it8 chart (sorry; we could maybe lift that requirement, similar to what we do for basecurve fitting)

  • shoot the chart with your camera:

    • shoot raw + jpg
    • avoid glare, shadows, and extreme angles, and potentially avoid the rims of your image altogether
    • shoot a lot of exposures, try to match L=92 for G00 (or look that up in your it8 description)
  • develop the images in darktable:

    • lens and vignetting correction needed on both or on neither of raw + jpg
    • (i calibrated for vignetting, see lensfun)
    • output colour space to Lab (set the secret option in darktablerc: allow_lab_output=true)
    • standard input matrix and camera white balance for the raw, srgb for jpg.
    • no gamut clipping, no basecurve, no anything else.
    • maybe do perspective correction and crop the chart
    • export as float pfm
  • darktable-chart

    • load the pfm for the raw image and the jpg target in the second tab
    • drag the corners to make the mask match the patches in the image
    • maybe adjust the security margin using the slider in the top right, to avoid stray colours being blurred into the patch readout
    • you need to select the gray ramp in the combo box (not auto-detected)
    • export csv
(screenshots: cropping the chart in darktable-chart)

edit the csv in a text editor and manually add two fixed fake patches HDR00 and HDR01:

name;fuji classic chrome-like
description;fuji classic chrome-like colorchecker
num_gray;24
patch;L_source;a_source;b_source;L_reference;a_reference;b_reference
A01;22.22;13.18;0.61;21.65;17.48;3.62
A02;23.00;24.16;4.18;26.92;32.39;11.96
...
HDR00;100;0;0;100;0;0
HDR01;200;0;0;200;0;0
...

this is to make sure we can process high-dynamic range images and not destroy the bright spots with the lut. this is needed since the it8 does not deliver any information out of the reflective gamut and for very bright input. to fix wide gamut input, it may be needed to enable gamut clipping in the input colour profile module when applying the resulting style to an image with highly saturated colours. darktable-chart does that automatically in the style it writes.

  • fix up style description in csv if you want
  • run darktable-chart --csv
  • outputs a .dtstyle with everything properly switched off, and two modules on: colour checker + tonecurve in Lab

fitting error

when processing the list of colour pairs into a set of coefficients for the thin plate spline, the program will output the approximation error, indicated by average and maximum CIE 76 \(\Delta\)E for the input patches (the it8 in the examples here). of course we don't know anything about colours which aren't represented in the patch set. the hope would be that the sampling is dense enough for all intents and purposes (but nothing is holding us back from using a target with even more patches).
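
as an aside, with the tps_fit sketch from earlier this kind of report is easy to reproduce, assuming src_lab/tgt_lab hold the it8 patch pairs (tps_eval is our own helper, not a darktable tool; note that the full, non-sparse fit interpolates the patches almost exactly, so nonzero errors like the ones below come from the sparse omp solution):

import numpy as np

def tps_eval(c, d, src_lab, x):
    # evaluate the fitted spline from tps_fit at the (m, 3) Lab colours x
    r = np.linalg.norm(x[:, None, :] - src_lab[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(r > 0, r * r * np.log(r), 0.0)
    P = np.hstack([x, np.ones((len(x), 1))])
    return K @ c + P @ d

c, d = tps_fit(src_lab, tgt_lab)  # the 288 it8 source/target pairs
err = np.linalg.norm(tps_eval(c, d, src_lab, src_lab) - tgt_lab, axis=1)
print("avg DE %.5f max DE %.5f" % (err.mean(), err.max()))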

for the fuji styles, these errors are typically in the range of mean \(\Delta E\approx 2\) and max \(\Delta E \approx 10\) for 24 patches and a bit less for 49. unfortunately the error does not decrease very fast in the number of patches (and will of course drop to zero when using all the patches of the input chart).

provia-24:          rank 28/24  avg DE 2.42189   max DE 7.57084
provia-49:          rank 53/49  avg DE 1.44376   max DE 5.39751

astia-24:           rank 27/24  avg DE 2.12006   max DE 10.0213
astia-49:           rank 52/49  avg DE 1.34278   max DE 7.05165

velvia-24:          rank 27/24  avg DE 2.87005   max DE 16.7967
velvia-49:          rank 53/49  avg DE 1.62934   max DE 6.84697

classic chrome-24:  rank 28/24  avg DE 1.99688   max DE 8.76036
classic chrome-49:  rank 53/49  avg DE 1.13703   max DE 6.3298

mono-24:            rank 27/24  avg DE 0.547846  max DE 3.42563
mono-49:            rank 52/49  avg DE 0.339011  max DE 2.08548

future work

it is possible to match the reference values of the it8 instead of a reference jpg output, to calibrate the camera more precisely than the colour matrix would.

  • there is a button for this in the darktable-chart tool
  • needs careful shooting, to match brightness of reference value closely.
  • at this point it’s not clear to me how white balance should best be handled here.
  • need reference reflectances of the it8 (wolf faust ships some for a few illuminants).

another next step we would like to take with this is to match real film footage (portra etc.). both reference and film matching will require some global exposure calibration though.

references

  • [0] Ken Anjyo, J. P. Lewis, and Frédéric Pighin, “Scattered data interpolation for computer graphics”, in Proceedings of SIGGRAPH 2014 Courses, Article No. 27, 2014. pdf
  • [1] J. A. Tropp and A. C. Gilbert, “Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit”, in IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655-4666, Dec. 2007.

Tue 2016/Jun/28

  • La Mapería

    It is Hack Week at SUSE, and I am working on La Mapería (the map store), a little program to generate beautiful printed maps from OpenStreetMap data.

    I've gotten to the point of having something working: the tool downloads rendered map tiles, assembles them with Cairo as a huge PDF surface, centers the map on a sheet of paper, and prints nice margins and a map scale. This was harder than it looks: I am pretty good at dealing with pixel coordinates and transformations, but a total newbie with geodetic calculations, geographical coordinate conversions, and thinking in terms of a physical map scale instead of just a DPI and a paper size.

    (photos: two printed maps)

    The resulting chart has a map and a frame with arc-minute markings, and a map scale rule. I want to have a 1-kilometer UTM grid if I manage to wrap my head around map projections.

    Coordinates and printed maps

    The initial versions of this tool evolved in an interesting way. Assembling a map from map tiles is basically this:

    1. Figure out the tile numbers for the tiles in the upper-left and the lower-right corners of the map.
    2. Composite each tile into a large image, like a mosaic.

    The first step is pretty easy if you know the (latitude, longitude) of the corners: the relevant conversion from coordinates to tile numbers is in the OpenStreetMap wiki. The second step is just two nested for() loops that paste tile images onto a larger image.
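
    For reference, the (latitude, longitude) to tile number conversion from the OpenStreetMap wiki is tiny. Here is a Python sketch of it (my own transcription, not necessarily La Mapería's exact code):

    import math

    def deg2num(lat_deg, lon_deg, zoom):
        # standard slippy-map tile numbering
        lat_rad = math.radians(lat_deg)
        n = 2 ** zoom
        xtile = int((lon_deg + 180.0) / 360.0 * n)
        ytile = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
        return xtile, ytile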

    When looking at a web map, it's reasonably easy to find the coordinates for each corner. However, I found that printed maps want one to think in different terms. The map scale corresponds to the center of the map (it changes slightly towards the corners, due to the map's projection). So, instead of thinking of "what fits inside the rectangle given by those corners", you have to think in terms of "how much of the map will fit given your paper size and the map scale... around a center point".

    So, my initial tool looked like

    python3 make-map.py
            --from-lat=19d30m --from-lon=-97d
            --to-lat=19d22m --to-lon=-96d47m
            --output=output.png

    and then I had to manually scale that image to print it at the necessary DPI for a given map scale (1:50,000). This was getting tedious. It took me a while to convert the tool to think in terms of these:

    • Paper size and margins
    • Coordinates for the center point of the map
    • Printed map scale

    Instead of providing all of these parameters in the command line, the program now takes a little JSON configuration file.
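
    As a back-of-the-envelope illustration of that way of thinking (a crude spherical approximation, not La Mapería's actual geodesy), the corner coordinates fall out of the center point, the printed scale and the paper size:

    import math

    def corners_from_center(lat, lon, scale, paper_w_mm, paper_h_mm):
        # ground distance covered by the paper, in meters
        ground_w = paper_w_mm * scale / 1000.0
        ground_h = paper_h_mm * scale / 1000.0
        # approximate meters per degree near the center latitude
        m_per_deg_lat = 111320.0
        m_per_deg_lon = 111320.0 * math.cos(math.radians(lat))
        half_dlat = ground_h / m_per_deg_lat / 2.0
        half_dlon = ground_w / m_per_deg_lon / 2.0
        # returns the NW and SE corners as (lat, lon) pairs
        return (lat + half_dlat, lon - half_dlon), (lat - half_dlat, lon + half_dlon)

    # e.g. an A4 landscape sheet (297 x 210 mm) at 1:50,000
    print(corners_from_center(19.43, -96.88, 50000, 297, 210))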

    La Mapería generates a PDF or an SVG (for tweaking with Inkscape before sending it off to a printing bureau). It draws a nice frame around the map, and clips the map to the frame's dimensions.

    La Mapería is available on github. It may or may not work out of the box right now; it includes my Mapbox access token — it's public — but I really would like to avoid people eating my Mapbox quota. I'll probably include the map style data with La Mapería's source code so that people can create their own Mapbox accounts.

    Over the rest of the week I will be documenting how to set up a Mapbox account and a personal TileStache cache to avoid downloading tiles repeatedly.

June 26, 2016

How to un-deny a host blocked by denyhosts

We had a little crisis Friday when our server suddenly stopped accepting ssh connections.

The problem turned out to be denyhosts, a program that looks for things like failed login attempts and blacklists IP addresses.

But why was our own IP blacklisted? Apparently because I'd been experimenting with mailsync, which used to be a useful program for synchronizing IMAP folders with local mail folders. But at least on Debian, it has broken in a fairly serious way, so that it makes three or four tries with the wrong password before it actually uses the right one that you've configured in .mailsync. These failed logins are a good way to get yourself blacklisted, and there doesn't seem to be any way to fix mailsync or the c-client library it uses under the covers.

Okay, so first, stop using mailsync. But then how to get our IP off the server's blacklist? Just editing /etc/hosts.deny didn't do it -- the IP reappeared there a few minutes later.

A web search found lots of solutions -- you have to edit a long list of files, but no two articles had the same file list. It appears that it's safest to remove the IP from every file in /var/lib/denyhosts.

So here are the step-by-step instructions.

First, shut off the denyhosts service:

service denyhosts stop

Go to /var/lib/denyhosts/ and grep for any file that includes your IP:

grep aa.bb.cc.dd *

(If you aren't sure what your IP is as far as the outside world is concerned, Googling what's my IP will helpfully tell you, as well as giving you a list of other sites that will also tell you.)

Then edit each of these files in turn, removing your IP from them (it will probably be at the end of the file).
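
If there are many files, a little shell loop can do the same edits in one go (a sketch only -- substitute your own IP, keep the service stopped while you do this, and consider backing up the directory first):

cd /var/lib/denyhosts
for f in $(grep -l 'aa\.bb\.cc\.dd' *); do
    sed -i '/aa\.bb\.cc\.dd/d' "$f"
done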

When you're done with that, you have one more file to edit: remove your IP from the end of /etc/hosts.deny.

You may also want to add your IP to /etc/hosts.allow, but it may not make much difference, and if you're on a dynamic IP it might be a bad idea since that IP will eventually be used by someone else.

Finally, you're ready to re-start denyhosts:

service denyhosts start

Whew, un-blocked. And stay away from mailsync. I wish I knew of a program that actually worked to keep IMAP and mbox mailboxes in sync.

June 23, 2016

Siggraph 2016 Computer Animation Festival Selections

We are proud to share the news that 3 films completely produced with Blender have been selected for the 43rd Computer Animation Festival, to be held in Anaheim, California, 24-28 July 2016! The films are Cosmos Laundromat (Blender Institute, directed by Mathieu Auvray), Glass Half (Blender Institute, directed by Beorn Leonard) and Alike (directed and produced by Daniel M. Lara and Rafa Cano).

The films are going to be screened at the Electronic Theater, which is one of the highlights of the SIGGRAPH conference. SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research and it is an honour to see such films in the same venue where computer graphics has been pioneered for decades.

Here you can see a trailer of the Animation Festival, where some shots of Cosmos Laundromat can be spotted.

June 22, 2016

Sharing is Caring


Letting it all hang out

It was always my intention to make the entire PIXLS.US website available under a permissive license. The content is already all licensed Creative Commons, By Attribution, Share-Alike (unless otherwise noted). I just hadn’t gotten around to actually posting the site source.

Until now(ish). I say “ish” because I apparently released the code back in April and am just now getting around to talking about it.

Also, we finally have a category specifically for all those darktable weenies on discuss!

Don’t Laugh

I finally got around to pushing my code for this site up to Github on April 27 (I’m basing this off git logs because my memory is likely suspect). It took a while, but better late than never? I think part of the delay was a bit of minor embarrassment on my part for being so sloppy with the site code. In fact, I’m still embarrassed - so don’t laugh at me too hard (and if you do, at least don’t point while laughing too).

Carrie White: Brian De Palma’s interpretation of my fears…

So really this post is just a reminder to anyone that was interested that this site is available on Github:

https://github.com/pixlsus/

In fact, we’ve got a couple of other repositories under the Github Organization PIXLS.US including this website, presentation assets, lighting diagram SVG’s, and more. If you’ve got a Github account or wanted to join in with hacking at things, by all means send me a note and we’ll get you added to the organization asap.

Note: you don’t need to do anything special if you just want to grab the site code. You can do this quickly and easily with:

git clone https://github.com/pixlsus/website.git

You actually don’t even need a Github account to clone the repo, but you will need one if you want to fork it on Github itself, or to send pull-requests. You can also feel free to simply email/post patches to us as well:

git format-patch testing --stdout > your_awesome_work.patch

Being on Github means that we also now have an issue tracker to report any bugs or enhancements you’d like to see for the site.

So no more excuses - if you’d like to lend a hand just dive right in! We’re all here to help! :)

Speaking of Helping

Speaking of which, I wanted to give a special shout-out to community member @paperdigits (Mica), who has been active in sharing presentation materials in the Presentations repo and has been actively hacking at the website. Mica’s recommendations and pull requests are helping to make the site code cleaner and better for everyone, and I really appreciate all the help (even if I am scared of change).

Thank you, Mica! You rock!

Those Stinky darktable People

Yes, after member Claes asked the question on discuss about why we didn’t have a darktable category on the forums, I relented and created one. Normally I want to make sure that any category is going to have active people to maintain and monitor the topics there. I feel like having an empty forum can sometimes be detrimental to the perception of a project/community.

darktable logo

In this case, any topics in the darktable category will also show up in the more general Software category as well. This way the visibility and interactions are still there, but with the added benefit that we can now choose to see only darktable posts, ignore them, or let all those stinky users do what they want in there.

Besides, now we can say that we’ve sufficiently appeased Morgan Hardwood‘s organizational needs…

So, come on by and say hello in the brand new darktable category!

June 21, 2016

AAA game, indie game, card-board-box

Early bird gets eaten by the Nyarlathotep
 
The more adventurous of you can use those (designed as embeddable) Lua scripts to transform your DRM-free GOG.com downloads into Flatpaks.

The long-term goal would obviously be for this not to be needed, and for online games stores to ship ".flatpak" files, with metadata so we know what things are in GNOME Software, which automatically picks up the right voice/subtitle language, and presents its extra music and documents in the respective GNOME applications.
 
But in the meanwhile, and for the sake of the games already out there, there's flatpak-games. Note that lua-archive is still fiddly.
 
Support for a few more Humble Bundle formats (some are already supported), grab-all RPMs and Debs, and those old Loki games is also planned.
 
It's late here, I'll be off to do some testing I think :)

PS: Even though my personal collection already has enough programs that fail to create bundles to keep me busy without "game donations", I'm still looking for original copies of Loki games. Drop me a message if you can spare one!

Sharing Galore


or, Why This Community is Awesome

Community member and RawTherapee hacker Morgan Hardwood brings us a great tutorial + assets from one of his strolls near the Söderåsen National Park (Sweden!). Ofnuts is apparently trying to get me to burn the forum down by sharing his raw file of a questionable subject. And after I bugged David Tschumperlé, he found a neat solution to generating a median (pixel) blend of a large number of images without making your computer throw itself out a window.

So much neat content being shared for everyone to play with and learn from! Come see what everyone is doing!

Old Oak - A Tutorial

Sometimes you’re just hanging out minding your own business and talking photography with friends and other Free Software nuts when someone comes running by and drops a great tutorial in your lap. Just as Morgan Hardwood did on the forums a few days ago!

Old Oak by Morgan Hardwood (cc-by-sa)

He introduces the image and post:

There is an old oak by the southern entrance to the Söderåsen National Park. Rumor has it that this is the oak under which Gandalf sat as he smoked his pipe and penned the famous saga about J.R.R. Tolkien. I don’t know about that, but the valley rabbits sure love it.

The image itself is a treat. I personally love images where the lighting does interesting things and there are some gorgeous things going on in this image. The diffused light flooding in under the canopy on the right with the edge highlights from the light filtering down make this a pleasure to look at.

Of course, Morgan doesn’t stop there. You should absolutely go read his entire post. He not only walks through his entire thought process and workflow starting at his rationale for lens selection (50mm f/2.8) all the way through his corrections and post-processing choices. To top it all off, he has graciously shared his assets for anyone to follow along! He provides the raw file, the flat-field, a shot of his color target + DCP, and finally his RawTherapee .PP3 file with all of his settings! Whew!

If you’re interested I urge you to go check out (and participate!) in his topic on the forums: Old Oak - A Tutorial.

I Will Burn This Place to the Ground

Speaking of sharing material, Ofnuts has decided that he apparently wants me to burn the forums to the ground, put the ashes in a spaceship, fly the spaceship into the sun, and to detonate the entire solar system into a singularity. Why do I say this?

Kill it with fire!

Because he started a topic appropriately entitled: “NSFPAOA (Not Suitable for Pat and Other Arachnophobes)”, in which he shares his raw .CR2 file for everyone to try their hand at processing that cute little spider above. There have already been quite a few awesome interpretations from folks in the community like:

A version by CarVac
By MLC/Morgin
By Jonas Wagner
By iarga
By PkmX
By Kees Guequierre

Of course, I had a chance to try processing it as well. Here’s what I ended up with:

Flames

Ahhhh, just writing this post is a giant bag of NOPE*. If you’d like to join in on the fun(?) and share your processing as well - go check out the topic!

Now let’s move on to something more cute and fuzzy, like an ALOT…

* I kid, I’m not really an arachnophobe (within reason), but I can totally see why someone would be.

Median Blending ALOT of Images with G’MIC

The ALOT, borrowed from Allie Brosh’s Hyperbole and a Half, and here because I really wanted an excuse to include it.

I count myself lucky to have so many smart friends that I can lean on to figure out or help me do things (more on that in the next post). One of those friends is G’MIC creator and community member David Tschumperlé.

A few years back he helped me with some artwork I was generating with imagemagick at the time. I was averaging images together to see what an amalgamation would look like. For instance, here is what all of the Sports Illustrated swimsuit edition (NSFW) covers (through 2000) look like, all at once:

Sports Illustrated swimsuit covers through 2000

A natural progression of this idea was to consider doing a median blend instead of a mean. The problem is that a mean is very easy and fast to calculate as you advance through the image stack, but the median is not. This became relevant when I began to look at doing these for videos (in particular music videos), where the image stack could easily be 5,000+ images for a video (that is ALOT of frames!).

It’s relatively easy to generate a running average for a series of numbers, but generating the median value requires that the entire stack of numbers be loaded and sorted. This makes it prohibitive to do on a huge number of images, particularly at HD resolutions.

So it’s awesome that, yet again, David has found a solution to the problem! He explains it in greater detail on his topic:

A guide about computing the temporal average/median of video frames with G’MIC

He basically chops up the image frame into regions, then computes the pixel-median value for those regions. Here’s an example of his result:

Mean/median samples from the P!nk - Try music video.

Now I can start utilizing median blends more often in my experiments, and I’m quite sure folks will find other interesting uses for this type of blending!

June 18, 2016

Cave 6" as a Quick-Look Scope

I haven't had a chance to do much astronomy since moving to New Mexico, despite the stunning dark skies. For one thing, those stunning dark skies are often covered with clouds -- New Mexico's dramatic skyscapes can go from clear to windy to cloudy to hail or thunderstorms and back to clear and hot over the course of a few hours. Gorgeous to watch, but distracting for astronomy, and particularly bad if you want to plan ahead and observe on a particular night. The Pajarito Astronomers' monthly star parties are often clouded or rained out, as was the PEEC Nature Center's moon-and-planets star party last week.

That sort of uncertainty means that the best bet is a so-called "quick-look scope": one that sits by the door, ready to be hauled out if the sky is clear and you have the urge. Usually that means some kind of tiny refractor; but it can also mean leaving a heavy mount permanently set up (with a cover to protect it from those thunderstorms) so it's easy to carry out a telescope tube and plunk it on the mount.

I have just that sort of scope sitting in our shed: an old, dusty Cave Astrola 6" Newtonian on an equatorial mount. My father got it for me on my 12th birthday. Where he got the money for such a princely gift -- we didn't have much in those days -- I never knew, but I cherished that telescope, and for years spent most of my nights in the backyard peering through the Los Angeles smog.

Eventually I hooked up with older astronomers (alas, my father had passed away) and cadged rides to star parties out in the Mojave desert. Fortunately for me, parenting standards back then allowed a lot more freedom, and my mother was a good judge of character and let me go. I wonder if there are any parents today who would let their daughter go off to the desert with a bunch of strange men? Even back then, she told me later, some of her friends ribbed her -- "Oh, 'astronomy'. Suuuuuure. They're probably all off doing drugs in the desert." I'm so lucky that my mom trusted me (and her own sense of the guys in the local astronomy club) more than her friends.

The Cave has followed me through quite a few moves, heavy, bulky and old fashioned as it is; even when I had scopes that were bigger, or more portable, I kept it for the sentimental value. But I hadn't actually set it up in years. Last week, I assembled the heavy mount and set it up on a clear spot in the yard. I dusted off the scope, cleaned the primary mirror and collimated everything, replaced the finder which had fallen out somewhere along the way, set it up ... and waited for a break in the clouds.

[Hyginus Rille by Michael Karrer] I'm happy to say that the optics are still excellent. As I write this (to be posted later), I just came in from beautiful views of Hyginus Rille and the Alpine Valley on the moon. On Jupiter the Great Red Spot was just rotating out. Mars, a couple of weeks before opposition, is still behind a cloud (yes, there are plenty of clouds). And now the clouds have covered the moon and Jupiter as well. Meanwhile, while I wait for a clear view of Mars, a bat makes frenetic passes overhead, and something in the junipers next to my observing spot is making rhythmic crunch, crunch, crunch sounds. A rabbit chewing something tough? Or just something rustling in the bushes?

I just went out again, and now the clouds have briefly uncovered Mars. It's the first good look I've had at the Red Planet in years. (Tiny achromatic refractors really don't do justice to tiny, bright objects.) Mars is the most difficult planet to observe: Dave likes to talk about needing to get your "Mars eyes" trained for each Mars opposition, since they only come every two years. But even without my "Mars eyes", I had no trouble seeing the North pole with dark Acidalia enveloping it, and, in the south, the sinuous chain of Sinus Sabaeus, Meridiani, Margaritifer, and Mare Erythraeum. (I didn't identify any of these at the time; instead, I dusted off my sketch pad and sketched what I saw, then compared it with XEphem's Mars view afterward.)

I'm liking this new quick-look telescope -- not to mention the childhood memories it brings back.

June 17, 2016

Appimages, Snaps, XDG-Apps^WFlatpaks

Lots of excitement... When Canonical announced that their snaps work on a number of other Linux distributions, the reactions were predictable, sort of amusing and missing the point.

In the end, all this going back and forth, these are just turf wars. There are Redhat/Fedora people scared and horrified that Canonical/Ubuntu might actually set a standard for once, there are probably Canonical/Ubuntu people scared that they might not set a standard (though after several days of this netstorm, I haven't seen anything negative from their side), and there are traditional packagers worried that the world may change and that they'll lose their "curating" position.

And there's me, scared that I'll have to maintain debs, rpms, flatpaks, snaps, appimages, OSX bundles, MSI installers, NSIS installers and portable zips. My perspective is a bit that of an outsider: I don't care about the politics, though I do wish it weren't a dead certainty that we'll end up having both flatpaks (horrible name, by the way) and snaps in the Linux world.

Both the Canonical and the Fedora side claim to be working with the community, and, certainly, I was approached about snap and helped make a Krita snap. Which is a big win, both for me and for snap. But both projects ignore the appimage project, which is a real community effort, without corporate involvement. Probably because there is no way for companies to use appimage to create lock-in or a chance at monetization; it'll always be a community project, ignored by the big Linux companies.

Here's my take, speaking as someone who is actually releasing software to end users using some of these new-fangled systems.

The old rpm/deb way of packaging is excellent for creating the base system. For software where having the latest version doesn't matter that much for productivity. It's a system that's been used for about twenty years and served us reasonably well. But if you are developing software for end users that is regularly updated, where the latest version is important because it always has improvements that let the users do more work, it's a problem. It's a ghastly drag having to actually make the packages if you're not part of a distribution, and having to make packages for several distributions is not feasible for a small team. And if we don't, then when there are distributions that do not backport new versions to old releases because they only backport bugfixes, not releases, users lose out.

Snap turns out to be pretty easy to make, and pretty easy to upload to Ubuntu's app store, and pretty easy to find once it's there, seeing that there were already more than a thousand downloads after a few days. I don't care about the security technology, that's just not relevant for Krita. If you use Krita, you want it to access your files. It takes about five minutes to make a new snap and upload it -- pretty good going. I was amazed and pleased that the snap now runs on a number of other distributions, and if Canonical/Ubuntu follows up on that, plugs the holes and fixes the bugs, it'll be a big plus. Snap also offers all kinds of flexibility, like adding a patched Qt, that I haven't even tried yet. I also haven't checked how to add translations yet, but that's also because the system we use to release translations for Krita needs changing, and I want to do that first.

I haven't got any experience with flatpak. I know there was a start on making a Krita flatpak, but I haven't seen any results. I think that the whole idea of a runtime, which is a dependency thing, is dumb, though. Sure, it'll save some disk space, but at the cost of added complexity. I don't want that. For flatpak, I'll strike a wait-and-see attitude: I don't see the need for it, but if it materializes, and takes as little of my time as snap, I might make them. Unless I need to install Fedora for it, because that's one Linux distribution that just doesn't agree with me.

Appimages, finally, are totally amazing, because they run everywhere. They don't need any kind of runtime or installation. Creating the initial AppImage recipe took a lot of time and testing, mainly because of the run-everywhere requirement. That means fiddly work trying to figure out which low-level libraries need to be included to make OpenGL work, and which don't. There might be bumps ahead, for instance if we want to start using OpenCL -- or so I was told in a comment on LWN. I don't know yet. Integration with the desktop environment is something Simon is working on, by installing a .desktop file in the user's home directory. Sandboxing is also being worked on, using some of the same technology as flatpak, apparently. Automatic updates is also something that is becoming possible. I haven't had time to investigate those things yet, because of release pressures, kickstarter pressures and all that sort of thing. One possible negative about appimages is that users have a hard time understanding them -- they just cannot believe that download, make executable, go is all there's to it. So much so that I've considered making a tar.xz with an executable appimage inside so users are in a more familiar territory. Maybe even change the extension from .appimage to .exe?

Anyway, when it comes to actually releasing software to end users in a way that doesn't drive me crazy, I love AppImages, I like snap, I hate debs, rpms, repositories, ppa's and their ilk and flatpak has managed to remain a big unknown. If we could get a third format to replace all the existing formats, say flatsnapimage, wouldn't that be lovely?

Wouldn't it?

June 16, 2016

silverorange job opening: Back-end Web Developer

Silverorange, the web design and development company where I work, is looking to hire another great back-end web developer. It’s a nice place to work.

Translation parameters in angular-gettext

As a general rule, I try not to include new features in angular-gettext: small is beautiful and for the most part I consider the project as finished. However, Ernest Nowacki just contributed one feature that was too good to leave out: translation parameters.

To understand what translation parameters are, consider the following piece of HTML:

<span translate>Last modified: {{post.modificationDate | date : 'yyyy-MM-dd HH:mm'}} by {{post.author}}.</span>

The resulting string that needs to be handled by your translators is both ugly and hard to use:

msgid "Last modified: {{post.modificationDate | date : 'yyyy-MM-dd HH:mm'}} by {{post.author}}."

With translation parameters you can add local aliases:

<span translate
      translate-params-date="post.modificationDate | date : 'yyyy-MM-dd HH:mm'"
      translate-params-author="post.author">
    Last modified: {{date}} by {{author}}.
</span>

With this, translators only see the following:

msgid "Last modified: {{date}} by {{author}}."

Simply beautiful.

You’ll need angular-gettext v2.3.0 or newer to use this feature.

More information in the documentation: https://angular-gettext.rocketeer.be/dev-guide/translate-params/.



June 15, 2016

Running Krita Snaps on Other Distributions

This is pretty cool: in the week before the Krita release, Michael Hall submitted a snapcraft definition for making a Krita snap. A few iterations later, we have something that works (unless you're using an NVidia GPU with the proprietary drivers). Adding Krita to the Ubuntu app store was also really easy.

And now, if you go to snapcraft.io and click on a Linux distribution's logo, you'll get instructions on how to get snap running on your system -- and that means the snap package for Krita can work on Arch, Debian, Fedora, Gentoo -- and Ubuntu of course. Pretty unbelievable! OpenSUSE is still missing though...

Of course, running a snap still means you need to install something before you can run Krita, while an AppImage doesn't need anything beyond making it executable. Over the past month, I've encountered a lot of Linux users who just couldn't believe it's so easy, and were asking for install instructions :-)

June 11, 2016

The 2016 Kickstarter

This year's kickstarter fundraising campaign for Krita was more nerve-wracking than the previous two editions. Although we ended up 135% funded, around the middle of the campaign we were almost afraid we wouldn't make it. Maybe only the release of Krita 3.0 turned the campaign around. Here's my chaotic and off-the-cuff analysis of this campaign.

Campaign setup

We were ambitious this year and once again decided upon two big goals: text and vector, because we felt both are real pain points in Krita that really need to be addressed. I think now that we probably should have made both into super-stretch goals one level above the 10,000 euro Python stretch goal and let our community decide.

Then we could have made the base level one stretch goal of 15,000 euros, and we'd have been "funded" on the second day, meeting the Kickstarter expectation that a successful campaign is funded immediately. Then we could have opened the paypal pledges really early into the campaign and advertised the option properly.

We also hadn't thought through some stretch goals in sufficient depth, so sometimes we weren't totally sure ourselves what we were offering people. This contrasts with last year, where the stretch goals were precisely defined. (But during development they became gold-plated -- a 1500 euro stretch goal should be two weeks of work, which sometimes became four or six weeks.)

We did have a good story, though, which is the central part of any fundraiser. Without a good story that can be summarized in one sentence, you'll get nowhere. And text and vector have been painful for our users for years now, so that part was fine.

We're also really well-oiled when it comes to preparation: Irina, me and Wolthera sat together for a couple of weekends to first select the goals, then figure out the reward levels and possible rewards, and then to write the story and other text. We have lists of people to approach, lists of things that need to be written in time to have them translated into Russian and Japanese -- that's all pretty well oiled.

Our list of rewards wasn't perfect either, so we had to make some in-campaign additions, and we made at least one mistake: we added a 25 euro level when the existing 25 euro rewards had sold out. But the existing rewards re-used overstock from last year, and for the new level we have to have new goodies made. That means our cost for those rewards is higher than we thought. Not so high that those 25 euro pledges don't help towards development, but it's still a mistake.

Our video was very good this year: about half of the plays were watched to the end, which is an amazing score!

Kickstarter is becoming a tired formula

Already after two days, people were saying on the various social media sites that we wouldn't make it. The impression with Kickstarter these days is that if you're not 100% funded in one or two days, you're a failure. Kickstarter has also become that site where you go for games, gadgets and gags.

We also noticed less engagement: fewer messages and comments on the kickstarter site itself. That could have been a function of a less attractive campaign, of course.

That Kickstarter still hasn't got a deal with Paypal is incredible. And Kickstarter's campaign tools are unbelievably primitive: from story editor to update editor (both share the same wysiwyg editor which is stupidly limited, and you can only edit updates for 30 minutes) to the survey tools, which don't allow copy and paste between reward levels or any free text except in the intro. Basically, Kickstarter isn't spending any money on its platform any more, and it shows.

It is next to impossible to get news coverage for a fundraising campaign

You'd think that "independent free software project funds full-time development through community, not commercial, support" would make a great story, especially when the funding is a success and the results are visible for everyone. You'd think that especially the free software oriented media would be interested in a story like this. But, with some exceptions, no.

Last year, I was told by a journalist reporting on free and open source software that there are too many fundraising campaigns to cover. He didn't want to drown his readers in them, and it would be unethical to ignore some and cover others.

But are there so many fundraisers for free software? I don't know, since none get into the news. I know about a few, mostly in the graphics software category -- synfig, blender, Jehan's campaign for Zemarmot, the campaign by the Software Freedom Conversancy, KDE's Randa campaign. But that's really just a handful.

I think that the free and open source news media are doing their readers a disservice by not covering campaigns like ours; and they are doing the ecosystem a disservice. Healthy, independent projects that provide software in important categories, like Krita, are essential for free software to prosper.

Exhaustion

Without the release, we might not have made it. But doing a Kickstarter is exhausting: it's only a month, but feels like two or three. Doing a release and a Kickstarter is double exhausting. We did raise Krita's profile and userbase to a whole other level, though! (Which also translates into a flood of bug reports, and bugzilla basically has become unmanageable for us: we need more triagers and testers, badly!)

Right now, I'd like to take a few days off, and Dmitry smartly is taking a few days off, but there's still so much on my backlog that it's not going to happen.

I also had a day job for three days a week during the campaign, during which I wasn't available for social media work or promo, and I really felt that to be a problem. But I need that job to fund my own work on Krita...

Referrers

Kickstarter lets one know where the backers are coming from. Kickstarter itself is a source of backers: about 4500 euros came from Kickstarter itself. Next up is Reddit with 3000 euros, twitter with 1700, facebook 1400, krita.org 1000 and blendernation with 900. After that, the long tail starts. So, in the absence of news coverage, social media is really important and the Blender community is once again proven to be much bigger than most people in the free software community realize.

Conclusion

The campaign was a success, and the result pretty much the right size, I think. If we had double the result, we would have had to find another freelancer to work on Krita full-time. I'm not sure we're ready for that yet. We've also innovated this year, by deciding to offer artists in our communities commissions to create art for the rewards. That's something we'll be setting in motion soon.

Another innovation is that we decided to produce an art book with work by Krita artists. Calls for submissions will go out soon! That book will also go into the shop, and it's kind of an exercise for the other thing we want to do this year: publish a proper Pepper and Carrot book.

If sales from books will help fund development further, we might skip one year of Kickstarter-like fund raising, in the hope that a new platform will spring up that will offer a fresh way of doing fund raising.

June 10, 2016

Visual diffs and file merges with vimdiff

I needed to merge some changes from a development file into the file on the real website, and discovered that the program I most often use for that, meld, is in one of its all too frequent periods where its developers break it in ways that make it unusable for a few months. (Some of this is related to GTK, which is a whole separate rant.)

That led me to explore some other diff/merge alternatives. I've used tkdiff quite a bit for viewing diffs, but when I tried to use it to merge one file into another I found its merge just too hard to use. Likewise for emacs: it's a wonderful editor but I never did figure out how to get ediff to show diffs reliably, let alone merge from one file to another.

But vimdiff looked a lot easier and had a lot more documentation available, and actually works pretty well.

I normally run vim in an xterm window, but for a diff/merge tool, I want a very wide window which will show the diffs side by side. So I used gvimdiff instead of regular vimdiff:

gvimdiff docs.dev/filename docs.production/filename

Configuring gvimdiff to see diffs

gvimdiff initially pops up a tiny little window, and it ignores Xdefaults. Of course you can resize it, but who wants to do that every time? You can control the initial size by setting the lines and columns variables in .vimrc. About 180 columns by 60 lines worked pretty well for my fonts on my monitor, showing two 80-column files side by side. But clearly I don't want to set that in .vimrc so that it runs every time I run vim; I only want that super-wide size when I'm running a side-by-side diff.

You can control that by checking the &diff variable in .vimrc:

if &diff
    set lines=58
    set columns=180
endif

If you do decide to resize the window, you'll notice that the separator between the two files doesn't stay in the center: it gives you lots of space for the right file and hardly any for the left. Inside that same &diff clause, this somewhat arcane incantation tells vim to keep the separator centered:

    autocmd VimResized * exec "normal \<C-w>="

I also found that the colors, in the vim scheme I was using, made it impossible to see highlighted text. You can go in and edit the color scheme and make your own, of course, but an easy quick fix is to set all highlighting to one color, like yellow, inside the if &diff section:

    highlight DiffAdd    cterm=bold gui=none guibg=Yellow
    highlight DiffDelete cterm=bold gui=none guibg=Yellow
    highlight DiffChange cterm=bold gui=none guibg=Yellow
    highlight DiffText   cterm=bold gui=none guibg=Yellow

Merging changes

Okay, once you can view the differences between the two files, how do you merge from one to the other? Most online sources are quite vague on that, but it's actually fairly easy:

]c jumps to the next difference
[c jumps to the previous difference
dp makes them both look like the left side (apparently it stands for diff put)
do makes them both look like the right side (apparently it stands for diff obtain)

The only difficult part is that it's not really undoable. u (the normal vim undo keystroke) works inconsistently after dp: the focus is generally in the left window, so u applies to that window, while dp modified the right window and the undo doesn't apply there. If you put this in your .vimrc

nmap du :wincmd w<cr>:normal u<cr>:wincmd w<cr>
then you can use du to undo changes in the right window, while u still undoes in the left window. So you still have to keep track of which direction your changes are going.

Worse, neither undo nor this du command restores the highlighting showing there's a difference between the two files. So, really, undoing should be reserved for emergencies; if you try to rely on it much you'll end up being unsure what has and hasn't changed.

In the end, vimdiff probably works best for straightforward diffs, and it's probably best to get in the habit of always merging from right to left, using do. In other words, run vimdiff file-to-merge-to file-to-merge-from, and think about each change before doing it, to make it less likely that you'll need to undo.

And hope that whatever silly transient bug in meld drove you to use vimdiff gets fixed quickly.

June 09, 2016

Display Color Profiling on Linux


A work in progress

This article by Pascal de Bruijn was originally published on his site and is reproduced here with permission.  —Pat


Attention: This article is a work in progress, based on my own practical experience up until the time of writing, so you may want to check back periodically to see if it has been updated.

This article outlines how you can calibrate and profile your display on Linux, assuming you have the right equipment (either a colorimeter like for example the i1 Display Pro or a spectrophotometer like for example the ColorMunki Photo). For a general overview of what color management is and details about some of its parlance you may want to read this before continuing.

A Fresh Start

First you may want to check if any kind of color management is already active on your machine. If you see the following, then you’re fine:

$ xprop -display :0.0 -len 14 -root _ICC_PROFILE
_ICC_PROFILE: no such atom on any window.

However if you see something like this, then there is already another color management system active:

$ xprop -display :0.0 -len 14 -root _ICC_PROFILE
_ICC_PROFILE(CARDINAL) = 0, 0, 72, 212, 108, 99, 109, 115, 2, 32, 0, 0, 109, 110

If this is the case you need to figure out what and why… For GNOME/Unity based desktops this is fairly typical, since they extract a simple profile from the display hardware itself via EDID and use that by default. I’m guessing KDE users may want to look into this before proceeding. I can’t give much advice about other desktop environments though, as I’m not particularly familiar with them. That said, I tested most of the examples in this article with XFCE 4.10 on Xubuntu 14.04 “Trusty”.

Display Types

For purposes of our discussion, modern flat panel displays are composed of two major components: the backlight and the panel itself. There are various types of backlights: White LED (most common nowadays), CCFL (most common a few years ago), RGB LED and Wide Gamut CCFL, the latter two of which you’d typically find on higher end displays. The backlight primarily defines a display’s gamut and maximum brightness. The panel on the other hand primarily defines the maximum contrast and acceptable viewing angles. Most common types are variants of IPS (usually good contrast and viewing angles) and TN (typically mediocre contrast and poor viewing angles).

Display Setup

There are two main cases: laptop displays, which usually allow for little configuration, and regular desktop displays. For regular displays there are a few steps to prepare your display to be profiled. First you need to reset your display to its factory defaults. We leave the contrast at its default value. If your display has a feature called dynamic contrast you need to disable it; this is critical, and if you’re unlucky enough to have a display for which it cannot be disabled, then there is no use in proceeding any further. Then we set the color temperature setting to custom and set the R/G/B values to equal values (often 100/100/100 or 255/255/255). As for the brightness, set it to a level which is comfortable for prolonged viewing; typically this means reducing the brightness from its default setting, often to somewhere around 25–50 on a 0–100 scale. Laptops are a different story: often you’ll be fighting different lighting conditions, so you may want to consider profiling your laptop at its full brightness. We’ll get back to the brightness setting later on.

Before continuing any further, let the display settle for at least half an hour (as its color rendition may change while the backlight is warming up) and make sure the display doesn’t go into power saving mode during this time.

Another point worth considering is cleaning the display before starting the calibration and profiling process. Do keep in mind that displays often have relatively fragile coatings, which may be deteriorated by traditional cleaning products, or easily scratched using regular cleaning cloths. There are specialist products available for safely cleaning computer displays.

You may also want to consider dimming the ambient lighting while running the calibration and profiling procedure to prevent (potential) glare from being an issue.

Software

If you’re in a GNOME or Unity environment it’s highly recommended to use GNOME Color Manager (with colord and argyll). If you have recent versions (3.8.3, 1.0.5 and 1.6.2 respectively), you can profile and set up your display completely graphically via the Color applet in System Settings. It’s fully wizard driven and couldn’t be much easier in most cases. This is what I personally use and recommend. The rest of this article focuses on the case where you are not using it.

Xubuntu users in particular can get experimental packages for the latest argyll and optionally xiccd from my xiccd-testing PPAs. If you’re using a different distribution you’ll need to source help from its respective community.

Report On The Uncalibrated Display

To get an idea of the display’s uncalibrated capabilities we use argyll’s dispcal:

$ dispcal -H -y l -R
Uncalibrated response:
Black level = 0.4179 cd/m^2
50%   level = 42.93 cd/m^2
White level = 189.08 cd/m^2
Aprox. gamma = 2.14
Contrast ratio = 452:1
White     Visual Daylight Temperature = 7465K, DE 2K to locus =  3.2

Here we see the display has a fairly high uncalibrated native whitepoint at almost 7500K, which means the display is bluer than it should be. When we’re done you’ll notice the display becoming more yellow. If your display’s uncalibrated native whitepoint is below 6500K, you’ll notice it becoming more blue when loading the profile.

Another point to note is the fairly high white level (brightness) of almost 190 cd/m2; it’s fairly typical to target 120 cd/m2 for the final calibration, keeping in mind that we’ll lose 10 cd/m2 or so because of the calibration itself. So if your display reports a brightness significantly higher than 130 cd/m2, you may want to consider turning down the brightness another notch.

Calibrating And Profiling Your Display

First we’ll use argyll’s dispcal to measure and adjust (calibrate) the display, compensating for the display’s whitepoint (targeting 6500K) and gamma (targeting the industry standard 2.2, more info on gamma here):

$ dispcal -v -m -H -y l -q l -t 6500 -g 2.2 asus_eee_pc_1215p
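
If you couldn’t get the white level near the 120 cd/m2 discussed earlier using the display’s own brightness control, dispcal also accepts a target white level via its -b option. As a sketch, double-check dispcal’s usage output for your argyll version:

$ dispcal -v -m -H -y l -q l -t 6500 -g 2.2 -b 120 asus_eee_pc_1215p

Using the display’s own controls remains preferable where possible, since corrections done in the video card LUT cost precision.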

Next we’ll use argyll’s targen to generate measurement patches to determine its gamut:

$ targen -v -d 3 -G -f 128 asus_eee_pc_1215p

Then we’ll use argyll’s dispread to apply the calibration file generated by dispcal, and measure (profile) the display’s gamut using the patches generated by targen:

$ dispread -v -N -H -y l -k asus_eee_pc_1215p.cal asus_eee_pc_1215p

Finally we’ll use argyll’s colprof to generate a standardized ICC (version 2) color profile:

$ colprof -v -D "Asus Eee PC 1215P" -C "Copyright 2013 Pascal de Bruijn" \
          -q m -a G -n c asus_eee_pc_1215p
Profile check complete, peak err = 9.771535, avg err = 3.383640, RMS = 4.094142

The parameters used to generate the ICC color profile are fairly conservative and should be robust. They will likely provide good results for most use-cases. If you’re after better accuracy you may want to try replacing -a G with -a S or even -a s, but I very strongly recommend starting out with -a G.
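
For example, a higher accuracy attempt would look like this, identical to the command above except for the algorithm flag:

$ colprof -v -D "Asus Eee PC 1215P" -C "Copyright 2013 Pascal de Bruijn" \
          -q m -a S -n c asus_eee_pc_1215p

Comparing the reported peak and average errors against the -a G run should tell you whether the extra model complexity actually helps for your display.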

You can inspect the contents of a standardized ICC (version 2 only) color profile using argyll’s iccdump:

$ iccdump -v 3 asus_eee_pc_1215p.icc

To try the color profile we just generated we can quickly load it using argyll’s dispwin:

$ dispwin -I asus_eee_pc_1215p.icc

Now you’ll likely see a color shift toward the yellow side. For some possibly aged displays you may notice it shifting toward the blue side.
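
To compare before and after, you can clear the loaded calibration again with dispwin’s -c option (which loads a linear, i.e. neutral, calibration) and then reload the profile:

$ dispwin -c
$ dispwin -I asus_eee_pc_1215p.icc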

If you’ve used a colorimeter (as opposed to a spectrophotometer) to profile your display and if you feel the profile might be off, you may want to consider reading this and this.

Report On The Calibrated Display

Next we can use argyll’s dispcal again to check our newly calibrated display:

$ dispcal -H -y l -r
Current calibration response:
Black level = 0.3432 cd/m^2
50%   level = 40.44 cd/m^2
White level = 179.63 cd/m^2
Aprox. gamma = 2.15
Contrast ratio = 523:1
White     Visual Daylight Temperature = 6420K, DE 2K to locus =  1.9

Here we see the calibrated display’s whitepoint nicely around 6500K, as it should be.

Loading The Profile In Your User Session

If your desktop environment is XDG autostart compliant, you may want to consider creating a .desktop file which will load the ICC color profile at session login for all users:

$ cat /etc/xdg/autostart/dispwin.desktop
[Desktop Entry]
Encoding=UTF-8
Name=Argyll dispwin load color profile
Exec=dispwin -I /usr/share/color/icc/asus_eee_pc_1215p.icc
Terminal=false
Type=Application
Categories=

Alternatively you could use colord and xiccd for a more sophisticated setup. If you do, make sure you have recent versions of both, particularly of xiccd, as it’s still a fairly young project.

First we’ll need to start xiccd (in the background), which detects your connected displays and adds them to colord‘s device inventory:

$ nohup xiccd &

Then we can query colord for its list of available devices:

$ colormgr get-devices

Next we need to query colord for its list of available profiles (or alternatively search by a profile’s full filename):

$ colormgr get-profiles
$ colormgr find-profile-by-filename /usr/share/color/icc/asus_eee_pc_1215p.icc

Next we’ll need to assign our profile’s object path to our display’s object path:

$ colormgr device-add-profile \
   /org/freedesktop/ColorManager/devices/xrandr_HSD121PHW1_70842_pmjdebruijn_1000 \
   /org/freedesktop/ColorManager/profiles/icc_e7fc40cb41ddd25c8d79f1c8d453ec3f

You should notice your display’s color shift within a second or so (xiccd applies it asynchronously), assuming you haven’t already applied it via dispwin earlier (in which case you’ll notice no change).
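
To double-check that the assignment stuck, you can ask colord which profile is now the default for the device (assuming your version of colormgr provides this subcommand, as recent ones do):

$ colormgr device-get-default-profile \
   /org/freedesktop/ColorManager/devices/xrandr_HSD121PHW1_70842_pmjdebruijn_1000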

If you suspect xiccd isn’t properly working, you may be able to debug the issue by stopping all xiccd background processes, and starting it in debug mode in the foreground:

$ killall xiccd
$ G_MESSAGES_DEBUG=all xiccd

Also, in xiccd‘s case you’ll need to create a .desktop file to load xiccd at session login for all users:

$ cat /etc/xdg/autostart/xiccd.desktop
[Desktop Entry]
Encoding=UTF-8
Name=xiccd
GenericName=X11 ICC Daemon
Comment=Applies color management profiles to your session
Exec=xiccd
Terminal=false
Type=Application
Categories=
OnlyShowIn=XFCE;

You’ll note that xiccd does not need any parameters, since it will query colord‘s database for which profile to load.

If your desktop environment is not XDG autostart compliant, you’ll need to ask its community how to start custom commands (dispwin or xiccd respectively) during session login.

Dual Screen Caveats

Currently having a dual screen color managed setup is complicated at best. Most programs use the _ICC_PROFILE atom to get the system display profile, and there’s only one such atom. To resolve this issue new atoms were defined to support multiple displays, but not all applications actually honor them. So with a dual screen setup there is always a risk of applications applying the profile for your first display to your second display or vice versa.

So practically speaking, if you need a reliable color managed setup, you should probably avoid dual screen setups altogether.

That said, most of argyll’s commands support a -d parameter for selecting which display to work with during calibration and profiling, but I have no personal experience with it whatsoever, since I purposefully don’t have a dual screen setup.
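
As a sketch of what that would look like (untested, for the reasons given above), you would run the calibrate/profile cycle once per display, selecting each one via -d and using a distinct basename (second_display here is just a placeholder; note that targen’s -d selects the colorant space, not the display):

$ dispcal -v -m -H -y l -q l -t 6500 -g 2.2 -d 2 second_display
$ targen -v -d 3 -G -f 128 second_display
$ dispread -v -N -H -y l -d 2 -k second_display.cal second_display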

Application Support Caveats

As my other article explains, display color profiles consist of two parts. One part (whitepoint & gamma correction) is applied via X11 and thus benefits all applications. There is however a second part (gamut correction) that needs to be applied by the application, and application support for both input and display color management varies wildly. Many consumer grade applications have no color management awareness whatsoever.

Firefox can do color management and it’s half-enabled by default; read this to properly configure Firefox.

GIMP, for example, has display color management disabled by default; you need to enable it via its preferences.

Eye of GNOME has display color management enabled by default, but it has nasty corner case behaviors, for example when a file has no metadata no color management is done at all (instead of assuming sRGB input). Some of these issues seem to have been resolved on Ubuntu Trusty (LP #272584).

Darktable has display color management enabled by default and is one of the few applications which directly support colord and the display specific atoms as well as the generic _ICC_PROFILE atom as fallback. There are however a few caveats for darktable as well, documented here.



And done!

Of course, we should have posted this yesterday. Or earlier today! But when around midnight we opened the Champagne (only a half-bottle, and it was off, too! Mumm, booh!), we all felt we were at the end of a long, long month! We, that’s Boudewijn, Irina and Wolthera, gathered in Deventer for the occasion (and also for the Google Summer of Code). Over the past month, hundreds of bugs have been fixed, we’ve gone through an entire release cycle, and we managed another successful Kickstarter campaign! Exhaustion had set in, and we went for a walk around scenic Deventer to look at cows, sheep, dogs, swans, piglets, ducklings, budgerigars and chickens, and lots of fresh early summer foliage.

But not all was laziness! Yesterday, all Kickstarter backers got their surveys, and over half have already returned them! Today, the people who backed us through paypal got their surveys, and we got a fair return rate as well! Currently, the score looks like this:

  • 24. Python Scripting Plugin: 414
  • 8. SVG Import/Export: 373

With runners up…

  • 21. Flipbook/Sketchbook: 176
  • 2. Composition Guides: 167
  • 1. Transform from pivot point: 152
  • 7. Vector Layers as Mask: 132
  • 13. Arrange Layers: 129

The special goals selected by the 1500 euro backers are Improving the Reference Docker and — do what you developers think is most fun! That’s not an official stretch goal, but we’ve got some ideas…

Sources for Openly-Licensed Content

This morning I got an email from my colleague Tyler Golden, who was seeking advice on good places to get openly-licensed content, so I put together a list. It seems the list would be generally useful (especially for my new design interns, who will be blogging on Planet Fedora soon 🙂 ) so here you are, a blog post. 🙂

There are a lot more content types I could go through, but I’m going to stick to icons/graphics and photography for now. If you know of any other good sources in these categories (or desperately need another category of content covered), please let me know and I’ll update this list.

Also of note – please check the licenses of any materials you’re evaluating for use, and if they require attribution, please give it! It doesn’t have to be a major deal. (I covered this quite a bit in a workshop I’ve given a few times on Gimp & Inkscape, so you might want to check out that preso if you need more info on that.)

Icons / Graphics

  • The Noun Project

    noun-project

Ryan Lerch clued me in to this one. All of the graphics are Creative Commons (you have to provide attribution) or you can pay a small fee if you don’t want to have to attribute. There are a lot of nice vector-based icons here.

    thenounproject.com

  • Open Clip Art

    openclipart

    Everything is CC0 – no attribution needed – and all vector sources. Quality varies widely but there are some real gems in there. (My offerings are here, but my favorite collection is by the Fedora Design team’s gnokii. Ryan Lerch has a good set too!) There’s a plugin that comes with Inkscape that lets you search open clip art and pull in the artwork directly without having to go to the website.

    openclipart.org

  • Xaviju’s Inkscape Open Symbols

    font-awesome

I love these because you can browse the graphics right in Inkscape’s UI and drag whichever ones you want into your document. There are a lot of different libraries there with different licenses, but the github page gives links to all the upstreams. I’m a big fan of Font Awesome, which is one of the libraries here, and we’ve been using it in Fedora’s webapps as of late; except for the brand icons, they are all licensed under the Open Font License.

    github.com/Xaviju/inkscape-open-symbols

  • Stamen

    stamen

If you need a map, this app is awesome. It uses openly licensed OpenStreetMap data and styles it – there are watercolor and lithographic styles, just to name a couple. If you ever need a map graphic, definitely check this out.

    maps.stamen.com

Photos

  • Pixabay

    pixabay

    This site has photography, graphics, and videos all under a CC0 license (meaning: no attribution required.) For me, this site is a relative newcomer but has some pretty high-quality works.

    pixabay.com

  • Flickr

    flickr

    Flickr lets you search by license. I’ve gotten a lot of great content on there under CC BY / CC BY SA (both of which allow commercial use and modification.) (More on Flickr below.)

    flickr.com/search/?license=4%2C5%2C9%2C10

  • Wikimedia Commons

    wikicommons

    You have to be a bit careful on here because some stuff isn’t actually freely licensed but most of it is. (I have seen trademarked stuff get uploaded on here.) Just evaluate content on here with a critical eye!

    commons.wikimedia.org

  • Miscellaneous Government Websites

    loc

    Photography released by the US government is required to be public domain in many cases. I don’t know about other countries as much, but I’m sure it’s the case for some of them (Europeana is run by a consortium of various EU countries for example.) These agencies are also starting to publish to Flickr which is great. NASA publishes a lot of photos that are public domain; I’ve also gone through the Library of Congress website to get images.

  • CC Search

    ccsearch

    This is Creative Commons’ search engine; it lets you search a bunch of places that have openly-licensed content at once.

    search.creativecommons.org/

  • CompFight

    compfight

    This is an interface on top of Flickr. It lets you search for images and dictate which licenses you’re interested in. Using it can be faster than searching Flickr.

    compfight.com

But wait, there’s more!

Naheem linked me to this aptly-named awesome resource:

https://github.com/neutraltone/awesome-stock-resources

Even more goodness there!!

June 07, 2016

Recommended Reading: Trajectory Book 1 and 2 by Robert Campbell

Years ago I had the pleasure of meeting Deb Richardson and Rob Campbell, a couple who were both working at Mozilla at the time. They came to our Zap Your PRAM conference in Dalvay back in 2008.

Rob was working on the Firefox dev tools, which had begun to lag behind Chrome, and have since become great again.

Trajectory by Rob Campbell

Then last year, I saw that Rob was self-publishing a science-fiction novel. This interested me as several of the books I’ve enjoyed recently are in the genre (Seveneves by Neal Stephenson, Kim Stanley Robinson’s Aurora, and my all-time favourite, the Mars trilogy). However, I was concerned. What if someone you know invites you to their comedy night and just isn’t funny? Fortunately, this wasn’t the case with Rob.

Rob’s book, Trajectory Book 1 was great. Easy to read, interesting, and nerdy in the right ways. My only complaint was that it ended abruptly. The solution to this, obviously, is Book 2, which came out yesterday.

If you have any interest in science fiction, I can gladly recommend Rob Campbell’s Trajectory Book 1 (and I’m looking forward to starting Trajectory Book 2).

June 06, 2016

David Revoy livestreaming on Twitch

Ten hours before the end of the Kickstarter (Tuesday June 7, from 21:00 to 23:00 CEST, UTC+2), David Revoy will draw in public! You can follow it on the official Krita channel on twitch.tv:  https://www.twitch.tv/artwithkrita

Read more about it on David’s website

Interview with Sara Tepes

Tranquil

Could you tell us something about yourself?

My name’s Sara Tepes, I’m 17 years old, I was born in Romania but grew up in the U.S., and I live super close to Washington D.C. I love roses, rabbits, tea, and historical movies.

Do you paint professionally, as a hobby artist, or both?

I work on commissions and various projects and get paid for my work, so I’m sort of a freelance part-time illustrator, but I also draw and paint as a hobby. I hope to major in graphic design and be a professional full time freelancer.

What genre(s) do you work in?

Traditional drawing and both digital and traditional painting.
Garden

Whose work inspires you the most—who are your role models as an artist?

I have been inspired by Tony DiTerlizzi ever since I was a tiny kid who read the Spiderwick Chronicles. He was my art god for the longest time, and I still love his work; his technique is brilliant and the creatures he creates are just alive on the paper. Lois Van Baarle, aka Loish, (http://loish.net/) has been a huge role model ever since I first discovered her digital paintings in 2012. Her paintings have such wonderful colors, details, expressions and body language!

Traditional painters include John Singer Sargent, John William Waterhouse, Claude Monet, Gustav Klimt, and Alphonse Mucha.

How and when did you get to try digital painting for the first time?

I used to read a bunch of ”How to Draw Manga” books which discussed basic digital art with cell shading. I started digital painting in 2011, when I was 12, with this old, crappy photo effect program. It basically had an airbrush feature, a select tool and a paint bucket. It was super simplistic and wasn’t even meant for digital painting, but I really wanted to digitally color in the manga drawings I was doing at the time.

What makes you choose digital over traditional painting?

Well, I work in both mediums, but I generally prefer digital painting because it’s super reliable. I don’t have to worry about my paint palette drying before I can use it, working in terrible lighting and getting all the colors skewed up, or having a really long drying time on the canvas. There’s no prep or cleanup to it.

How did you find out about Krita?

When I got a new computer, I was looking for free digital painting software. I was 13 and didn’t have $900 for Adobe Photoshop, but I didn’t like pirating the program. I found MyPaint, Gimp, and Krita and installed and used all of them.

What was your first impression?

I was curious about the program but I didn’t like the brush blending. At the time, all the “painterly” brushes had color blending and it annoyed me a lot. Thank God that’s not the case with the program right now!

What do you love about Krita?

It doesn’t have a huge learning curve like other programs. It’s straightforward, super professional, has a bunch of great features and brushes, and autosaves every minute! It’s pretty fantastic!

What do you think needs improvement in Krita? Is there anything that really annoys you?

The ONLY thing that I don’t like about Krita is that it doesn’t have a freehand warping tool like Photoshop’s Liquify or Gimp’s iWarp. That would be really helpful, honestly.

If you had to pick one favorite of all your work done in Krita so far, what would it be, and why?

Probably “Red Dress”. I love the backlighting and the vibrant red highlights. I really have to focus on how colors are affected by light, and I think I did a pretty good job with this one.
Red Dress

What techniques and brushes did you use in it?

Just the color tool and the bristles_hairy brush.

Where can people see more of your work?

Instagram: https://instagram.com/sarucatepes/
Twitter: https://twitter.com/sarucatepes
Tumblr: http://themerbunny.tumblr.com
Pinterest: https://www.pinterest.com/sarahandaric/
DeviantArt: http://sarucatepes.deviantart.com
Google+: https://plus.google.com/+SaraTepes

June 05, 2016

Digital diaphragm for optical lenses

In photography most optical lenses use mechanical diaphragms for aperture control. They are traditionally manufactured from metal blades and work quite well. However, metal blades come with some disadvantages:

  • mechanical parts will sooner or later fail
  • the cheaper forms give strong diffraction spikes
  • manufacturers need more metal blades for a round iris, which is expensive
  • a metal blade with its sharp edges gives artefacts, which are visible in out-of-focus regions
  • on the plus side, though, contrast is very high thanks to the opaque metal

In order to obtain a better bokeh, some lenses are equipped with apodization filters. Those filters mostly work at fully open aperture, and are very specialised and thus relatively expensive.

A digital aperture built as a transparent display with enough spatial resolution can not only improve the shape of the diaphragm; it could also act as an apodisation filter, if it supports enough gray levels. And it can change its form programmatically.

Two possible digital diaphragm forms:
Circles

  • leverages existing display technology
  • better aperture shape for reduced artefacts
  • apodisation filter on demand, for best bokeh or faster light
  • programmable, or at least updateable, aperture patterns (sharp/gaussian/linear/…)
  • no metal blades or other mechanical parts to fail
  • should over the years become cheaper than the mechanical counterpart
  • fewer glass-to-air surfaces in optical lens designs
  • the aperture can be integrated into lens groups
  • display transparency is increasing quickly; for OLED it is at 45% in 2016, which at the moment means losing just one f-stop
  • mobile demands high display resolutions anyway

The digital aperture can easily be manufactured as a monochrome display and be placed, as is traditional, between two optical lens groups, where the diaphragm is located today. It is even possible to optically integrate the aperture into one lens group, without the additional glass-to-air surfaces that moving blades require. Once the optical quality of the filter display is good enough, a digital diaphragm can even be cheaper than a high quality mechanical counterpart.

Design the Kickstarter T-shirt!

The Kickstarter has been funded so we’ll be needing T-shirts! Here’s your chance to earn fame by designing the one we’ll send to our backers: Special June drawing challenge!

The topic is: FLOW — interpret it any way you like. If your design wins the poll, it will be printed on all the Kickstarter backer shirts and we’ll send you one, too.

The contest is open until June 24, 12:00 UTC. That’s almost three weeks!

Summer vacations – not the farmer’s fault!

Episode 1 in a series “Things that are the way they are because of constraints that no longer apply” (or: why we don’t change processes we have invested in that don’t make sense any more)

I posted a brief description of the Five Monkey experiment a few days ago, as an introduction to a series someone suggested to me as I was telling stories of how certain things came about. One of the stories was about school Summer vacation. Many educators these days feel that school holidays are too long, and that kids lose knowledge due to atrophy during the Summer months – the phenomenon even has a name. And yet attempts to restructure the school year are strongly resisted, because of the amount of investment we have as a society in the school rhythms. But why do US schools have 10–12 weeks of Summer vacation at all?

The story I had heard is that the Summer holiday is as long as it is because, at the origins of the modern education system, in a more agrarian society, kids were needed on the farm during the harvest and could not attend school. I do like to be accurate when talking about history, and so I went reading, and it turns out that this explanation is mostly a myth – at least in the US. And, as a farmer’s kid, that mostly makes sense to me. The harvest runs mostly from August through to the beginning of October, so starting school in September, one of the busiest farming months, does not make a ton of sense.

But there is a grain of truth to it – in the US in the 1800s, there were typically two different school rhythms, depending on whether you lived in town or in the country. In town, schools were open all year round, but many children did not go all of the time. In the country, schools were mainly in session during two periods – Winter and Summer. Spring, when crops are planted, and Autumn, when they are harvested, were the busy months, and schools were closed. The advent of compulsory schooling brought the need to standardise the school year, and so vacations were introduced in the cities, and restructured in the country, into what we see today. This was essentially a compromise, and the long Summer vacation was driven, as you might expect, by the growing middle class’s desire to take Summer holidays with their children, not the farming family’s desire to exploit child labour. It was also the hardest period of the year for children in cities, with no air conditioning to keep school classrooms cool during the hottest months of the year.

So, while there is a grain of truth (holidays were scheduled around the harvest initially), the main driver for long Summer holidays is the same as today – parents want holidays too. The absence of air conditioning in schools would have been a distant second.

This article is US centric, but I have also seen this subject debated in France, where the tourism industry has strongly opposed changes to the school year structure, and in Ireland, where we had 8-9 weeks vacation in primary school. So – not off to a very good start, then!

June 04, 2016

Walking your Goat at the Summer Concert

I love this place. We just got back from this week's free Friday concert at Ashley Pond. Not a great band this time (the previous two were both excellent). But that's okay -- it's still fun to sit on the grass on a summer evening and watch the swallows wheeling over the pond and the old folks dancing up near the stage and the little kids and dogs dashing pell-mell through the crowd, while Dave, dredging up his rock-star past, explains why this band's sound is so muddy (too many stacked effects pedals).

And then on the way out, I'm watching appreciatively as the teen group, who were earlier walking a slack line strung between two trees, has now switched to juggling clubs. (I know old people are supposed to complain about "kids today", but honestly, the kids here seem smart and fit and into all kinds of cool activities.) One of the jugglers has just thrown three clubs and a ball, and is mostly keeping them all in the air, when I hear a bleat to my right -- it's a girl walking by with a goat on a leash.

Just another ordinary Friday evening in Los Alamos.

June 03, 2016

Anatomy of a bug fix

Updated builds with the fix are here: https://www.kickstarter.com/projects/krita/krita-2016-lets-make-text-and-vector-art-awesome/posts/1594853!

People sometimes assume that free software developers are only interested in adding cool new features, getting their few pixels of screenspace fame, and don’t care about fixing bugs. That’s not true – otherwise we wouldn’t have fixed about a thousand bugs in the past year (though it would be better if we hadn’t created the bugs in the first place). But sometimes bug fixing is just fun: sherlocking through the code, trying to come up with a mental model of what might be going wrong, hacking the code, discovering that you were right. Heady stuff, everyone should try it some time! Just head over to bugzilla and pick yourself a crash (crash bugs are among the easiest to fix).

But let’s take a look at a particularly nasty bug, one that we couldn’t fix for ages. Ever since Krita 2.9.6, we have received crash reports about Krita crashing when people were using drawing tablets. Not just any drawing tablets, but obscure tablets with names like Trust, Peritab, Adesso, Waltop, Aiptek, Genius — and others. Not the tablets that we do support because the companies have donated test hardware to Krita, like Wacom, Yiynova and Huion.

Also, these are not tablets that are readily available: most of these brands only produce hardware for a short time, flog it to unsuspecting punters and disappear. I.e., we couldn’t just go to the local computer shop and get one, or find one online and have it delivered. And since all these tablets have one thing in common, namely their cheapness, the users who bought them are in all likelihood not all that flush, otherwise they would have bought a better tablet. So they couldn’t afford to donate their tablet to the project.

A hardware related bug without hardware to test with, that’s nearly impossible to fix. We had four “facts” to start with:

  • The bug started appearing after Krita 2.9.6 — unfortunately, that was when we rewrote a lot of the tablet support to allow Krita to work with tablets like the Surface Pro, and it was impossible to pinpoint which change was responsible for the crash.
  • All these tablets show the same suspicious values when we were querying them for dimensions
  • All these crashes happened after that query for the tablet dimensions
  • All crashes happen on Windows only

Now, on Windows, you talk to tablets through something called the “Wintab” API. The tablet manufacturer, or more likely the manufacturer of the chip that the tablet manufacturer uses, writes an implementation of this API in the form of a Wintab driver.

Wintab is ancient: it started out in the 16-bit Windows 3.0 days. It’s gnarly, it’s illogical, it’s hoary. You can only have one wintab driver dll on your system, which means that you cannot, like on Linux, plug in a Huion, test, plug in a Wacom, test, plug in a Yiynova and test — you need to install and uninstall the driver every time.

Anyway, last week we found a second-hand Trust tablet for sale. Since we’ve had at least six reports of crashes with just that particular brand, we got it. We installed a fresh Windows 10, installed the driver Trust fortunately still provides despite having discontinued its tablets, installed Krita, started Krita, brought pen to tablet and… nothing happened. No crash, and Krita painted a shoddy, shaky, pressure sensitive line.

Dash it, 30 euros down the drain.

Next, we got an old Genius tablet and installed Windows 7. And bingo! A crash, and the same suspicious values in the tablet log. Now we’re talking! Unfortunately, the crash happened right inside the “Genius” wintab driver. Either we’re using the Wintab API wrong, or Genius implemented it wrong, but we cannot see their code. This is what Dmitry was looking at:

Gibberish…

But it gave the hint we needed. It is a bug in the Wintab driver, and we are guessing that since all these drivers give us the same weird context information, they all share the same codebase, come from the same manufacturer in fact, and have the same bug.

It turned out that when we added support for the Surface Pro 3, which has an N-Trig pen, we needed a couple of workarounds for its weirder features. We wrote code that would query the wintab dll for the name of the tablet, and if that was an N-Trig, we set the workaround flag:

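// Ask the wintab driver how long the device name is, by passing
// a null pointer instead of an output buffer.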
UINT nameLength = m_winTab32DLL.wTInfo(WTI_DEVICES, DVC_NAME, 0);
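// Set apart a buffer of that size, then call wTInfo again to receive the name.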
TCHAR* dvcName = new TCHAR[nameLength + 1];
UINT returnLength = m_winTab32DLL.wTInfo(WTI_DEVICES, DVC_NAME, dvcName);
Q_ASSERT(nameLength == returnLength);
QString qDvcName = QString::fromWCharArray((const wchar_t*)dvcName);
// Name changed between older and newer Surface Pro 3 drivers
if (qDvcName == QString::fromLatin1("N-trig DuoSense device") ||
            qDvcName == QString::fromLatin1("Microsoft device")) {
    isSurfacePro3 = true;
}
delete[] dvcName;

Now follow me closely: the first call gets some info (wTInfo) from the wintab driver. It has three parameters: the first says we want info about devices, the second says we want a name, and the third one is 0. That is, zero. Null.  The second call is exactly the same, but passes something called dvcName, that is, a pointer to a bit of memory where the wintab driver will write the name of the device. It’s a number, significantly bigger than 0.  The Wintab API says that if you pass 0 (null) as the third parameter, the driver should return the length of the name it would otherwise have written there. Follow me? If you ask for the name passing 0 instead of a buffer, it tells you the length; if you then ask again with a buffer of the right length, it gives you the name.

See for yourself: http://www.wacomeng.com/windows/docs/Wintab_v140.htm#_Toc275759816

You have to go through this hoop to set apart a chunk of memory big enough for Wintab to copy the tablet name into. Too short, and you get a crash: that’s what happens when you write out of bounds. Too long, and you waste space; and besides, how can you know in advance how long a tablet name could be?

Okay, there’s one other way to crash, other than writing too much stuff into too small a chunk of memory. And that’s trying to write to Very Special Memory Address 0. That’s zero, the first location in the memory of your computer. In fact, writing to location 0 (zero) is so extremely forbidden that programmers use it as a flag meaning “don’t write here”. A competent programmer will always check for 0 (zero) before writing to memory.

If you’re still here, I’m sure you’re getting suspicious now.

Yes, you’re right. The people who wrote the driver for the tablets that Trust, Genius, Adesso, Peritab, Aiptek and all their ilk repackaged, rebranded and resold were not competent. They did not check for zero; they blithely started writing the name of the tablet into the address provided.

And poof! Krita crashes, we get the bug reports — because after all, it must be Krita’s fault? The tablet works with Photoshop! Whereas it’s entirely likely that the people who cobbled together the driver didn’t even read the Wintab spec, but just fiddled with their driver until Photoshop more or less worked, before they called it a day and went to drown their sorrows in baijiu.

Enfin, we have now “fixed” the bug — we provide 1024 characters of space for the driver to write the name of the tablet into, and hope for the best…

Note that this doesn’t mean that your Trust, Genius or whatever tablet will work well and give satisfaction: these are still seriously badly put together products. After fixing the bug, we tried drawing with the Genius tablet and got weird, shaky lines. Then we tested with Photoshop, and after a while, saw the same weird shaky lines there. It almost looks as if the tablet driver developers didn’t really care about their product and just returned some randomly rounded numbers for the pen position.


The five monkeys thought experiment

The (probably apocryphal) five monkeys experiment goes like this:

Five monkeys are placed in a cage. There is a lever, which, if pulled, delivers food. The monkeys soon learn how it works, and regularly pull the level.

One day, when the lever is pulled, food is still delivered to the puller, but all the monkeys in the cage get an ice-cold shower for a period of time. The monkeys quickly learn the correlation between the lever and the cold shower, and stop any monkey from getting to the lever.

After a while, one of the monkeys is removed and replaced by a new monkey. Out of curiosity, the new monkey tries to pull the lever, and is beaten into submission by the other monkeys. Progressively, more of the original five monkeys are removed and replaced with new monkeys, and they all learn the social rule – if you try to pull the lever, the group will stop you.

Eventually, all of the original monkeys are gone. At this point, you can turn off the shower, secure in the knowledge that none of the monkeys will pull the lever, without ever knowing what will happen if they do.

A funny anecdote, right? A lesson for anyone who ever thinks “because that’s the way it has always been”.

And yet, there are a significant number of things in modern society that are the way they are because at one point in time there was some constraint that applied, which no longer applies in the world of air travel and computers. I got thinking about this because of the electoral college and the constitutional delays between the November election and the January inauguration of a new president – a system that exists to get around the logistical constraints of having to travel long distances on horseback. But that is far from the only example.

I hope to write a series, covering each of the examples I have found, and hopefully uncovering others along the way, and the electoral college will be one of them. First up, though, will be the Summer school vacation.


June 01, 2016

A little while back I had attempted to document a shoot with my friend and model, Mairi. In particular I wanted to capture a start-to-finish workflow for processing a portrait using free software. There are often many tutorials for individual portions of a retouching process but rarely do they get seen in the context of a full workflow.

The results became a two-part post on my blog. For posterity (as well as for those who may have missed it the first time around) I am republishing the second part of the tutorial Postprocessing here.

Though the post was originally published in 2013 the process it describes is still quite current (and mostly still my same personal workflow). This tutorial covers the retouching in post while the original article about setting up and conducting the shoot is still over on my personal blog.

Mairi Portrait Final The finished result from the tutorial.
by Pat David (cba).

The tutorial may read a little long but the process is relatively quick once it’s been done a few times. Hopefully it proves to be helpful to others as a workflow to use or tweak for their own process!

Coming Soon

I am still working on getting some sample shots to demonstrate the previously mentioned noise free shadows idea using dual exposures. I just need to find some sample shots that will be instructive while still at least being something nice to look at…

Also, another guest post is coming down the pipes from the creator of PhotoFlow, Andrea Ferrero! He’ll be talking about creating blended panorama images using Hugin and PhotoFlow. Judging by the results on his sample image, this will be a fun tutorial to look out for!

May 31, 2016

Krita 3.0 Released

Today the Krita team releases Krita 3.0, the Animation Release. Wrapping up a year of work, this is a really big release: animation support integrated into Krita’s core, Instant Preview for better performance painting and drawing with big brushes on big canvases, ported to the latest version of the Qt platform and too many bigger and smaller new features and improvements to mention!

krita-3.0

Many of the new features were funded by the 2015 Kickstarter campaign. A big thank-you to all our backers! The remaining stretch goals will be released with Krita 3.1, later this year. And don’t forget that we’ve still got seven days in the current kickstarter campaign. We’re nearly funded, so there should still be time to reach some stretch goals this year, too!

The full list of improvements is too long for a release announcement. Please check out the extensive release notes we prepared!

Note: Krita 3.0 loads and saves its configuration and resources in a different place than 2.9, so it’s possible to use both versions together without conflicts. Here is a tutorial on migrating resources.

Downloads

Windows

On Windows, Krita supports Wacom, Huion and Yiynova tablets, as well as the Surface Pro series of tablets. Trust, Bosto, Genius, Peritab and similar tablets are not supported at this moment because we lack testing hardware that allows us to reproduce reported bugs.

The portable zip file builds can be unzipped and run by double-clicking the krita link. If you want to use the installer builds, please uninstall Krita 2.9 first.

For Windows users, Alvin Wong has created a shell extension that allows you to preview krita images in Windows Explorer. You can install it separately, but it is also included in the setup installers.

If your virus scanner or other security software complains please verify the sha1 checksum noted below: if the checksum checks out, the files are safe.

Krita on Windows is tested on Windows 7, Windows 8 and Windows 10.

Linux

For Linux, we offer AppImages that should run on any reasonably recent Linux distribution. For Ubuntu 12.04 and CentOS 6.x you need the appimage that is built without support for OpenMP. We are working on updating the Krita Lime repository: for now, you can use that to install the krita3-testing build. Helio Castro is packaging Krita for Redhat/CentOS/Fedora.

You can download the appimage, make it executable and run it in place. No installation is needed. At this moment, we only have appimages for 64 bits versions of Linux.

You can also get Krita from Ubuntu’s App Store in snap format, thanks to Michael Hall’s help. Note that you cannot use the snap version of Krita with the NVidia proprietary driver, due to a limitation in Ubuntu and that there are no translations yet.

OSX

Krita on OSX will be fully supported with version 3.1. Krita 3.0 for OSX is still missing Instant Preview and High Quality Canvas scaling. There are also some issues with rendering the image — these issues follow from Apple’s decision to drop support for the OpenGL 3.0 compatibility profile in their display drivers. We are working to reimplement these features using OpenGL 3.0 Core profile. For now, we recommend disabling OpenGL when using Krita on OSX for production work. Krita for OSX is tested on 10.9 and 10.11 since we do not have access to other versions of OSX.

Source

A source archive is available for distributions wishing to package Krita 3.0. If you’re a curious user, it is recommended to build Krita directly from the git repository instead, so you get all fixes daily fresh. See David Revoy’s guide for an introduction to building Krita. If you build Krita from source and your version of Qt is lower than Qt 5.6.1, it is necessary to also rebuild Qt using the patches in krita/3rdparty/ext_qt.

May 30, 2016

designing interaction for creative pros /4

This is the fourth and final part of my LGM 2015 lecture. Part one urged you to make a clear choice: is the software you’re making for creative professionals, or not? Part two was all about the rally car and the need for speed. Part three showed the need to support the free and measured working modes of masters.

Today’s topic is how to be good in creative‐pro interaction. We start by revisiting the cars of part two.

party like it’s…

It is no coincidence that I showed you a 1991 family car and a 1991 rally car:

pics of the two cars source: netcarshow.com and imgbuddy.com

I did that because our world—that of software for creative pros—is largely stuck in that era. And just like 1991 king‐of‑the‑hill cars (even factory‐mint examples found in a time capsule), this software is no longer competitive.

a pair of yellow, y-front underpants yes, it’s pants! source: charliepants.com

It is my observation that in this field there is an abundance of opportunities to do better. If one just starts scratching the surface, on a product, workflow, or interaction level, then today’s software simply starts to crumble.

testing, testing, one two

For instance, while doing research for the Metapolator project, I asked some users to show me how they work with the font design tools of today. They showed me the glyph range, the central place to organise their work and get started:

a font editor's table view of all the glyphs in a font

They also showed me the curve editor, where the detailed work on each glyph is done:

a big window with a glyph outline editor

Both of them need the whole screen. In a short amount of time I saw a lot of switching between the two of them. I sensed wasted time and broken flows. I also saw tiny, slow handles in the editor. And I thought: this cannot be it.

They also showed me, in another program, designing in context:

editing a glyph in the context of a few others

I immediately sensed this was a big deal. I saw that they had pushed the envelope—however, not broken through to the other side.

Besides that, I observed that editing was done in outline mode (see the ‘y’, above), but evaluation is of solid black glyphs. Again I sensed broken flows, because of switching between making and evaluating. And I thought: this cannot be it.

Frank sez…

Enough of that; let’s zoom out from my field report, to the big issue at hand. To paraphrase Zappa:

‘How it has always been’ may not be quite dead, but it sure smells funny.

The question is: how did we get to this situation? Let me dig through my experience to find some of the causes.

First of all we can observe that each piece of creative‐pro software is a vertical market product; i.e. it is not used by the general population; only by certain masters. That means we are in armpit of usability territory. Rereading that blog post, I see I was already knee‐deep into this topic: ‘its development is driven by reacting to what users ask for (features!) and fear of changing “like it has always been” through innovation.’

go on, have another cake

The mechanism is clear: users and software makers, living in very different worlds, have a real hard time communicating. Where they manage, they are having the wrong conversation: ‘gimme more features!’ —‘OK, if that makes you happy.’

What is happening today is that users are discussing software made yesterday. They are not able to communicate that their needs are so lousily addressed. Instead, they want some more cherries on top and this cements the position of this outdated software.

Constantly, users are telling software makers, implicitly and explicitly, ‘add plenty of candy, but don’t change a thing.’

This has been going on for decades—lost decades.

bond of pain

A second cause that I want to highlight is that both users and software makers have worked for years to get on the inside and it has been a really painful experience for all of them. This unites them against change.

Thus users have been fighting several frustrating years to get ‘into’ software that was not designed (for them; armpit of usability, remember), but instead made on terms favourable to the software makers.

Software makers spent year after year trying to make something useful. Lacking any form of user research, the whole process has been an exasperating stab‐in‐the‐dark marathon.

Thus a variant of the Stockholm syndrome spooks both parties. They are scarred‐for‐life victims of the general dynamic of the pro‑software industry. But now that they have gotten this far, their instinct is to sustain it.

the point

Two decades of experience show that there is a way out of this misery: to become competitive (again). There is no incremental way to get there; you’ll have to snap out of it. What is called for is innovation—of your product, workflow, your interaction. A way that unlocks results is:

  1. user research
    Experienced researchers cut straight past the wants and get the user needs on the table. (Obligatory health & safety notice: market research has nothing to do with user research; it is not even a little bit useful in this context.)
  2. design‐driven innovation
    When user needs are clear (see point 1), then a designer can tell you any minute of the project—first to last—what part of ‘how it has always been’ is begging to be replaced, and which part is the solid foundation to build upon. Designer careers are built on getting this right, every time.

Skip either point—or do it only in a superficial or non-consequential way—and I’ll guarantee you’ll stay stuck in 1991. Making it happen requires action:

Software‐makers: enthusiastically seek out user researchers and designers and start to sail by them. Stop considering adding features a good thing, stop being a captive of ‘how it has always been’ and trust the accomplished.

picture show

To illustrate all this, let’s look at some of my designs for Metapolator. To be able to solve these problems of contemporary font design tools that I mentioned above, I had to snap out of the old way.

First of all, I pushed designing in context a lot further, by introducing in‑specimen editing:

a pangram type specimen is displayed in a window

Every glyph you see above is directly editable, eliminating switching between overview and editing. The size that the glyphs are displayed at can be adjusted at any given moment, whatever suits the evaluate/edit balance.

‘OK, that’s great’ you say, ‘but every once in a while one needs a glyph range to do some gardening.’ To address that, I used a handy trick: the glyph range is just another specimen:

the glyph range organised as a specimen

Everybody in the Metapolator team thought I was crazy, but I was dead set on eliminating outline mode. I sensed there was a chance to do that here, because the focus moves from working at the edge of the glyph—the high‐contrast black–white transition—to the center line within:

center line displayed within a full-black glyph, the points     on it connected to large handles outside the glyph area

Then there was the matter of offering users generous handles that are fast to grab and use. After brainstorming with Simon Egli, the design shown above was born: put them ‘on sticks’ outside, so that they do not impede visual evaluation of the glyph.

pep talk

In closing: to be good in creative‐pro interaction, I encourage you to—

Do not ask how the past can guide you. Ask yourself what you can do to guide your software for creative pros into the 21st century.

May 27, 2016

You know what they say: Big hands, small horse.

CSS Text Line Spacing Exposed!

Want evenly spaced lines of text like when writing on the lined paper we all used as kids? Should be easy. Turns out with CSS it is not. This post will show why. It is the result of too much time reading specs and making tests as I worked on Inkscape’s multi-line text.

The first thing to understand is that CSS text works by filling line boxes with glyphs and then stacking the boxes, much as is done in printing with movable type.

Four lines of movable type placed in a composing stick over a box of movable type.

Movable type placed in a composing stick. The image has been flipped horizontally so the glyphs are legible. (Modified from photo by Willi Heidelbach [CC BY 2.5], via Wikimedia Commons)

A line of CSS text is composed of a series of glyphs. It corresponds to a row of movable type where each glyph represents (mostly) a piece of type. The CSS ‘font-size’ property corresponds to the height of the type. A CSS line box contains a line of CSS text plus any leading (extra space) above and below the line.

Four lines of text mimicking the above figure.

The same four lines of text as in the previous figure. The CSS line boxes are shown by red rectangles. The line boxes are stacked without any leading between the lines.

The lines in the above figure are set tight, without any spacing between the lines. This makes the text hard to read. It is normal in typesetting to add a bit of leading between lines to give the lines a small amount of separation. This can be done with CSS through the ‘line-height’ property. A typical value of the ‘line-height’ property would be ‘1.2’ which means in the simplest terms to make the distance between the baselines of the text be 1.2 times the font size. CSS dictates that the extra space be split, half above the line, half below the line. The following example uses a ‘line-height’ value of 1.5 (to make the figure clearer).

Same four lines of text as in above figure but with leading added between lines.

The same four lines of text as in the previous figure but with leading added by a ‘line-height’ value of 1.5. The distance between the baselines (light-blue lines) is 1.5 times the font size. (Line boxes without leading are shown in green, line boxes with leading in red.)

Unlike with physical type faces, lines can be moved closer together than the height of the glyph boxes by using a ‘line-height’ value less than one. Normally you would not want to do this.

Same four lines of text as in above figure but with negative leading.

The same four lines of text as in the previous figure but with negative leading generated by a ‘line-height’ value of 0.8.

When only one font is used (same family and size), the distance between the baselines is consistent and easy to predict. But with multiple fonts it becomes a bit of a challenge. To understand the inner workings of ‘line-height’ we first need to get back to basics.

Glyphs are designed inside an em box. The ‘font-size’ property scales the em box so when rendered the height of the em box matches the font size. For most scripts, the em box is divided into two parts by a baseline. The ascent measures the distance between the baseline and the top of the box while the descent measures the distance between the baseline and the bottom of the box.

Diagram of 'em' box showing ascent and descent.

The coordinate system for defining glyphs is based on the “em box” (blue square). The origin of the coordinate system for Latin based glyphs is at the baseline on the left side of the box. The baseline divides the em box into two parts.

The distinction between ‘ascent’ and ‘descent’ is important, as the height of the CSS line box is calculated by independently finding the maximum ascent and the maximum descent of all the glyphs in a line of text and then adding the two values. The ratio between ascent and descent is a font design issue and will be different for different font families. Mixing font families may then lead to a line box height greater than that for a single font family.

Two 'M' glyphs from different fonts aligned to their alphabetic baseline.

Two ‘M’ glyphs with the same font size but from different font families (DejaVu Sans and Scheherazade). Their glyph boxes (blue rectangles) have the same height (equal to the em box or font size) but the boxes are shifted vertically so that their baselines are aligned. The resulting line box (dashed red rectangle), assuming a ‘line-height’ value of ‘1’, has a height that is greater than if just one font was used.

Keeping the same font family but mixing different sizes can also give results that are a bit unexpected.

Two text blocks.

Left: Text with a font size of 25 pixels and with a ‘line-height’ value of ‘2’. Right: Same as left but font size of 50px for middle line. Notice how the line boxes (red dashed rectangles) are lined up on a grid but that the baselines (light-blue lines) on the right are not; the middle right line’s baseline is off the grid.

So far, we’ve discussed only ‘line-height’ values that are unitless. Both absolute (‘px’, ‘pt’, etc.) and relative (‘em’, ‘ex’, ‘%’) units are also allowed. The “computed” value of a unitless value is the unitless value itself, while the “computed” value of a value with units is the corresponding absolute value. The value actually “used” for determining line box height is, for a unitless value, the computed value multiplied by the font size, while for values with units it is the absolute value. For example, assuming a font size of 24px:

‘line-height: 1.5’
computed value: 1.5, used value: 36px;
‘line-height: 36px’
computed and used values: 36px;
‘line-height: 150%’
computed and used values: 36px;
‘line-height: 1.5em’
computed and used values: 36px.

The importance of this is that it is the computed value of ‘line-height’ that is inherited by child elements. This gives different results for values with units compared to those without as seen in the following figure:

Two text blocks.

Left: Text with a font size of 25 pixels and with a ‘line-height’ value of ‘2’. Right: Same as left but ‘line-height’ value of ‘2em’. With the unitless ‘line-height’ value, the child element (second line, span with larger font) inherits the value ‘2’. As the larger font has a size of 50px, the “used” value for ‘line-height’ is 100px (2 times 50px) thus the line box is 100px tall. With the ‘line-height’ value of ‘2em’, the computed value is 50px. This is inherited by the child element which is then used in calculating the line box height. CodePen.

The astute observer will notice that in the above example the line box height of the middle line on the right is not 50 pixels as one might naturally expect. It is actually a bit larger. Why? Recall that the line box height is calculated from the maximum ascent and maximum descent of all the glyphs. One small detail was left out. CSS dictates that an imaginary zero width glyph called the “strut” be included in the calculation. This strut represents a glyph in the containing block’s initial font and with the block’s initial font size and line height. This throws everything out of alignment as shown in the figure below.

'A' and 'D' glyphs aligned to a common baseline with the 'D' having twice the font size as the 'A'.

Let the ‘A’ represent the strut. The glyph boxes for the ‘A’ and ‘D’ without considering line height are shown by blue rectangles. The glyph boxes with line height taken into account are shown by red-dashed rectangles. For the ‘D’, the glyph boxes with and without taking into account the line height are the same. Note that both the ‘A’ and ‘D’ boxes with line height factored in have the same height (2em relative to the containing block font size). The two boxes are aligned using the ‘alphabetic’ baseline. This results in the ‘A’ glyph box (with the effect of line height) extending down past the bottom of the ‘D’ glyph box. The resulting line box (solid-pink rectangle) height is thus greater than either of the glyph box heights. The extra height is shown by the light gray rectangle.

So how can one keep line boxes on a regular grid? The solution is to rely on the strut! The way to do this is to make sure that the ascents and descents of all child elements are smaller than the containing block strut’s ascent and descent values. One can do this most easily by setting ‘line-height’ to zero in child elements, as the sketch after the following figure shows.

Two blocks of text, both showing evenly spaced lines. The third line on the right has text with a font size twice the rest of the lines.

Text with evenly spaced lines. Left: All text with the same font size. Right: The third line has text with double the font size but with a ‘line-height’ value of ‘0’. This ensures that the strut controls the spacing between lines. CodePen.
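
A minimal sketch of the trick in CSS (class names again made up): the containing block establishes the strut, and any larger child opts out of the line box calculation.

.grid        { font-size: 25px; line-height: 2; } /* strut gives 50px-tall line boxes */
.grid .large { font-size: 50px; line-height: 0; } /* ascent/descent shrink away; the strut wins */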

As one can see, positioning text on a regular grid can be done with a bit of effort. Does it have to be so difficult? There may be an easier solution on the horizon. The CSS working group is working on a “Line Grid” specification that may make this trivial.

May 25, 2016

Blog backlog, Post 3, DisplayLink-based USB3 graphics support for Fedora

Last year, after DisplayLink released the first version of the supporting tools for their USB3 chipsets, I tried it out on my Dell S2340T.

As I wanted a clean way to test new versions, I took Eric Nothen's RPMs and updated them as newer versions were released, automating the creation of 32- and 64-bit x86 versions.

The RPM contains 3 parts: evdi, a GPLv2 kernel module that creates a virtual display; the LGPL library to access it; and a proprietary service which comes with "firmware" files.

Eric's initial RPMs used the precompiled libevdi.so, and proprietary bits, compiling only the kernel module with dkms when needed. I changed this, compiling the library from the upstream repository, using the minimal amount of pre-compiled binaries.

This package supports quite a few OEM devices, but does not work correctly with Wayland, so you'll need to disable Wayland support in /etc/gdm/custom.conf if you want it to work at the login screen and to avoid having to restart the displaylink.service systemd service after logging in.
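
For reference, disabling Wayland support for GDM is a single key in /etc/gdm/custom.conf:

[daemon]
WaylandEnable=false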


 Plugged in via DisplayPort and USB (but I can only see one at a time)


The sources for the RPM are on GitHub. Simply clone the repository and run make to create 32-bit and 64-bit RPMs. The proprietary parts are redistributable, so if somebody wants to host and maintain those RPMs, I'd be glad to pass this on.

May 24, 2016

LVFS Technical White Paper

I spent a good chunk of today writing a technical whitepaper titled Introducing the Linux Vendor Firmware Service — I’d really appreciate any comments, either from people who have seen all progress from the start or who don’t know anything about it at all.

Typos or more general comments are all welcome, and once I’ve got something a bit more polished I’ll be sending this to some important suits in a few well-known companies. Thanks for any help!

Year of the Linux Desktop

As some of you already know, the xdg-app project is dead. The Swedish conspiracy members tell me it’s a good thing and that you should turn your attention to project Flatpak.

Flatpak aims to solve the painful problem of the Linux distribution — the fact that the OS is intertwined with the applications. It is a pain to decouple the two to be able to

  • Keep a particular version of an app around, regardless of OS updates. Or vice versa, be able to run an up-to-date application on an older OS.
  • Allow application authors to distribute binaries they built themselves. Binaries they can support and accept useful bug reports for. Binaries they can keep updated.

But enough of the useful info; you can read all about the project on the new website. Instead, here come the irrelevant tidbits that I find interesting to share myself. The new website has been built with Middleman, because that’s what I’ve been familiar with and what has worked for me in other projects.

It’s nice to have a static site that is maintainable and easy to update over time. Using something like Middleman allows you to do things like embedding an SVG inside a simple markdown page and animating it with CSS.

=partial "graph.svg"
:css
  @keyframes spin {
    0% { transform: rotateZ(0deg); }
    100% { transform: rotateZ(359deg); }
  }
  #cog {
    animation: spin 6s infinite normal linear forwards;
  }

See it in action.

The resulting page has the SVG embedded to allow text copy & pasting and page linking, while keeping the SVG as a separate asset allows easy edits in Inkscape.

What I found really refreshing is seeing so much outside involvement in the website despite never publicising it. Even while developing the site as my personal project I would get kind pull requests and bug reports on GitHub. Thanks to all the kind souls out there. While not forgetting about future-proofing our infrastructure, we should probably keep the barrier to entry low by making use of well-established infrastructure like GitHub.

Also, there is no Swedish conspiracy. Oh and Flatpak packages are almost ready to go for Fedora.

colour manipulation with the colour checker lut module

motivation

for raw photography there exist great presets for nice colour rendition:

  • in-camera colour processing such as canon picture styles
  • fuji film-emulation-like presets (provia velvia astia classic-chrome)
  • pat david's film emulation luts

unfortunately these are eat-it-or-die canned styles or icc lut profiles. you
have to apply them and be happy or tweak them with other tools. but can we
extract meaning from these presets? can we have understandable and tweakable
styles like these?

in a first attempt, i used a non-linear optimiser to control the parameters of
the modules in darktable's processing pipeline and try to match the output of
such styles. while this worked reasonably well for some of pat's film luts, it
failed completely on canon's picture styles. it was very hard to reproduce
generic colour-mapping styles in darktable without parametric blending.

that is, we require a generic colour to colour mapping function. this should be
equally powerful as colour look up tables, but enable us to inspect it and
change small aspects of it (for instance only the way blue tones are treated).

overview

in git master, there is a new module to implement generic colour mappings: the
colour checker lut module (lut: look up table). the following will be a
description how it works internally, how you can use it, and what this is good
for.

in short, it is a colour lut that remains understandable and editable. that is,
it is not a black-box look up table, but you get to see what it actually does
and change the bits that you don't like about it.

the main use cases are precise control over source colour to target colour
mapping, as well as matching in-camera styles that process raws to jpg in a
certain way to achieve a particular look. an example of this are the fuji film
emulation modes. to this end, we will fit a colour checker lut to achieve their
colour rendition, as well as a tone curve to achieve the tonal contrast.

target

to create the colour lut, it is currently necessary to take a picture of an
it8 target (well, technically we support any similar target, but
didn't try them yet so i won't really comment on it). this gives us a raw
picture with colour values for a few colour patches, as well as an in-camera jpg
reference (in the raw thumbnail..), and measured reference values (what we know
it should look like).

to map all the other colours (that fell in between the patches on the chart) to
meaningful output colours, too, we will need to interpolate this measured
mapping.

theory

we want to express a smooth mapping from input colours \(\mathbf{s}\) to target
colours \(\mathbf{t}\), defined by a couple of sample points (which will in our
case be the 288 patches of an it8 chart).

the following is a quick summary of what we implemented and much better
described in JP's siggraph course [0].

radial basis functions

radial basis functions are a means of interpolating between sample points
via

$$f(x) = \sum_i c_i\cdot\phi(\| x - s_i\|),$$

with some appropriate kernel \(\phi(r)\) (we'll get to that later) and a set of
coefficients \(c_i\) chosen to make the mapping \(f(x)\) behave like we want it at
and in between the source colour positions \(s_i\). now to make
sure the function actually passes through the target colours, i.e. \(f(s_i) =
t_i\), we need to solve a linear system. because we want the function to take
on a simple form for simple problems, we also add a polynomial part to it. this
makes sure that black and white profiles turn out to be black and white and
don't oscillate around zero saturation colours wildly. the system is

$$ \left(\begin{array}{cc}A &P\\P^t & 0\end{array}\right)
\cdot \left(\begin{array}{c}\mathbf{c}\\\mathbf{d}\end{array}\right) =
\left(\begin{array}{c}\mathbf{t}\\0\end{array}\right)$$

where

$$A=\left(\begin{array}{ccc}
\phi(r_{00})& \phi(r_{10})& \cdots \\
\phi(r_{01})& \phi(r_{11})& \cdots \\
\phi(r_{02})& \phi(r_{12})& \cdots \\
\cdots & & \cdots
\end{array}\right),$$

and \(r_{ij} = \| s_i - s_j \|\) is the distance (CIE 76 \(\Delta E\),
\(\sqrt{(L_i - L_j)^2 + (a_i - a_j)^2 + (b_i - b_j)^2}\)) between
source colours \(s_i\) and \(s_j\), in our case

$$P=\left(\begin{array}{cccc}
L_{s_0}& a_{s_0}& b_{s_0}& 1\\
L_{s_1}& a_{s_1}& b_{s_1}& 1\\
\cdots
\end{array}\right)$$

is the polynomial part, and \(\mathbf{d}\) are the coefficients to the polynomial
part. these are here so we can for instance easily reproduce \(t = s\) by setting
\(\mathbf{d} = (1, 1, 1, 0)\) in the respective row. we will need to solve this
system for the coefficients \(\mathbf{c}=(c_0,c_1,\cdots)^t\) and \(\mathbf{d}\).

many options will do the trick and solve the system here. we use singular value
decomposition in our implementation. one advantage is that it is robust against
singular matrices as input (accidentally map the same source colour to
different target colours for instance).

thin plate splines

we didn't yet define the radial basis function kernel. it turns out so-called
thin plate splines have very good behaviour in terms of low oscillation/low curvature
of the resulting function. the associated kernel is

$$\phi(r) = r^2 \log r.$$
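
as a concrete illustration, here is a minimal c sketch of the evaluation step, once the coefficients \(\mathbf{c}\) and \(\mathbf{d}\) have been solved for. the function and variable names are made up for this post and are not darktable's actual code:

#include <math.h>

/* thin plate spline kernel phi(r) = r^2 log r, with phi(0) = 0 by continuity */
static float kernel(float r)
{
  return r > 0.0f ? r * r * logf(r) : 0.0f;
}

/* evaluate f(x) = sum_i c_i * phi(||x - s_i||) + dL*L + da*a + db*b + d1
 * for one output channel. x and the source colours s are Lab triplets. */
static float eval_map(const float *x,  /* input colour (L, a, b) */
                      const float *s,  /* source colours, n * 3 values */
                      const float *c,  /* rbf coefficients, n values */
                      const float *d,  /* polynomial coefficients, 4 values */
                      int n)
{
  float out = d[0] * x[0] + d[1] * x[1] + d[2] * x[2] + d[3];
  for (int i = 0; i < n; i++)
  {
    const float dL = x[0] - s[3 * i + 0];
    const float da = x[1] - s[3 * i + 1];
    const float db = x[2] - s[3 * i + 2];
    out += c[i] * kernel(sqrtf(dL * dL + da * da + db * db));
  }
  return out; /* run once per output channel with its own c and d */
}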

note that there is a similar functionality in gimp as a gegl colour mapping
operation (which i believe is using a shepard-interpolation-like scheme).

creating a sparse solution

we will feed this system with the 288 patches of an it8 colour chart. that means,
with the added four polynomial coefficients, we have a total of 292
coefficients to manage here. apart from performance issues when
executing the interpolation, we didn't want that to show up in the gui like
this, so we were looking to reduce this number without introducing large error.

indeed this is possible, and the literature provides a nice algorithm to do so,
called orthogonal matching pursuit [1].

this algorithm will select the most important handful of coefficients \(\in
\mathbf{c},\mathbf{d}\) to keep the overall error low. in practice we run it up
to a predefined number of patches (\(24=6\times 4\) or \(49=7\times 7\)), to make
best use of gui real estate.

the colour checker lut module

clut-iop

gui elements

when you select the module in darkroom mode, it should look something like the
image above (configurations with more than 24 patches are shown in a 7\(\times\)7 grid
instead). by default, it will load the 24 patches of a colour checker classic
and initialise the mapping to identity (no change to the image).

  • the grid shows a list of coloured patches. the colours of the patches are
    the source points \(\mathbf{s}\).
  • the target colour \(t_i\) of the selected patch \(i\) is shown as
    offset controlled by sliders in the ui under the grid of patches.
  • an outline is drawn around patches that have been altered, i.e. the source
    and target colours differ.
  • the selected patch is marked with a white square, and the number shows
    in the combo box below.

interaction

to interact with the colour mapping, you can change both source and target
colours. the main use case is to change the target colours however, and start
with an appropriate palette (see the presets menu, or download a style
somewhere).

  • you can change lightness (L), green-red (a), blue-yellow (b), or saturation
    (C) of the target colour via sliders.
  • select a patch by left clicking on it, or using the combo box, or using the
    colour picker
  • to change source colour, select a new colour from your image by using the
    colour picker, and shift-left-click on the patch you want to replace.
  • to reset a patch, double-click it.
  • right-click a patch to delete it.
  • shift-left-click on empty space to add a new patch (with the currently
    picked colour as source colour).

example use cases

example 1: dodging and burning with the skin tones preset

to process the following image i took of pat in the overground, i started with
the skin tones preset in the colour checker module (right click on nothing in
the gui or click on the icon with the three horizontal lines in the header and
select the preset).

then, i used the colour picker (little icon to the right of the patch# combo
box) to select two skin tones: very bright highlights and dark shadow tones.
for the former i dragged the brightness down a bit, and the latter i brightened
up a bit via the lightness (L) slider. this is the result:

original / dialed down contrast in skin tones

example 2: skin tones and eyes

in this image, i started with the fuji classic chrome-like style (see below for
a download link), to achieve the subdued look in the skin tones. then, i
picked the iris colour and saturated this tone via the saturation slider.

as a side note, the flash didn't fire in this image (iso 800) so i needed to
stop it up by 2.5ev and the rest is all natural lighting..

original

+2.5ev classic chrome / saturated eyes

use darktable-chart to create a style

as a starting point, i matched a colour checker lut interpolation function to
the in-camera processing of fuji cameras. these have the names of old film and
generally do a good job at creating pleasant colours. this was done using the
darktable-chart utility, by matching raw colours to the jpg output (both in
Lab space in the darktable pipeline).

here is the link to the fuji styles, and how to use them.
i should be doing pat's film emulation presets with this, too, and maybe
styles from other cameras (canon picture styles?). darktable-chart will
output a dtstyle file, with the mapping split into tone curve and colour
checker module. this allows us to tweak the contrast (tone curve) in isolation
from the colours (lut module).

these styles were created with the X100T model, and reportedly they work so-so
with different camera models. the idea is to create a Lab-space mapping which
is well configured for all cameras. but apparently there may be sufficient
differences between the output of different cameras after applying their colour
matrices (after all these matrices are just an approximation of the real camera
to XYZ mapping).

so if you're really after maximum precision, you may have to create the styles
yourself for your camera model. here's how:

step-by-step tutorial to match the in-camera jpg engine

note that this is essentially similar to pascal's colormatch script, but will result in an editable style for darktable instead of a fixed icc lut.

  • need an it8 (sorry, could lift that, maybe, similar to what we do for
    basecurve fitting)
  • shoot the chart with your camera:
    • shoot raw + jpg
    • avoid glare and shadow and extreme angles, potentially the rims of your
      image altogether
    • shoot a lot of exposures, try to match L=92 for G00 (or look that up in
      your it8 description)
  • develop the images in darktable:
    • lens and vignetting correction needed on both or on neither of raw + jpg
    • (i calibrated for vignetting, see lensfun)
    • output colour space to Lab (set the secret option in darktablerc:
      allow_lab_output=true)
    • standard input matrix and camera white balance for the raw, srgb for jpg.
    • no gamut clipping, no basecurve, no anything else.
    • maybe do perspective correction and crop the chart
    • export as float pfm
  • darktable-chart
    • load the pfm for the raw image and the jpg target in the second tab
    • drag the corners to make the mask match the patches in the image
    • maybe adjust the security margin using the slider in the top right, to
      avoid stray colours being blurred into the patch readout
    • you need to select the gray ramp in the combo box (not auto-detected)
    • export csv

darktable-lut-tool-crop-01 darktable-lut-tool-crop-02

darktable-lut-tool-crop-03 darktable-lut-tool-crop-04

edit the csv in a text editor and manually add two fixed fake patches HDR00
and HDR01:

name;fuji classic chrome-like
description;fuji classic chrome-like colorchecker
num_gray;24
patch;L_source;a_source;b_source;L_reference;a_reference;b_reference
A01;22.22;13.18;0.61;21.65;17.48;3.62
A02;23.00;24.16;4.18;26.92;32.39;11.96
...
HDR00;100;0;0;100;0;0
HDR01;200;0;0;200;0;0
...

this is to make sure we can process high-dynamic range images and not destroy
the bright spots with the lut. this is needed since the it8 does not deliver
any information out of the reflective gamut and for very bright input. to fix
wide gamut input, it may be needed to enable gamut clipping in the input colour
profile module when applying the resulting style to an image with highly
saturated colours. darktable-chart does that automatically in the style it
writes.

  • fix up style description in csv if you want
  • run darktable-chart --csv
  • outputs a .dtstyle with everything properly switched off, and two modules
    on: colour checker + tonecurve in Lab

fitting error

when processing the list of colour pairs into a set of coefficients for the
thin plate spline, the program will output the approximation error, indicated
by average and maximum CIE 76 \(\Delta E\) for the input patches (the it8 in the
examples here). of course we don't know anything about colours which aren't
represented in the patch. the hope would be that the sampling is dense enough
for all intents and purposes (but nothing is holding us back from using a
target with even more patches).

for the fuji styles, these errors are typically in the range of mean \(\Delta E \approx 2\)
and max \(\Delta E \approx 10\) for 24 patches, and a bit less for 49.
unfortunately the error does not decrease very fast in the number of patches
(and will of course drop to zero when using all the patches of the input chart).

provia 24:         rank 28/24  avg DE 2.42189   max DE 7.57084
provia 49:         rank 53/49  avg DE 1.44376   max DE 5.39751

astia 24:          rank 27/24  avg DE 2.12006   max DE 10.0213
astia 49:          rank 52/49  avg DE 1.34278   max DE 7.05165

velvia 24:         rank 27/24  avg DE 2.87005   max DE 16.7967
velvia 49:         rank 53/49  avg DE 1.62934   max DE 6.84697

classic chrome 24: rank 28/24  avg DE 1.99688   max DE 8.76036
classic chrome 49: rank 53/49  avg DE 1.13703   max DE 6.3298

mono 24:           rank 27/24  avg DE 0.547846  max DE 3.42563
mono 49:           rank 52/49  avg DE 0.339011  max DE 2.08548

future work

it is possible to match the reference values of the it8 instead of a reference
jpg output, to calibrate the camera more precisely than the colour matrix
would.

  • there is a button for this in the darktable-chart tool
  • needs careful shooting, to match brightness of reference value closely.
  • at this point it's not clear to me how white balance should best be handled here.
  • need reference reflectances of the it8 (wolf faust ships some for a few illuminants).

another next step we would like to take with this is to match real film footage
(portra etc). both reference and film matching will require some global exposure
calibration though.

references

  • [0] Ken Anjyo and J. P. Lewis and Frédéric Pighin, "Scattered data interpolation for computer graphics" in Proceedings of SIGGRAPH 2014 Courses, Article No. 27, 2014. pdf
  • [1] J. A. Tropp and A. C. Gilbert, "Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit", in IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655-4666, Dec. 2007.

May 23, 2016

Interview with Neotheta

Monni

Could you tell us something about yourself?

I’m Neotheta, a 23-year-old from Finland, and I draw colourful pictures with animals, furries and the like. Drawing has been my passion since I was little. I was also interested in digital art early on, but had a love & hate relationship with it, because back in those days computers were not for kids: they were unstable and the tools were pretty awkward. So I learned drawing mostly with traditional tools first.

Do you paint professionally, as a hobby artist, or both?

Both, I work full-time as an artist right now and hope to continue so!

What genre(s) do you work in?

Is furry a genre? I practice my own styles to draw animals and furries – luckily this is where the most demand for my work is as well. But I’ve also drawn more cartoony & simplified styles for children’s books.

Supernova

Whose work inspires you most — who are your role models as an artist?

My mom draws really well, she got me drawing! After that it’s been a blender of many inspirations, they tend to change pretty frequently – except I’ve always loved cats! I’m more of a role-maker than a taker, so I often do things differently on purpose – it’s not always a good thing but probably the reason why I’m currently drawing for a living.

How and when did you get to try digital painting for the first time?

When I was 6 years old, I drew a colourful bunny with MS Paint at my mom’s workplace (a school) and told her students that this was the future! After that my parents gave me a drawing tablet, but I was somewhat disappointed by the programs intended for digital art at the time – they were all kind of awkward to use. So I gave up digital art for many years and went back to traditional tools. I think I was 15 when I decided to try again seriously.

What makes you choose digital over traditional painting?

I enjoy bright colours, many of those are difficult to produce with traditional colours. Also the ability to print the finished drawing on desired material, such as fabrics – or test what it looks best on and what size. I can also share the same drawing with many people if the outcome is
awesome.

How did you find out about Krita?

I was actually having a sort of mental breakdown, because my computer had kicked the bucket and my new setup simply didn’t function together. I had recently experienced how stable and awesome Debian was on a PC and I really wanted to give it a try instead of Windows. In the middle of the mess and new things, someone told me I should try Krita, because it sounded like it would fit my needs – a drawing program for Linux.

What was your first impression?

I was in total awe, because at first I had been ready to sacrifice my old favourite programs just so I could work stably. But then Krita turned out to be better than my previous combination of Paint Tool Sai + Photoshop CS2; it had all the same features I needed in one. Krita on Linux was also SO STABLE and FAST, and there was autosaving just in case. I learned to use Krita really quickly (also thanks to the helpful community!) and kept finding new useful tools in a constant stream. It was like a dream come true (still is).

What do you love about Krita?

It’s so stable and fast, I have it on my powerful desktop and old laptop and it functions so nicely on both of them! The community is wonderful. The brush engine is so diverse, interface is customizable, g’mic plugin, line smoothing, perspective assistants… to name a few!

What do you think needs improvement in Krita? Is there anything that really annoys you?

Better text tools and multipage pdf saving would make Krita perfect for comics.

What sets Krita apart from the other tools that you use?

Stability, fast performance, for Linux, well designed for drawing and painting, and lots of features!

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Everything I’ve drawn in recent years has been done in Krita, so it’s a difficult pick. My current favourite is a personal drawing of my dragon character in a mystical crystal cavern.

Crystal_Cavern

What techniques and brushes did you use in it?

This is my more simple style, first I sketch, then lineart, colour and add textures last. I’ve mostly used Wolthera’s dynamic inking pen, airbrush, gradients and layer effects. A more detailed description and .kra file for inspecting can be found from my site here: https://neotheta.fi/tutorials/wips/crystal/

Where can people see more of your work?

https://neotheta.fi

Anything else you’d like to share?

I recently made a Telegram sticker pack with one sticker to spread the word about Krita (when people get my pack they get the Krita sticker too). Feel free to add it to yours too, or use it in another creative way!

kritasticker

Krita at KomMissia

Last weekend, ace Krita hacker Dmitry Kazakov attended KomMissia, the annual Russian comics festival. Best quote award goes to the Wacom booth attendants, who install Krita on all their demo machines because “too many people keep asking about it”!

Here’s Dmitry’s report: enjoy!

Last weekend I attended the annual Russian comics festival “KomMissia”. It is a nice event where a lot of comics writers, painters and publishers meet, share ideas, talk and have a lot of fun together.

My main goal in visiting the event was to find out what people in the comics industry need and what tools they expect to see in a graphics application. One of the goals of our Kickstarter 2016 is to create text tools for comics artists, so I gathered a lot of useful information about the process the painters use.

There were a lot of classes by famous comics masters. I was really impressed by the way Dennis Calero works (although he doesn’t use Krita, I hope “yet”). He uses custom brushes in quite an unexpected way to create the hatching in his paintings. He paints a single hatch, then creates a brush from it, and then just paints a set of hatches to create a shadow, using the ‘[’ and ‘]’ shortcuts to modify the size of single hatches. Now I really want to implement a shortcut for that in Krita!

image02
I also got in touch with people from the Wacom team, who had a booth there. They showed a lot of nice and huge Cintiqs. The funniest thing happened when I asked them if I could install Krita on their devices. They answered that they already have Krita on most of their demo machines! They said that quite a lot of people asked them about Krita during previous events, so they decided to install it by default 🙂 So now, if you happen to see a Wacom booth at an event, you can easily go and test Krita there!

image03

image04

There were also a lot of classes organized by the art-material shops. They let people try various markers, paints and papers. I tried all of them. And now I have a lot of new ideas for new Krita brushes! Hehe… 🙂

This is my “masterpiece” done with watercolor markers 🙂 We can actually implement something like that… The user might paint with usual brushes and then use a special tool for “watering” the canvas. That might be really useful for painters!

image07
And the paintings below are not just “testing strokes with acrylic markers”. They are a live illustration of the Kubelka-Munk colour reflectance theory! The “lemon yellow” pigment is the same in both pictures, but due to the different opacity of its particles it looks absolutely different on different background colours!

image00

image08

So now I’ve got a lot of ideas about what brushes and tools can be implemented in Krita! Just follow us on Twitter and VK and you will be the first to know about new features! 🙂

PS:

More photos and paintings (by Nikky Art) from the event

image01
image06

image05

External Plugins in GNOME Software (6)

This is my last post about the gnome-software plugin structure. If you want more, join the mailing list and ask a question. If you’re not sure how something works then I’ve done a poor job on the docs, and I’m happy to explain as much as required.

GNOME Software used to provide a per-process plugin cache, automatically de-duplicating applications and trying to be smarter than the plugins themselves. This involved merging applications created by different plugins and really didn’t work very well. For 3.20 and later we moved to a per-plugin cache which allows the plugin to control getting and adding applications to the cache and invalidating it when it made sense. This seems to work a lot better and is an order of magnitude less complicated. Plugins can trivially be ported to using the cache using something like this:

 
   /* create new object */
   id = gs_plugin_flatpak_build_id (inst, xref);
-  app = gs_app_new (id);
+  app = gs_plugin_cache_lookup (plugin, id);
+  if (app == NULL) {
+     app = gs_app_new (id);
+     gs_plugin_cache_add (plugin, id, app);
+  }

Using the cache has two main benefits for plugins. The first is that we avoid creating duplicate GsApp objects for the same logical thing. This means we can query the installed list, start installing an application, then query it again before the install has finished. The GsApp returned from the second add_installed() request will be the same GObject, and thus all the signals connecting up to the UI will still be correct. This means we don’t have to care about migrating the UI widgets as the object changes and things like progress bars just magically work.

The other benefit is more obvious. If we know the application state from a previous request we don’t have to query a daemon or do another blocking library call to get it. This does of course imply that the plugin is properly invalidating the cache using gs_plugin_cache_invalidate() which it should do whenever a change is detected. Whether a plugin uses the cache for this reason is up to the plugin, but if it does it is up to the plugin to make sure the cache doesn’t get out of sync.
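
As an illustration, a plugin that watches its backend for changes might invalidate the cache in its change callback; a minimal sketch (the callback name here is hypothetical):

static void
on_backend_changed (gpointer user_data)
{
  GsPlugin *plugin = GS_PLUGIN (user_data);

  /* something changed behind our back; drop the cached GsApp objects
   * so the next query rebuilds them with fresh state */
  gs_plugin_cache_invalidate (plugin);
}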

And one last thing: if you’re thinking of building an out-of-tree plugin for production use, ask yourself if it actually belongs upstream. Upstream plugins get ported as the API evolves, and I’m already happily carrying Ubuntu- and Fedora-specific plugins that either self-disable at runtime or are protected using an --enable-foo configure argument.

External Plugins in GNOME Software (5)

This is my penultimate post about the gnome-software plugin structure. If you’ve followed everything so far, well done.

There’s a lot of flexibility in the gnome-software plugin structure; a plugin can add custom applications and handle things like search and icon loading in a totally custom way. Most of the time you don’t care about how search is implemented or how icons are going to be loaded, and you can re-use a lot of the existing code in the appstream plugin. To do this you just save an AppStream-format XML file in one of /usr/share/app-info/xmls/, /var/cache/app-info/xmls/ or ~/.local/share/app-info/xmls/. GNOME Software will immediately notice any new files, or changes to existing files, as it has set up the various inotify watches.

This allows plugins to care a lot less about how applications are going to be shown. For example, the steam plugin downloads and parses the descriptions from a remote service during gs_plugin_refresh(), and also finds the best icon types and downloads them too. Then it exports the data to an AppStream XML file, saving it to your home directory. This allows all the applications to be easily created (and then refined) using something as simple as gs_app_new("steam:foo.desktop"). All the search tokenisation and matching is done automatically, so it makes the plugin much simpler and faster.
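
The exported file is just regular AppStream collection XML; a minimal hand-written example might look like this (the id and names here are invented):

<?xml version="1.0" encoding="UTF-8"?>
<components version="0.8">
  <component type="desktop">
    <id>steam:foo.desktop</id>
    <name>Foo</name>
    <summary>An example game distributed through Steam</summary>
  </component>
</components>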

The only extra step the steam plugin needs to do is implement the gs_plugin_adopt_app() function. This is called when an application does not have a management plugin set, and allows the plugin to claim the application for itself so it can handle installation, removal and updating. In the case of steam it could check the ID has a prefix of steam: or could check some other plugin-specific metadata using gs_app_get_metadata_item().
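
For the steam case described above, a sketch of that check might look like this (the real plugin may use plugin-specific metadata instead):

void
gs_plugin_adopt_app (GsPlugin *plugin, GsApp *app)
{
  /* claim anything our AppStream export created with the steam: prefix */
  if (g_str_has_prefix (gs_app_get_id (app), "steam:"))
    gs_app_set_management_plugin (app, "steam");
}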

Another good example is the fwupd plugin, which wants to handle any firmware we’ve discovered in the AppStream XML. This might be shipped by the vendor in a package using Satellite, or downloaded from the LVFS. It wouldn’t be kind to set a management plugin explicitly in case XFCE or KDE want to handle this in a different way. The adoption function in this case is trivial:

void
gs_plugin_adopt_app (GsPlugin *plugin, GsApp *app)
{
  if (gs_app_get_kind (app) == AS_APP_KIND_FIRMWARE)
    gs_app_set_management_plugin (app, "fwupd");
}

The next (and last!) blog post I’m going to write is about the per-plugin cache that’s available to plugins to help speed up some operations. In related news, we now have a mailing list, so if you’re interested in this stuff I’d encourage you to join and ask questions there. I also released gnome-software 3.21.2 this morning, so if you want to try all this plugin stuff yourself your distro is probably going to be updating packages soon.

May 22, 2016

External Plugins in GNOME Software (4)

After my last post, I wanted to talk more about the refine functionality in gnome-software. As previous examples have shown it’s very easy to add a new application to the search results, updates list or installed list. Some plugins don’t want to add more applications, but want to modify existing applications to add more information depending on what is required by the UI code. The reason we don’t just add everything at once is that for search-as-you-type to work effectively we need to return results in less than about 50ms and querying some data can take a long time. For example, it might take a few hundred ms to work out the download size for an application when a plugin has to also look at what dependencies are already installed. We only need this information once the user has clicked the search results and when the user is in the details panel, so we can save a ton of time not working out properties that are not useful.

Let’s look at another example.

gboolean
gs_plugin_refine_app (GsPlugin *plugin,
                      GsApp *app,
                      GsPluginRefineFlags flags,
                      GCancellable *cancellable,
                      GError **error)
{
  /* not required */
  if ((flags & GS_PLUGIN_REFINE_FLAGS_REQUIRE_LICENSE) == 0)
    return TRUE;

  /* already set */
  if (gs_app_get_license (app) != NULL)
    return TRUE;

  /* FIXME, not just hardcoded! */
  if (g_strcmp0 (gs_app_get_id (app), "chiron.desktop") == 0)
    gs_app_set_license (app, "GPL-2.0 and LGPL-2.0+");

  return TRUE;
}

This is a simple example, but shows what a plugin needs to do. It first checks if the action is required, in this case GS_PLUGIN_REFINE_FLAGS_REQUIRE_LICENSE. This request is more common than you might expect as even the search results shows a non-free label if the license is unspecified or non-free. It then checks if the license is already set, returning with success if so. If not, it checks the application ID and hardcodes a license; in the real world this would be querying a database or parsing an additional config file. As mentioned before, if the license value is freely available without any extra work then it’s best just to set this at the same time as when adding the app with gs_app_list_add(). Think of refine as adding things that cost time to calculate only when really required.

The UI in gnome-software is quite forgiving for missing data, hiding sections or labels as required. Some things are required however, and forgetting to assign an icon or short description will get the application vetoed so that it’s not displayed at all. Helpfully, running gnome-software --verbose on the command line will tell you why an application isn’t shown along with any extra data.

As a last point, a few people have worried that these blogs are perhaps asking for trouble; external plugins have a chequered history in a number of projects, and I’m sure gnome-software would be in an even worse position given that the core maintainer team is still so small. Being honest, if we break your external plugin due to an API change in the core, you probably should have pushed your changes upstream sooner. There’s a reason you have to build with -DI_KNOW_THE_GNOME_SOFTWARE_API_IS_SUBJECT_TO_CHANGE.

Funding Krita’s Development

Funding Krita

We’re running this kickstarter to fund Krita’s development. That sounds like a truism, but free software projects actually trying to fund development is still a rarity. When KDE, our mother project, holds a fund raiser it’s to collect budget to make developers meetings possible, fund infrastructure (like this, Krita’s website) and so on, but KDE does not pay developers. Other projects don’t even try, or leave it to individual developers. Still others, like Blender, have a lot of experience funding development, of course.

We are happily learning from Blender, and have funded development for years. The first fund raisers were to pay Lukas Tvrdy to work full-time on Krita for a couple of months. His work was handsomely funded and made it possible for Krita to take the leap from slow-and-buggy to usable for day to day work.

Since 2013, the Krita Foundation supports Dmitry Kazakov to work full-time on Krita. And if we may be allowed to toot our horn a bit, that’s a pretty awesome achievement for a project that’s so much smaller than, for example, Blender. The results are there: every release make the previous release look old-hat. Since 2015, the Foundation also sponsors me, that’s Boudewijn Rempt, to work on Krita for three days a week. The other three days I have a day job — Krita doesn’t really bring in enough money to pay for my mortgage yet.

So, what’s coming in, and what’s going out?

In:

krita-foundation-income-20152016

  • Kickstarter: last year’s kickstarter resulted in about 35,000 euros. (Which explains this year’s goal of 30,000, which I’m sure we’re going to make!)
  • Monthly donations through the development fund: about 350 euros per month, 4200 euros per year
  • Krita on Steam: about 500 euros a month, 6000 euros per year
  • One-time donations through PayPal: about 500 euros per month; since this drops off sharply during the Kickstarter month, it’s only about 5000 euros a year
  • Sales of training videos: about 500 euros per month, same as with the donations, so about 5000 euros a year.

Last year we also had a total of 20,000 euros in special one-time donations, one earmarked for the port to Qt 5.

So, we have a yearly income of about 60,000 euros. Not bad for a free software project without any solid commercial backing! Especially not when looking at what we’re doing with it!

Now for spending the money — always fun!

krita-foundationoutgo-20152016

  • Sponsored development: for Dmitry and me together, that’s about 42,000 a year. Yes, we’re cheap. And if you’re a commercial user of Krita and need something developed, contact us!
  • Supporting our volunteers: there are some volunteers in our community who spend an inordinate amount of time on Krita, for instance, preparing and sending out all the Kickstarter rewards. Dutch law allows us to give those volunteers a little something, and that comes to about 3000 euros a year.
  • Hardware. We cannot buy all obscure drawing tablets on the market, so that’s not where we spend our money. Besides, manufacturers like Wacom, Huion and Yiynova have supported us by sending us test hardware! But when we decided to make OSX a first-level supported platform, we needed a Mac. When there were reports of big trouble on AMD CPU/GPU hardware, we needed a test system. This comes to about 2500 euros
  • Mini-sprints: basically, getting a small group (the Summer of Code students, me and Dmitry) together to prepare the projects, or getting Wolthera and me together to prepare the Kickstarter. That’s about 1000 euros a year.
  • Video course: we spend about 3000 euros a year on creating a new video training course. This year will be all about animation!
  • Kickstarter rewards, postage, administrative costs: 7000 euros.

So, the total we spend at the moment is about… 57,500 euros.

In other words, Mr. Micawber would declare us to be happy! “Annual income twenty pounds, annual expenditure nineteen nineteen and six, result happiness. Annual income twenty pounds, annual expenditure twenty pounds ought and six, result misery.”

But there’s not much of a buffer here, and a lot of potential for growth! And that’s still my personal goal for Krita: over the coming year or two, double the income and the spending.

#happybdaybassel 2016 with Cost of Freedom & Waiting… Books In Print

Today, for Bassel’s 35th birthday (#happybdaybassel), we released, along with many others, The Cost of Freedom and Waiting…, a prose book now available in print.

External Plugins in GNOME Software (3)

Lots of nice feedback from my last post, so here’s some new stuff. Up now is downloading new metadata and updates in plugins.

The plugin loader supports a gs_plugin_refresh() vfunc that is called in various situations. To ensure plugins have the minimum required metadata on disk it is called at startup, but with a cache age of infinite. This basically means the plugin must just ensure that any data exists no matter what the age.

Usually once per day we’ll call gs_plugin_refresh() again, but with the correct cache age set (typically a little over 24 hours), which allows the plugin to download new metadata or payload files from remote servers. The gs_utils_get_file_age() utility helper can help you work out the cache age of a file, or the plugin can handle it some other way.

For the Flatpak plugin we just make sure the AppStream metadata exists at startup, which allows us to show search results in the UI. If the metadata did not exist (e.g. if the user had added a remote using the command line without gnome-software running) then we would show a loading screen with a progress bar before showing the main UI. On fast connections we should only show that for a couple of seconds, but it’s a good idea to try and avoid that if at all possible in the plugin.

Once per day the gs_plugin_refresh() method is called again, but this time with GS_PLUGIN_REFRESH_FLAGS_PAYLOAD set. This is where the Flatpak plugin would download any ostree trees (but not do the deploy step) so that the applications can be updated live in the details panel without having to wait for the download to complete. In a similar way, the fwupd plugin downloads the tiny LVFS metadata with GS_PLUGIN_REFRESH_FLAGS_METADATA and then downloads the large firmware files themselves only when the GS_PLUGIN_REFRESH_FLAGS_PAYLOAD flag is set.

If the @app parameter is set for gs_plugin_download_file() then the progress of the download is automatically proxied to the UI elements associated with the application, for instance the install button would show a progress bar in the various different places in the UI. For a refresh there’s no relevant GsApp to use, so we leave it NULL, which means something is happening globally; the UI can handle that how it wants, for instance by showing a loading page at startup.

gboolean
gs_plugin_refresh (GsPlugin *plugin,
                   guint cache_age,
                   GsPluginRefreshFlags flags,
                   GCancellable *cancellable,
                   GError **error)
{
  const gchar *metadata_fn = "/var/cache/example/metadata.xml";
  const gchar *metadata_url = "http://www.example.com/new.xml";

  /* this is called at startup and once per day */
  if (flags & GS_PLUGIN_REFRESH_FLAGS_METADATA) {
    g_autoptr(GFile) file = g_file_new_for_path (metadata_fn);

    /* is the metadata missing or too old */
    if (gs_utils_get_file_age (file) > cache_age) {
      if (!gs_plugin_download_file (plugin,
                                    NULL,
                                    metadata_url,
                                    metadata_fn,
                                    cancellable,
                                    error)) {
        /* it's okay to fail here */
        return FALSE;
      }
      g_debug ("successfully downloaded new metadata");
    }
  }

  /* this is called when the session is idle */
  if (flags & GS_PLUGIN_REFRESH_FLAGS_PAYLOAD) {
    // FIXME: download any required updates now
  }

  return TRUE;
}

Note, if the downloading fails it’s okay to return FALSE; the plugin loader continues to run all plugins and just logs an error to the console. We’ll be calling into gs_plugin_refresh() again in only a few hours, so there’s no need to bother the user. For actions like gs_plugin_app_install() we do the same thing, but we also save the error on the GsApp itself so that the UI is free to handle that how it wants, for instance by showing a GtkDialog window. A sketch of such an install function follows below.
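
As a sketch (the URL and payload path here are invented, not a real plugin’s), an install vfunc that passes the app into gs_plugin_download_file() gets progress reporting in the UI for free:

gboolean
gs_plugin_app_install (GsPlugin *plugin,
                       GsApp *app,
                       GCancellable *cancellable,
                       GError **error)
{
  gs_app_set_state (app, AS_APP_STATE_INSTALLING);
  if (!gs_plugin_download_file (plugin,
                                app, /* non-NULL: progress goes to this app's UI */
                                "http://www.example.com/app.payload",
                                "/var/cache/example/app.payload",
                                cancellable,
                                error)) {
    /* the error is saved on the GsApp for the UI to present */
    gs_app_set_state (app, AS_APP_STATE_AVAILABLE);
    return FALSE;
  }
  gs_app_set_state (app, AS_APP_STATE_INSTALLED);
  return TRUE;
}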

New Rapid Photo Downloader



Damon Lynch brings us a new release!

Community member Damon Lynch happens to make an awesome program called Rapid Photo Downloader in his “spare” time. In fact you may have heard mention of it as part of Riley Brandt’s “The Open Source Photography Course”*. It is a program that specializes in downloading photos and videos from media as efficiently as possible, while extending the process with extra functionality.

* Riley donates a portion of the proceeds from his course to various projects, and Rapid Photo Downloader is one of them!

Work Smart, not Dumb

The main features of Rapid Photo Downloader are listed on the website:

  1. Generates meaningful, user configurable file and folder names
  2. Downloads photos and videos from multiple devices simultaneously
  3. Backs up photos and videos as they are downloaded
  4. Is carefully optimized to download and back up at high speed
  5. Easy to configure and use
  6. Runs under Unity, Gnome, KDE and other Linux desktops
  7. Available in thirty languages
  8. Program configuration and use is fully documented

Damon announced his 0.9.0a1 release on the forums, and Riley Brandt even recorded a short overview of the new features:

(Shortly after announcing the 0.9.0a1 release, he followed it up with a 0.9.0a2 release with some bug fixes).

Some of the neat new features include being able to preview the download subfolder and storage space of devices before you download:

Rapid Photo Downloader Main Window

Also being able to download from multiple devices in parallel, including from all cameras supported by gphoto2:

Rapid Photo Downloader Downloading

There is much, much more in this release. Damon goes into much further detail on his post in the forum, copied here:


How about its Timeline, which groups photos and videos based on how much time elapsed between consecutive shots. Use it to identify photos and videos taken at different periods in a single day or over consecutive days.

You can adjust the time elapsed between consecutive shots that is used to build the Timeline to match your shooting sessions.

Rapid Photo Downloader timeline

How about a modern look?

Rapid Photo Downloader about

Download instructions: http://damonlynch.net/rapid/download.html

For those who’ve used the older version, I’m copying and pasting from the ChangeLog, which covers most but not all changes:

  • New features compared to the previous release, version 0.4.11:

    • Every aspect of the user interface has been revised and modernized.

    • Files can be downloaded from all cameras supported by gPhoto2, including smartphones. Unfortunately the previous version could download from only some cameras.

    • Files that have already been downloaded are remembered. You can still select previously downloaded files to download again, but they are unchecked by default, and their thumbnails are dimmed so you can differentiate them from files that are yet to be downloaded.

    • The thumbnails for previously downloaded files can be hidden.

    • Unique to Rapid Photo Downloader is its Timeline, which groups photos and videos based on how much time elapsed between consecutive shots. Use it to identify photos and videos taken at different periods in a single day or over consecutive days. A slider adjusts the time elapsed between consecutive shots that is used to build the Timeline. Time periods can be selected to filter which thumbnails are displayed.

    • Thumbnails are bigger, and different file types are easier to distinguish.

    • Thumbnails can be sorted using a variety of criteria, including by device and file type.

    • Destination folders are previewed before a download starts, showing which subfolders photos and videos will be downloaded to. Newly created folders have their names italicized.

    • The storage space used by photos, videos, and other files on the devices being downloaded from is displayed for each device. The projected storage space on the computer to be used by photos and videos about to be downloaded is also displayed.

    • Downloading is disabled when the projected storage space required is more than the capacity of the download destination.

    • When downloading from more than one device, thumbnails for a particular device are briefly highlighted when the mouse is moved over the device.

    • The order in which thumbnails are generated prioritizes representative samples, based on time, which is useful for those who download very large numbers of files at a time.

    • Thumbnails are generated asynchronously and in parallel, using a load balancer to assign work to processes utilizing up to 4 CPU cores. Thumbnail generation is faster than the 0.4 series of program releases, especially when reading from fast memory cards or SSDs. (Unfortunately generating thumbnails for a smartphone’s photos is painfully slow. Unlike photos produced by cameras, smartphone photos do not contain embedded preview images, which means the entire photo must be downloaded and cached for its thumbnail to be generated. Although Rapid Photo Downloader does this for you, nothing can be done to speed it up).

    • Thumbnails generated when a device is scanned are cached, making thumbnail generation quicker on subsequent scans.

    • Libraw is used to render RAW images from which a preview cannot be extracted, which is the case with Android DNG files, for instance.

    • Freedesktop.org thumbnails for RAW and TIFF photos are generated once they have been downloaded, which means they will have thumbnails in programs like Gnome Files, Nemo, Caja, Thunar, PCManFM and Dolphin. If the path files are being downloaded to contains symbolic links, a thumbnail will be created for the path with and without the links. While generating these thumbnails does slow the download process a little, it’s a worthwhile tradeoff because Linux desktops typically do not generate thumbnails for RAW images, and thumbnails only for small TIFFs.

    • The program can now handle hundreds of thousands of files at a time.

    • Tooltips display information about the file including name, modification time, shot taken time, and file size.

    • Right click on thumbnails to open the file in a file browser or copy the path.

    • When downloading from a camera with dual memory cards, an emblem beneath the thumbnail indicates which memory cards the photo or video is on

    • Audio files that accompany photos on professional cameras like the Canon EOS-1D series of cameras are now also downloaded. XMP files associated with a photo or video on any device are also downloaded.

    • Comprehensive log files are generated that allow easier diagnosis of program problems in bug reports. Messages optionally logged to a terminal window are displayed in color.

    • When running under Ubuntu‘s Unity desktop, a progress bar and count of files available for download is displayed on the program’s launcher.

    • Status bar messages have been significantly revamped.

    • Determining a video’s correct creation date and time has been improved, using a combination of the tools MediaInfo and ExifTool. Getting the right date and time is trickier than it might appear. Depending on the video file and the camera that produced it, neither MediaInfo nor ExifTool always give the correct result. Moreover some cameras always use the UTC time zone when recording the creation date and time in the video’s metadata, whereas other cameras use the time zone the video was created in, while others ignore time zones altogether.

    • The time remaining until a download is complete (which is shown in the status bar) is more stable and more accurate. The algorithm is modelled on that used by Mozilla Firefox.

    • The installer has been totally rewritten to take advantage of Python‘s tool pip, which installs Python packages. Rapid Photo Downloader can now be easily installed and uninstalled. On Ubuntu, Debian and Fedora-like Linux distributions, the installation of all dependencies is automated. On other Linux distributions, dependency installation is partially automated.

    • When choosing a Job Code, whether to remember the choice or not can be specified.

  • Removed feature:

    • Rotate Jpeg images - to apply lossless rotation, this feature requires the program jpegtran. Some users reported jpegtran corrupted their jpegs’ metadata – which is bad under any circumstances, but terrible when applied to the only copy of a file. To preserve file integrity under all circumstances, unfortunately the rotate jpeg option must therefore be removed.
  • Under the hood, the code now uses:

    • PyQt 5.4 +

    • gPhoto2 to download from cameras

    • Python 3.4 +

    • ZeroMQ for interprocess communication

    • GExiv2 for photo metadata

    • Exiftool for video metadata

    • Gstreamer for video thumbnail generation

  • Please note if you use a system monitor that displays network activity, don’t be alarmed if it shows increased local network activity while the program is running. The program uses ZeroMQ over TCP/IP for its interprocess messaging. Rapid Photo Downloader’s network traffic is strictly between its own processes, all running solely on your computer.

  • Missing features, which will be implemented in future releases:

    • Components of the user interface that are used to configure file renaming, download subfolder generation, backups, and miscellaneous other program preferences. While they can be configured by manually editing the program’s configuration file, that’s far from easy and is error prone. Meanwhile, some options can be configured using the command line.

    • There are no full size photo and video previews.

    • There is no error log window.

    • Some main menu items do nothing.

    • Files can only be copied, not moved.


Of course, Damon doesn’t sit still. He quickly followed up the 0.9.0a1 announcement by announcing 0.9.0a2 which included a few bug fixes from the previous release:

  • Added command line option to import preferences from an old program version (0.4.11 or earlier).

  • Implemented auto unmount using GIO (which is used on most Linux desktops) and UDisks2 (all those desktops that don’t use GIO, e.g. KDE).

  • Fixed bug while logging processes being forcefully terminated.

  • Fixed bug where stored sequence number was not being correctly used when renaming files.

  • Fixed bug where a download would crash on Python 3.4 systems due to the use of math.inf, which is only available in Python 3.5


If you’ve been considering optimizing your workflow for photo import and initial sorting now is as good a time as any - particularly with all of the great new features that have been packed into this release! Head on over to the Rapid Photo Downloader website to have a look and see the instructions for getting a copy:

http://damonlynch.net/rapid/download.html

Remember, this is still Alpha software (though most of the functionality is in place). If you do run into any problems, please drop in and let Damon know in the forums!

May 20, 2016

External Plugins in GNOME Software (2)

After quite a lot of positive feedback from my last post I’ll write some more about custom plugins. Next up is returning custom applications into the installed list. The use case here is a proprietary software distribution method that installs custom files into your home directory, but you can use your imagination for how this could be useful.

The example here is all hardcoded, and a true plugin would have to derive the details about the GsApp, for example reading in an XML file or YAML config file somewhere. So, code:

#include <gnome-software.h>

void
gs_plugin_initialize (GsPlugin *plugin)
{
  gs_plugin_add_rule (plugin, GS_PLUGIN_RULE_RUN_BEFORE, "icons");
}

gboolean
gs_plugin_add_installed (GsPlugin *plugin,
                         GsAppList *list,
                         GCancellable *cancellable,
                         GError **error)
{
  g_autofree gchar *fn = NULL;
  g_autoptr(GsApp) app = NULL;
  g_autoptr(AsIcon) icon = NULL;

  /* check if the app exists */
  fn = g_build_filename (g_get_home_dir (), "chiron", NULL);
  if (!g_file_test (fn, G_FILE_TEST_EXISTS))
    return TRUE;

  /* the trigger exists, so create a fake app */
  app = gs_app_new ("example:chiron.desktop");
  gs_app_set_management_plugin (app, "example");
  gs_app_set_kind (app, AS_APP_KIND_DESKTOP);
  gs_app_set_state (app, AS_APP_STATE_INSTALLED);
  gs_app_set_name (app, GS_APP_QUALITY_NORMAL,
                   "Chiron");
  gs_app_set_summary (app, GS_APP_QUALITY_NORMAL,
                      "A teaching application");
  gs_app_set_description (app, GS_APP_QUALITY_NORMAL,
        "Chiron is the name of an application.\n\n"
        "It can be used to demo some of our features");

  /* these are all optional */
  gs_app_set_version (app, "1.2.3");
  gs_app_set_size_installed (app, 2 * 1024 * 1024);
  gs_app_set_size_download (app, 3 * 1024 * 1024);
  gs_app_set_origin_ui (app, "The example plugin");
  gs_app_add_category (app, "Game");
  gs_app_add_category (app, "ActionGame");
  gs_app_add_kudo (app, GS_APP_KUDO_INSTALLS_USER_DOCS);
  gs_app_set_license (app, GS_APP_QUALITY_NORMAL,
                      "GPL-2.0+ and LGPL-2.1+");

  /* create a stock icon (loaded by the 'icons' plugin) */
  icon = as_icon_new ();
  as_icon_set_kind (icon, AS_ICON_KIND_STOCK);
  as_icon_set_name (icon, "input-gaming");
  gs_app_set_icon (app, icon);

  /* return new app */
  gs_app_list_add (list, app);

  return TRUE;
}

This shows a lot of the plugin architecture in action. Some notable points:

  • The application ID (example:chiron.desktop) has a prefix of example, which means we can co-exist with any package or flatpak version of the Chiron application; not setting the prefix would confuse the UI if more than one chiron.desktop got added.
  • Setting the management plugin means we can check for this string when working out if we can handle the install or remove action (a sketch of such a check follows this list).
  • Most applications want a kind of AS_APP_KIND_DESKTOP to be visible as an application.
  • The origin is where the application originated from — usually this will be something like Fedora Updates.
  • The GS_APP_KUDO_INSTALLS_USER_DOCS means we get the blue “Documentation” award in the details page; there are many kudos to award to deserving apps.
  • Setting the license means we don’t get the non-free warning; removing the 3rd party warning can be done using AS_APP_QUIRK_PROVENANCE.
  • The icons plugin will take the stock icon and convert it to a pixbuf of the correct size.
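
As an illustration of that management-plugin check, a remove vfunc for this example plugin could look roughly like the sketch below. This is hedged: the error domain and the cleanup logic (deleting the ~/chiron trigger file) are illustrative, not part of the original example.

#include <glib/gstdio.h>   /* for g_unlink() */

gboolean
gs_plugin_app_remove (GsPlugin *plugin,
                      GsApp *app,
                      GCancellable *cancellable,
                      GError **error)
{
  g_autofree gchar *fn = NULL;

  /* only handle apps created by this plugin in gs_plugin_add_installed() */
  if (g_strcmp0 (gs_app_get_management_plugin (app), "example") != 0)
    return TRUE;

  /* deleting the trigger file "uninstalls" the fake app */
  gs_app_set_state (app, AS_APP_STATE_REMOVING);
  fn = g_build_filename (g_get_home_dir (), "chiron", NULL);
  if (g_unlink (fn) != 0) {
    gs_app_set_state (app, AS_APP_STATE_INSTALLED);
    g_set_error (error, GS_PLUGIN_ERROR, GS_PLUGIN_ERROR_FAILED,
                 "failed to remove %s", fn);
    return FALSE;
  }
  gs_app_set_state (app, AS_APP_STATE_AVAILABLE);
  return TRUE;
}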

To show this fake application just compile and install the plugin, touch ~/chiron and then restart gnome-software.

Screenshot from 2016-05-20 21-22-38

By filling in the optional details (which can also be filled in using gs_plugin_refine_app(), to be covered in a future blog post) you can also make the details page a much more exciting place. Adding a set of screenshots is left as an exercise to the reader.

Screenshot from 2016-05-20 21-22-46

For anyone interested, I’m also slowly writing up these blog posts into proper DocBook and uploading them with the gtk-doc files here. I think this documentation would have been really useful for the Endless and Ubuntu people a few weeks ago, so if anyone sees any typos or missing details please let me know.

May 19, 2016

External plugins in GNOME Software

I’ve just pushed a set of patches to gnome-software master that allow people to compile out-of-tree gnome-software plugins.

In general, building things out-of-tree isn’t something that I think is a very good idea; the API and ABI inside gnome-software is still changing and there’s a huge benefit to getting plugins upstream where they can undergo review and be ported as the API adapts. I’m also super keen to provide configurability in GSettings for doing obviously-useful things, the sort of thing Fleet Commander can set for groups of users. However, now we’re shipping gnome-software in enterprise-class distros we might want to allow customers to ship their own plugins to make various business-specific changes that don’t make sense upstream. This might involve querying a custom LDAP server and changing the suggested apps to reflect what groups the user is in, or might involve showing a whole new class of applications that does not conform to the Linux-specific “application is a desktop-file” paradigm. This is where a plugin makes sense, and something I’d like to support in future updates to RHEL 7.

At this point it probably makes sense to talk a bit about how the architecture of gnome-software works. At its heart it’s just a big plugin loader that has some GTK UI that gets created for various result types. The idea is we have lots of small plugins that each do one thing and then pass the result onto the other plugins. These are ordered by dependencies against each other at runtime and each one can do things like editing an existing application or adding a new application to the result set. This is how we can add support for things like firmware updating, Steam, GNOME Shell web-apps and flatpak bundles without making big changes all over the source tree.

There are broadly 3 types of plugin methods:

  • Actions: Do something on a specific GsApp; install gimp.desktop
  • Refine: Get details about a specific GsApp; is firefox.desktop installed? or get reviews for inkscape.desktop
  • Adopt: Can this plugin handle this GsApp; can fwupd handle com.hughski.ColorHug2.firmware
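
For the adopt case, the vfunc is tiny; here is a hedged sketch (assuming the gs_plugin_adopt_app entry point, with the firmware check purely illustrative):

void
gs_plugin_adopt_app (GsPlugin *plugin, GsApp *app)
{
  /* claim apps this plugin knows how to manage, so that later
   * install/remove actions get routed to us */
  if (gs_app_get_kind (app) == AS_APP_KIND_FIRMWARE)
    gs_app_set_management_plugin (app, gs_plugin_get_name (plugin));
}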

You only need to define the vfuncs that the plugin needs, and the name is taken automatically from the suffix of the .so file. So, let’s look at a sample plugin one chunk at a time, taking it nice and slow. First the copyright and licence (it only has to be GPLv2+ if it’s headed upstream):

/*
 * Copyright (C) 2016 Richard Hughes 
 * Licensed under the GNU General Public License Version 2
 */

Then, the magic header that sucks in everything that’s exported:

#include <gnome-software.h>

Then we have to define when our plugin is run in reference to other plugins, as we’re such a simple plugin we’re relying on another plugin to run after us to actually make the GsApp “complete”, i.e. adding icons and long descriptions:

void
gs_plugin_initialize (GsPlugin *plugin)
{
  gs_plugin_add_rule (plugin, GS_PLUGIN_RULE_RUN_BEFORE, "appstream");
}

Then we can start to do something useful. In this example I want to show GIMP as a result (from any provider, e.g. flatpak or a distro package) when the user searches exactly for fotoshop. There is no prefixing or stemming being done for simplicity.

gboolean
gs_plugin_add_search (GsPlugin *plugin,
                      gchar **values,
                      GsAppList *list,
                      GCancellable *cancellable,
                      GError **error)
{
  guint i;
  for (i = 0; values[i] != NULL; i++) {
    if (g_strcmp0 (values[i], "fotoshop") == 0) {
      g_autoptr(GsApp) app = gs_app_new ("gimp.desktop");
      gs_app_add_quirk (app, AS_APP_QUIRK_MATCH_ANY_PREFIX);
      gs_app_list_add (list, app);
    }
  }
  return TRUE;
}

We can then easily build and install the plugin using:

gcc -shared -o libgs_plugin_example.so gs-plugin-example.c -fPIC \
 `pkg-config --libs --cflags gnome-software` \
 -DI_KNOW_THE_GNOME_SOFTWARE_API_IS_SUBJECT_TO_CHANGE &&
 sudo cp libgs_plugin_example.so `pkg-config gnome-software --variable=plugindir`

Screenshot from 2016-05-19 10-39-53

I’m going to be cleaning up the exported API and adding some more developer documentation before I release the next tarball, but if this is useful to you please let me know and I’ll do some more blog posts explaining more how the internal architecture of gnome-software works, and how you can do different things with plugins.

G’MIC 1.7.1: When the flowers are blooming, image filters abound!

Disclaimer: This article is a duplicate of this post, originally published on the Pixls.us website, by the same authors.

Then we shall all burn together by Philipp Haegi.

A new version 1.7.1 “Spring 2016” of G’MIC (GREYC’s Magic for Image Computing), the open-source framework for image processing, has been released recently (26 April 2016). This is a great opportunity to summarize some of the latest advances and features over the last 5 months.

G’MIC: A brief overview

G’MIC is an open-source project started in August 2008. It has been developed in the IMAGE team of the GREYC laboratory from the CNRS (one of the major French public research institutes). This team is made up of researchers and teachers specializing in the algorithms and mathematics of image processing. G’MIC is released under the free software licence CeCILL (GPL-compatible) for various platforms (Linux, Mac and Windows). It provides a set of various user interfaces for the manipulation of generic image data, that is, 2D or 3D images or image sequences of multispectral data, with high-bit precision (up to 32-bit floats per channel). Of course, it manages “classical” color images as well.

logo_gmic

Logo and (new) mascot of the G’MIC project, the open-source framework for image processing.

Note that the project just got a redesign of its mascot Gmicky, drawn by David Revoy, a French illustrator well-known to free graphics lovers for being responsible for the great libre webcomic Pepper&Carrot. G’MIC is probably best known for its GIMP plug-in, first released in 2009. Today, this popular GIMP extension proposes more than 460 customizable filters and effects to apply to your images.

gmic_gimp171_s

Overview of the G’MIC plug-in for GIMP.

But G’MIC is not a plug-in for GIMP only. It also offers a command-line interface, which can be used in combination with the CLI tools from ImageMagick or GraphicsMagick (this is undoubtedly the most powerful and flexible interface of the framework). G’MIC also has a web service, G’MIC Online, to apply effects on your images directly from a web browser. Other G’MIC-based interfaces also exist (ZArt, a plug-in for Krita, filters for Photoflow…). All these interfaces are based on the generic C++ libraries CImg and libgmic, which are portable, thread-safe and multi-threaded (through the use of OpenMP). Today, G’MIC has more than 900 functions to process images, all fully configurable, in a library of only approximately 150 kloc of source code. Its features cover a wide spectrum of the image processing field, with algorithms for geometric and color manipulations, image filtering (denoising/sharpening with spectral, variational or patch-based approaches…), motion estimation and registration, drawing of graphic primitives (up to 3D vector objects), edge detection, object segmentation, artistic rendering, etc. This is a versatile tool, useful to visualize and explore complex image data, as well as to elaborate custom image processing pipelines (see these slides for more information about the motivations and goals of the G’MIC project).

A selection of some new filters and effects

Here we look at the descriptions of some of the most significant filters recently added. We illustrate their usage from the G’MIC plug-in for GIMP. All of these filters are of course available from other interfaces as well (in particular within the CLI tool gmic).

Painterly rendering of photographs

The filter Artistic / Brushify tries to transform an image into a painting. Here, the idea is to simulate the process of painting with brushes on a white canvas. One provides a template image, and the algorithm first analyzes the image geometry (local contrasts and orientations of the contours), then attempts to reproduce the image with a single brush that is locally rotated and scaled according to the contour geometry. By simulating enough brushstrokes, one gets a “painted” version of the template image, which is more or less close to the original one, depending on the brush shape, its size, the number of allowed orientations, etc. All these settings are customizable by the user as parameters of the algorithm, so the filter can render a wide variety of painting effects.

gmic_brushify

Overview of the filter “Brushify” in the G’MIC plug-in for GIMP. The brush that will be used by the algorithm is visible on the top left.

The animation below illustrates the diversity of results one can get with this filter, applied on the same input picture of a lion. Various brush shapes and geometries have been supplied to the algorithm. Brushify is computationally expensive so its implementation is parallelized (each core gives several brushstrokes simultaneously).

brushify2

A few examples of renderings obtained with “Brushify” from the same template image, but with different brushes and parameters.

Note that it’s particularly fun to invoke this filter from the command line interface (using the option -brushify available in gmic) to process a sequence of video frames (see this example of a “brushified” video):

Reconstructing missing data from sparse samples

G’MIC gets a new algorithm to reconstruct missing data in images. This is a classical problem in image processing, often named “Image Inpainting”, and G’MIC already had a lot of useful filters to solve this problem. Here, the newly added interpolation method assumes only a sparse set of image data is known, for instance a few scattered pixels over the image (instead of continuous chunks of image data). The analysis and the reconstruction of the global image geometry is then particularly tough.

The new option -solidify in G’MIC allows the reconstruction of dense image data from such a sparse sampling, based on a multi-scale diffusion PDE-based technique. The figure below illustrates the ability of the algorithm with an example of image reconstruction. We start from an input image of a waterdrop, and we keep only 2.7% of the image data (a very small amount of data!). The algorithm is able to reconstruct a whole image that looks like the input, even if all the small details have not been fully reconstructed (of course!). The more samples we have, the finer the details we can recover.

waterdrop2

Reconstruction of an image from a sparse sampling.

As this reconstruction technique is quite generic, several new G’MIC filters take advantage of it:

  • Filter Repair / Solidify applies the algorithm in a direct manner, by reconstructing transparent areas from the interpolation of opaque regions. The animation below shows how this filter can be used to create an artistic blur on the image borders.
gmic_sol

Overview of the “Solidify” filter, in the G’MIC plug-in for GIMP.

From an artistic point of view, there are many possibilities offered by this filter. For instance, it becomes really easy to generate color gradients with complex shapes, as shown with the two examples below (also in this video that details the whole process).

gmic_solidify2

Using the “Solidify” filter of G’MIC to easily create color gradients with complex shapes (input images on the left, filter results on the right).

  • Filter Artistic / Smooth abstract uses the same idea as the one with the waterdrop image: it purposely sub-samples the image in a sparse way, by choosing keypoints mainly on the image edges, then uses the reconstruction algorithm to get the image back. With a low number of samples, the filter can only render a piecewise smooth image, i.e. a smooth abstraction of the input image.
smooth_abstract

Overview of the “Smooth abstract” filter in the G’MIC plug-in for GIMP.

  • Filter Rendering / Gradient [random] is able to synthesize random colored backgrounds. Here again, the filter initializes a set of color keypoints randomly chosen over the image, then interpolates them with the new reconstruction algorithm. We end up with a psychedelic background composed of randomly oriented color gradients.
gradient_random

Overview of the “Gradient [random]” filter in the G’MIC plug-in for GIMP.

  • Simulation of analog films: the new reconstruction algorithm also allowed a major improvement to all the analog film emulation filters that have been present in G’MIC for years. The section Film emulation/ proposes a wide variety of filters for this purpose. Their goal is to apply color transformations to simulate the look of a picture shot by an analogue camera with a certain kind of film. Below, you can see for instance a few of the 300 colorimetric transformations that are available in G’MIC.
gmic_clut1

A few of the 300+ color transformations available in G’MIC.

From an algorithmic point of view, such a color mapping is extremely simple to implement: for each of the 300+ presets, G’MIC actually has a HaldCLUT, that is, a function defining, for each possible color (R,G,B) of the original image, a new color (R’,G’,B’) to set instead. As this function is not necessarily analytic, a HaldCLUT is stored in a discrete manner, as a lookup table that gives the result of the mapping for all possible colors of the RGB cube (that is, 2^24 = 16,777,216 values if we work with 8-bit precision per color component). This HaldCLUT-based color mapping is illustrated below for all values of the RGB color cube.

gmic_clut0

Principle of a HaldCLUT-based colorimetric transformation.
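
To make the principle concrete, here is a hedged C sketch of the lookup itself, assuming the HaldCLUT has already been decoded into a dense 64×64×64 RGB table (the names rgb8 and hald_clut_map are invented for the example; nearest-neighbour for brevity, where a real implementation interpolates between lattice points):

/* hedged sketch: map one RGB color through a dense 64^3 CLUT table */
typedef struct { unsigned char r, g, b; } rgb8;

static rgb8
hald_clut_map (const rgb8 clut[64][64][64], rgb8 in)
{
  /* quantize each 8-bit component down to the 64-point lattice (>> 2) */
  return clut[in.r >> 2][in.g >> 2][in.b >> 2];
}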

This is a large amount of data: even after subsampling the RGB space (e.g. with 6 bits per component) and compressing the corresponding HaldCLUT file, you end up with between 200 and 300 KiB for each mapping file. Multiply this number by 300+ (the number of available mappings in G’MIC), and you get a total of 85 MiB of data to store all these color transformations. Definitely not convenient to spread and package!

The idea was then to develop a new lossy compression technique focused on HaldCLUT files, that is, volumetric discretised vector-valued functions which are piecewise smooth by nature. And that is what has been done in G’MIC, thanks to the new sparse reconstruction algorithm. Indeed, the reconstruction technique also works with 3D image data (such as a HaldCLUT!), so one simply has to extract a sufficient number of significant keypoints in the RGB cube and interpolate them afterwards to reconstruct a whole HaldCLUT (taking care to keep the reconstruction error small enough that the color mapping obtained with the compressed HaldCLUT is indistinguishable from the non-compressed one).

gmic_clut2

How the decompression of a HaldCLUT now works in G’MIC, from a set of colored keypoints located in the RGB cube.

Thus, G’MIC doesn’t need to store all the color data from a HaldCLUT, but only a sparse sampling of it (i.e. a sequence of { rgb_keypoint, new_rgb_color } pairs). Depending on the geometric complexity of the HaldCLUT to encode, more or fewer keypoints are necessary (roughly from 30 to 2000). As a result, the storage of the 300+ HaldCLUTs in G’MIC now requires only 850 KiB of data (instead of 85 MiB), a compression gain of 99%! That makes the whole HaldCLUT data storable in a single file that is easy to ship with the G’MIC package. A user can now apply all the G’MIC color transformations while being offline (previously, each HaldCLUT had to be downloaded separately from the G’MIC server when requested).

It looks like this new reconstruction algorithm from sparse samples is really great, and no doubt it will be used in other filters in the future.

Make textures tileable

Filter Arrays & tiles / Make seamless [patch-based] tries to transform an input texture to make it tileable, so that it can be duplicated as tiles along the horizontal and vertical axes without visible seams on the borders of adjacent tiles. Note that this can be extremely hard to achieve if the input texture exhibits little self-similarity or glaring luminosity changes across the image. That is the case, for instance, with the “Salmon” texture shown below as four adjacent tiles (2×2 configuration) with a lighting that goes from dark (on the left) to bright (on the right). Here, the algorithm modifies the texture so that the tiling shows no seams, while the aspect of the original texture is preserved as much as possible (only the texture borders are modified).

seamless1

Overview of the “Make Seamless” filter in the G’MIC plug-in for GIMP.

We can imagine some great uses of this filter, for instance in video games, where texture tiling is common to render large virtual worlds.

seamless2

Result of the “Make seamless” filter of G’MIC to make a texture tileable.

Image decomposition into several levels of details

A “new” filter Details / Split details [wavelets] has been added to decompose an image into several levels of details. It is based on the so-called “à trous” wavelet decomposition. For those who already know the popular Wavelet Decompose plug-in for GIMP, there won’t be much novelty here, as it is mainly the same kind of decomposition technique that has been implemented. Having it directly in G’MIC is still great news: it now offers a preview of the different scales that will be computed, and the implementation is parallelized to take advantage of multiple cores.

gmic_wavelets

Overview of the wavelet-based image decomposition filter, in the G’MIC plug-in for GIMP.

The filter outputs several layers, so that each layer contains the details of the image at a given scale. All those layers blended together give the original image back. Thus, one can work on those output layers separately and modify the image details only at a given scale. There are a lot of applications for this kind of image decomposition, one of the most spectacular being the ability to retouch the skin in portraits: the flaws of the skin are indeed often present in layers with middle-sized scales, while the natural skin texture (the pores) is present in the fine details. By selectively removing the flaws while keeping the pores, the skin aspect stays natural after the retouch (see this wonderful link for a detailed tutorial about skin retouching techniques with GIMP).
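
The key property (the detail layers plus the smooth residual sum back to the original) is easy to verify with a hedged one-dimensional sketch of the “à trous” scheme, using a 3-tap blur whose taps move further apart at each scale; the real filter of course works on 2D images, and the kernel here is illustrative:

/* hedged 1-D sketch of the "à trous" decomposition: blur with a 3-tap
 * kernel whose taps sit 2^s apart, keep the difference as scale s's
 * detail layer; the residual plus all layers equals the input */
#include <stdio.h>

#define N 16
#define SCALES 3

int
main (void)
{
  float x[N] = { 0, 1, 4, 9, 16, 25, 36, 49, 64, 49, 36, 25, 16, 9, 4, 1 };
  float cur[N], blur[N], detail[SCALES][N];

  for (int i = 0; i < N; i++)
    cur[i] = x[i];

  for (int s = 0; s < SCALES; s++) {
    int step = 1 << s;                     /* the growing "holes" */
    for (int i = 0; i < N; i++) {
      int l = i - step < 0 ? 0 : i - step; /* clamp at the borders */
      int r = i + step > N - 1 ? N - 1 : i + step;
      blur[i] = 0.25f * cur[l] + 0.5f * cur[i] + 0.25f * cur[r];
      detail[s][i] = cur[i] - blur[i];     /* this scale's layer */
    }
    for (int i = 0; i < N; i++)
      cur[i] = blur[i];
  }

  /* reconstruction: the residual plus the detail layers is the input */
  for (int i = 0; i < N; i++) {
    float rec = cur[i];
    for (int s = 0; s < SCALES; s++)
      rec += detail[s][i];
    printf ("%5.1f %5.1f\n", x[i], rec);   /* the two columns match */
  }
  return 0;
}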

skin

Using the wavelet decomposition filter in G’MIC for removing visible skin flaws on a portrait.

Image denoising based on “Patch-PCA”

G’MIC is also well known for offering a wide range of algorithms for image denoising and smoothing (currently more than a dozen). And it got one more! Filter Repair / Smooth [patch-pca] proposes a new image denoising algorithm that is both efficient and computationally intensive (despite its multi-threaded implementation, you should probably avoid it on a machine with fewer than 8 cores…). In return, it sometimes does magic to suppress noise while preserving small details.

patchpca

Result of the new patch-based denoising algorithm added to G’MIC.

The “Droste” effect

The Droste effect (also known as “mise en abyme” in art) is the effect of a picture appearing within itself recursively. To achieve this, a new filter Deformations / Continuous droste has been added to G’MIC. It’s actually a complete rewrite of the popular Droste filter from Mathmap, which existed for years. Mathmap was a very popular plug-in for GIMP, but it seems to be unmaintained these days. The Droste effect was one of its most iconic and complex filters. Martin “Souphead”, a former user of Mathmap, then took the bull by the horns and converted the complex code of this filter into a G’MIC script, resulting in a parallelized implementation of the filter.

droste0

Overview of the converted “Droste” filter, in the G’MIC plug-in for GIMP.

This filter allows all kinds of artistic delusions. For instance, it becomes trivial to create the result below in a few steps: create a selection around the clock, move it onto a transparent background, run the Droste filter, et voilà!

droste2

A simple example of what the G’MIC “Droste” filter can do.

Equirectangular to nadir-zenith transformation

The filter Deformations / Equirectangular to nadir-zenith is another filter converted from Mathmap to G’MIC. It is specifically used for the processing of panoramas: it reconstructs both the Zenith and the Nadir regions of a panorama so that they can be easily modified (for instance to reconstruct missing parts), before being reprojected back into the input panorama.

zenith1

Overview of the “Deformations / Equirectangular to nadir-zenith” filter in the G’MIC plug-in for GIMP.

Morgan Hardwood has written a quite detailed tutorial, on pixls.us, about the reconstruction of missing parts in the Zenith/Nadir of an equirectangular panorama. Check it out!

Other various improvements

Finally, here are other highlights about the G’MIC project:

  • Filter Rendering / Kitaoka Spin Illusion is another Mathmap filter converted to G’MIC by Martin “Souphead”. It generates a certain kind of optical illusion, as shown below (close your eyes if you are epileptic!)
spin2

Result of the “Kitaoka Spin Illusion” filter.

  • Filter Colors / Color blindness transforms the colors of an image to simulate different types of color blindness. This can be very helpful for checking the accessibility of a web site or a graphical document for colorblind people. The color transformations used here are the same as those defined on Coblis, a website that proposes to apply this kind of simulation online. The G’MIC filter gives strictly identical results, but it eases the batch processing of several images at once.
gmic_cb

Overview of the colorblindness simulation filter, in the G’MIC plug-in for GIMP.

  • For a few years now, G’MIC has had its own parser of mathematical expressions, a really convenient module for performing complex calculations when applying image filters. This core feature gets new functionalities: the ability to manage variables that can be complex, vector or matrix-valued, but also the creation of user-defined mathematical functions. For instance, the classical rendering of the Mandelbrot fractal set (done by estimating the divergence of a sequence of complex numbers) can be implemented like this, directly on the command line:
    $ gmic 512,512,1,1,"c = 2.4*[x/w,y/h] - [1.8,1.2]; z = [0,0]; for (iter = 0, cabs(z)<=2 && ++iter<256, z = z**z + c); 6*iter" -map 7,2
gmic_mand

Using the G’MIC math evaluator to implement the rendering of the Mandelbrot set, directly from the command line!

This clearly enlarges the math evaluator’s abilities, as you are no longer limited to scalar variables. You can now create complex filters which are able to solve linear systems or compute eigenvalues/eigenvectors, and this for each pixel of an input image. It’s a bit like having a micro-(micro!)-Octave inside G’MIC. Note that the Brushify filter described earlier uses these new features extensively. It’s also interesting to know that the G’MIC math expression evaluator has its own JIT compiler to achieve fast evaluation of expressions when applied on thousands of image values simultaneously.
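
For comparison, here is the divergence estimate from the gmic one-liner above written out in plain C (a hedged sketch; the file name is hypothetical, build with cc mandel.c -lm):

/* hedged C version of the iteration in the gmic one-liner above:
 * count how fast z -> z*z + c diverges for each pixel's c */
#include <complex.h>
#include <stdio.h>

int
main (void)
{
  const int w = 64, h = 32;                /* tiny, for a text preview */
  for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
      double complex c = (2.4 * x / w - 1.8) + (2.4 * y / h - 1.2) * I;
      double complex z = 0;
      int iter = 0;
      while (cabs (z) <= 2 && iter < 255) {
        z = z * z + c;
        iter++;
      }
      putchar (iter == 255 ? '#' : ' ');   /* inside vs outside the set */
    }
    putchar ('\n');
  }
  return 0;
}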

  • Another great contribution has been proposed by Tobias Fleischer, with the creation of a new C API to invoke the functions of the libgmic library (which is the library containing all the G’MIC features, previously available through a C++ API only). As the C ABI is standardized (unlike C++), this basically means G’MIC can be interfaced more easily with languages other than C++. In the future, we can imagine the development of G’MIC APIs for languages such as Python, for instance. Tobias is currently using this new C API to develop G’MIC-based plug-ins compatible with the OpenFX standard. Those plug-ins should be usable equally well in video editing software such as After Effects, Sony Vegas Pro or Natron. This is still an ongoing work though.
gmic_natron

Overview of some G’MIC-based OpenFX plug-ins, running under Natron.

gmic_blender2

Overview of a dedicated G’MIC script running within the Blender VSE.

  • You can also find G’MIC filters in the open-source nonlinear video editor Flowblade, thanks to the hard work of Janne Liljeblad (Flowblade project leader). Here again, the goal is to allow the application of G’MIC effects and filters directly to image sequences, mainly for artistic purposes (as shown in this video or this one).
gmic_flowblade

Overview of a G’MIC filter applied under Flowblade, a nonlinear video editor.

What’s next?

As you can see, the G’MIC project is doing well, with active development and cool new features added month after month. You can find and use interfaces to G’MIC in more and more opensource software, such as GIMP, Krita, Blender, Photoflow, Flowblade, Veejay, EKD and, in the near future, Natron (at least we hope so!).

At the same time, we can see more and more external resources available for G’MIC: tutorials, blog articles (here, here, here,…), or demonstration videos (here, here, here, here,…). This shows the project is becoming more useful to users of opensource software for graphics and photography.

The development of version 1.7.2 has already hit the ground running, so stay tuned and visit the official G’MIC forum on pixls.us to get more info about the project’s development and answers to your questions. Meanwhile, feel the power of free software for image processing!

May 18, 2016

Krita 3.0 Release Candidate 1 Released

We’re getting closer and closer to releasing Krita 3.0, the first version of Krita that includes animation tools, instant preview and which is based on Qt5! Today’s release candidate offers many fixes and improvements over the previous beta releases. The Animation and Instant Preview features were funded by last year’s successful Kickstarter, and right now we’re running our third Kickstarter campaign: this year’s main topics are creating a great text and vector toolset. After one week, we’re already half-way!

support-krita-2016-3

The biggest new feature is no doubt support for hand-drawn animation. This summer, Jouni Pentikäinen will continue improving the animation tools, but it’s already a solid toolset. Here’s a video tutorial where Wolthera shows how she created the animated headers for this year’s Kickstarter stretch goals:

And here is another demonstration by Wolthera showing off the Instant Preview feature, which makes it possible to use big brushes on big canvases. It may take a bit more memory, but it gives a huge speed boost:

Apart from Instant Preview, Animation and Qt5 support, Krita 3.0 will have a number of Kickstarter stretch goals, like improved layer handling, improved shortcuts, the tangent normal brush, a great colorspace selector, guides, a grids and guides docker, snapping to grids and guides, an improved shortcut palette, a gradient map filter and much, much, much more. And we’ll be sure to fix more issues before we present the final release.

So check out the review prepared by Nathan Lovato, while we’re preparing the full release announcement:

Release Candidate 1 Improvements

Compared to the last beta, we’ve got the following improvements:

  • Shortcuts now also work if the cursor is not hovering over the canvas
  • Translations are more complete
  • The export to PDF dialog shows the page preview
  • The advanced color selector is faster
  • The vector gradient tool performs better
  • Fill layers are saved and loaded correctly
  • Improvements to Instant Preview
  • Fix crashes when setting font properties in the text tool.
  • Fix handling the mirror axis handle
  • Use OpenMP in G’MIC on Windows and Linux, which makes most filters much faster
  • Fixes to the file dialog
  • The Spriter export plugin was rewritten
  • Fix a number of crashes
  • Fix the scaling of the toolbox icons
  • Add new icons for the pan and zoom tools
  • Make it possible to enable HiDPI mode by setting the environment variable KRITA_HIDPI to ON.
  • Fix the fade, distance and time sensors in the brush editor
  • Make it possible to open color palettes again
  • Add a shortcut for toggling onion skinning
  • Fix loading of the onion skin button states
  • Add a lock for the brush drawing angle
  • Handle positioning popups and dialogs on multi-monitor setups correctly

And a load of smaller things!

Downloads

Windows Shell Extension package by Alvin Wong. Just install it and Windows Explorer will start showing preview and meta information for Krita files. (Disregard any warnings by virus checkers, because this package is built with the NSIS installer maker, some virus checkers always think it’s infected, it’s not.)

Windows: Unzip and run the bin/krita.exe executable! These downloads do not interfere with your existing installation. The configuration file location has been moved from %APPDATA%\Local\kritarc to %APPDATA%\Local\krita\kritarc.

The OSX disk image still has the known issue that if OpenGL is enabled, the brush outline cursor, grids, guides and so on are not visible. We’re working on that, but don’t expect to have the canvas rewritten before 3.0 is released. Disable OpenGL in the preferences dialog to see a cursor outline, grids, guides and so on.

The Linux appimage: after downloading, make the appimage executable and run it. No installation is needed. For CentOS 6 and Ubuntu 12.04, a separate appimage is provided with G’MIC built without OpenMP (which makes it much slower)

As usual, you can use these builds without affecting your 2.9 installation.

Source: you can find the source here:

May 13, 2016

Blutella, a Bluetooth speaker receiver

Quite some time ago, I was asked for a way to use the AV amplifier (which has a fair bunch of speakers connected to it) in our living-room that didn't require turning on the TV to choose a source.

I decided to try and solve this problem myself, as an exercise rather than a cost saving measure (there are good-quality Bluetooth receivers available for between 15 and 20€).

Introducing Blutella



I found this pot of Nutella in my travels (in Europe, smaller quantities are usually in a jar that looks like a mustard glass, with straight sides) and thought it would be a perfect receptacle for a CHIP, to allow streaming via Bluetooth to the amp. I wanted to make a nice how-to for you, dear reader, but best laid plans...

First, the materials:
  • a CHIP
  • a jar of Nutella, and "Burnt umber" acrylic paint
  • a micro-USB to USB-A cable and a 3.5mm jack to RCA cable
  • some white Sugru, for a nice finish around the cables
  • a bit of foam, a Stanley knife, a CD marker

That's around 10€ in parts (cables always seem to be expensive), not including the salvaged Nutella jar, or the CHIP itself ($9 + shipping).

You'll start by painting the whole of the jar, on the inside, with the acrylic paint. Allow a couple of days for it to dry; the coat will be quite thick.

So, the plan that went awry. It turns out that the CHIP, with the cables plugged in, doesn't fit inside this 140g jar of Nutella. I also didn't make the holes in exactly the right place. The CHIP is tiny, but not small enough to rotate inside the jar without hitting the sides, and the groove that the cap screws onto has only one position.

Anyway, I pierced two holes in the lid for the audio jack and the USB charging cable, stuffed the CHIP inside, and forced the lid on so it clipped on the jar's groove.

I had nice photos with foam I cut to hold the CHIP in place, but the finish isn't quite up to my standards. I guess that means I can attempt this again with a bigger jar ;)

The software

After flashing the CHIP with Debian, I logged in, and launched a script which I put together to avoid either long how-tos, or errors when I tried to reproduce the setup after a firmware update and reset.

The script for setting things up is in the CHIP-bluetooth-speaker repository. There are a few bugs due to drivers, and lack of integration, but this blog is the wrong place to track them, so check out the issues list.
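The script does all the work, but for the curious, here is a minimal sketch of the kind of setup it automates (assuming PulseAudio with its Bluetooth module installed and BlueZ 5 running; the real, complete version is the script in the repository above):

# let PulseAudio pick up A2DP audio streams from paired devices
pactl load-module module-bluetooth-discover
# make the CHIP pairable without a display or keyboard
bluetoothctl <<'EOF'
power on
agent NoInputNoOutput
default-agent
discoverable on
pairable on
EOF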

Apart from those driver problems, I found the integration between PulseAudio and BlueZ pretty impressive, though I wish there was a way for the speaker, when turned on again, to reconnect to the phone I last streamed from, as Bluetooth speakers and headsets do; that would remove one step from playing back audio.

New 3.0 development builds! (With a cool little new feature as well)

Our kickstarter campaign has been running four days now, and we’re only 2000 euros short of being at 50% funded! Of course it’s Kickstarter, so it’s 100% or nothing, so we’ve still got work to do!

In the meantime, Dmitry published an article on Geektimes, and one of the comments contained a tantalizing suggestion about locking the brush angle. When Wolthera, David Revoy and Raghukamath, resident artists on the Krita chat channel, saw the mockup, they all cried: we want that!

Since we could implement it without adding new strings, Dmitry took half a day off from bug fixing and added it! And David Revoy let himself be inspired by Cézanne and produced this introduction video:

And then, among other things, we fixed the application icon on Windows, fixed issues with translations on Windows, fixed issues with the color picker, finished the Spriter scml export plugin, worked around some bugs in Qt that made popups and dialogs show up on the wrong monitor, made sure author and title info gets saved to PNG images, fixed display artefacts when using Instant Preview, fixed the direction of the fade, distance and time brush engine sensors, fixed reading the random offset parameter in brush engines, improved custom shortcut handling and fixed some crashes. Oh, and we fixed the Krita Lime repository builds for Ubuntu 16.04, so you can replace the ancient 2.9.7 build Ubuntu provides with a shiny 2.9.11.

Krita 3.0 is getting stabler all the time; a new beta will be released next week, but we feel it’s good enough that we’ve added Bleeding Edge download links to the download page, too! For your convenience, here are the links to the latest builds:

Windows Shell Extension package by Alvin Wong. Just install it and Windows Explorer will start showing preview and meta information for Krita files. (Disregard any warnings from virus checkers: this package is built with the NSIS installer maker, which some virus checkers always flag as infected. It's not.)

Windows: Unzip and run the bin/krita.exe executable!

The OSX disk image still has the known issue that if OpenGL is enabled, the brush outline cursor, grids, guides and so on are not visible. We’re working on that, but don’t expect to have rewritten the canvas before 3.0 will be released.

The Linux appimage: after downloading, make the appimage executable and run it. No installation is needed. For CentOS 6 and Ubuntu 12.04, a separate appimage is provided with G’MIC built without OpenMP (which makes it much slower).

(As usual, you can use these builds without affecting your 2.9 installation.)

Let’s finish up with cute Kiki!

kiki

May 09, 2016

Blog backlog, Post 2, xdg-app bundles


I recently worked on creating an xdg-app bundle for GNOME Videos, aka Totem, so it would be built along with other GNOME applications, every night, and made available via the GNOME xdg-app repositories.

There's some functionality that's not working yet though:
  • No support for optical discs
  • The MPRIS plugin doesn't work, as we're missing dbus-python (I'm not sure the plugin will survive anyway; it's more suited to audio players. Don't worry though, it's not going to be removed until we have made changes to the sound system in GNOME)
  • No libva/VDPAU hardware acceleration (which would require plugins, and possibly device access of some sort)
However, I created a bundle that extends the freedesktop runtime, that contains gst-libav. We'll need to figure out a way to distribute it in a way that doesn't cause problems for US hosts.

As we also have a recurring problem in Fedora with rpmfusion being out of date, and I sometimes need a third-party movie player to test things out, I put together an mpv manifest; mpv is the only MPlayer-like player that ships a .desktop file and shows a GUI when launched without any command-line arguments.

Finally, I put together a RetroArch bundle for research into a future project, which uncovered the lack of joystick/joypad support in the xdg-app sandbox.

Hopefully, those few manifests will be useful to other application developers wanting to distribute their applications themselves. There are some other bundles being worked on, and that can be used as examples, linked to in the Wiki.

Let’s make Text and Vectors Awesome: 2016 Kickstarter

Even while we’re still working on fixing the last bunch of bugs for what promises to become a great 3.0 release, we’re taking the next step! It’s time for the 2016 Krita Kickstarter!
Last year, our backers funded a big performance improvement in the form of the Instant Preview feature and wickedly cool animation support, right in the core of Krita. And a bunch of stretch goals, some of which are already implemented in 3.0, some of which will come in Krita 3.1.

This year, we’re focusing on two big topics: the text tool and the vector layers. Plus, there are a lot of exciting stretch goals for you to vote on!

Krita’s text tool used to be shared with the rest of KOffice, later Calligra. It’s a complete word processor in a box, with bibliography, semantic markup, tables, columns and more! But it offers little fine typographic control, and besides… it has always been a bad fit; it has never worked well!

Now is the time to join us and make it possible to create an awesome text tool, one that is really suitable to what you need text for in Krita: real typographic and artistic control, support for various languages, for translations, for scripts from all over the world. One integrated text tool that is easy to use, puts you in control and can be extended over the years with new features.

texteditor-mock

The second topic is vector graphics. It’s related to the text tool, since both are vector layer features. Currently, our vector graphics are defined in the OpenDocument Graphics format, which is fine for office applications, but not great for artwork. There’s already a start for supporting the SVG standard instead, and now’s the time to finish the job! And once we’re SVG to the core, we can start improving the usability of the vector tools themselves, which also suffer from having been designed to work in a word processor, spreadsheet or presentation application. Now that Krita is no longer part of a suite of office applications, we can really focus on making all the tools suitable for artists! Let’s make working with vector art great!

FlyingKonqui-animtim

And of course, there are a bunch of stretch goals, ranging from small ones to a really big stretch goal, Python scripting. Check out the kickstarter page for a full list!

support-krita-2016-3

One of the fun parts of backing a kickstarter project are the rewards. For a Krita kickstarter, these are mostly small, fun things to remember a great campaign by. But we’re trying to do something special this year! After the kickstarter is funded, we will commission Krita artists from all over the world to create art for us that we will use in various rewards!

Interview with Toby Willsmer

rip-the-gasworks

Could you tell us something about yourself?

Sure, I am originally from the UK but now live in New Zealand. At 44 I have been drawing and illustrating for over 20 years but currently only for myself. I have a love of comics and graphic novels which is pretty much the style I have inherited over the years. By day I’m a Front End Developer and by night I like to let my mind run riot and then draw it.

Do you paint professionally, as a hobby artist, or both?

At the moment it’s a lifelong hobby for me, although every now and then I’ll take on the odd commission for someone who wants to own one of my style pieces. I have a long-term graphic novel that I’ve been working on for a few years now; maybe when that is done, that will be the professional turning point?

What genre(s) do you work in?

I mostly illustrate in a comic book style, pretty much all my digital paintings are figurative in some sort of way.

Whose work inspires you most — who are your role models as an artist?

That’s an easy one for me, it has to be Simon Bisley and Frank Frazetta. Simon Bisley’s work is legendary in the comic/graphic novel world, he really pushes the boundaries and is a complete master of his medium. As for Frank Frazetta’s work, need I say more?

How and when did you get to try digital painting for the first time?

The first time I did anything digital-art related on a computer was in 1991, while at college. They had a paint program that used the mouse and keyboard; very basic, but at that time it was amazing that you could draw pictures on a computer. I first tried drawing with a graphics tablet around 2002, but I guess the first time I properly did a complete digital painting using a tablet was in 2007. I saw a small 8-inch tablet in my local supermarket (yep, they sold a small range of home office equipment) and bought it to try it out. I’ve never looked back since.

What makes you choose digital over traditional painting?

I still love traditional painting and still do it every now and then but with digital, the scope for colours, details, speed and of course good ol’ ctrl Z means you can really go for it. That and it’s a lot less messy! I mean having a room full of canvases and somewhere to actually paint in large scale is great but just not possible these days. Once I discovered digital on a level that meant I could create what was in my head at a speed that I wanted to, then the transition was easy for me.

How did you find out about Krita?

I used Painter and Photoshop for Windows for years, although I always felt a little let down by them. Then I changed over to the open source movement (Linux) a couple of years ago. This meant having to find another program to paint with. I went looking in Google and read through forums for an alternative that was dedicated to digital painting with a strong emphasis on keeping it as close to traditional painting as possible. Krita was one that kept popping up and had an enthusiastic following which I really liked.

What was your first impression?

Shortly after I installed it I remember thinking ‘this seems to be kinda like painting a traditional picture but on steroids’. It was just so easy to use for the first time and I could see that it would suit my style very quickly.

What do you love about Krita?

I guess if it has to be one thing, it’s got to be the brush engines. They are by far the best I have used in painting software. Very adaptable, easy to edit and make new ones with, a real joy to mess around with. Oh, and the transform tool… Oh, and the right-click docker… Oh, and the…

What do you think needs improvement in Krita? Is there anything that really annoys you?

There is always room for improvement, and I guess everyone uses Krita in different ways. I only use Krita to paint, so for me I would like to see more improvements in the brush engines, to really nail how they work with everything from large brushes to multi-headed brushes.

One of the main things that annoys me is brush lag when brushes are large, but I see that’s up for being fixed in V3. Nothing else really bothers me much whilst using it.

What sets Krita apart from the other tools that you use?

You can really get a feel of live painting when you use it. It’s almost like you expect to have paint on your fingers when you are finished.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

terminator800 This one. It changes, but at the moment this Terminator graphic-novel-style cover is my favourite, as I did it as a black and white ink drawing in Krita first, then coloured it a year later.

What techniques and brushes did you use in it?

I still use the same techniques as I do when I paint with brushes and tubes of paint, but of course the process is much faster. I started with a pencil sketch, then inked it into a black and white finished piece, then coloured it later on.
I used a 2B pencil brush and the Ink 25 brush for the initial black and white. Then mostly 3 different wet bristle brushes for almost all of the main part, titles and background, tweaking them a little in the process. Then some splatter brushes in the background to keep it a little messy. I keep layers to a minimum, using only one each for the background, title and main part, and sometimes just one, depending on how it’s going.

I have a set of about 15 brushes that I have tagged into a set as my defaults, most of them have been tweaked in some sort of way.

Where can people see more of your work?

The usual suspects on social sites. I mostly post finished pieces here
https://www.facebook.com/tobywillsmerillustrator

and I tend to post more random stuff here: doodles/sketches, WIPs and the token pictures of my dog.
https://www.instagram.com/tobywillsmer/

Anything else you’d like to share?

I hope people enjoyed reading this and will enjoy the illustrations I keep putting out. It will be interesting to see how Krita evolves over the next few years and to see how new people finding it will adapt and use it to create and push the digital art format. I for one am looking forward to the V3 release.

May 08, 2016

Setting "Emacs" key theme in gtk3 (and Firefox 46)

I recently let Firefox upgrade itself to 46.0.1, and suddenly I couldn't type anything any more. The emacs/readline editing bindings, which I use probably thousands of times a day, no longer worked. So every time I typed a Ctrl-H to delete the previous character, or Ctrl-B to move back one character, a sidebar popped up. When I typed Ctrl-W to delete the last word, it closed the tab. Ctrl-U, to erase the contents of the urlbar, opened a new View Source tab, while Ctrl-N, to go to the next line, opened a new window. Argh!

(I know that people who don't use these bindings are rolling their eyes and wondering "What's the big deal?" But if you're a touch typist, once you've gotten used to being able to edit text without moving your hands from the home position, it's hard to imagine why everyone else seems content with key bindings that require you to move your hands and eyes way over to keys like Backspace or Home/End that aren't even in the same position on every keyboard. I map CapsLock to Ctrl for the same reason, since my hands are too small to hit the PC-positioned Ctrl key without moving my whole hand. Ctrl was to the left of the "A" key on nearly all computer keyboards until IBM's 1986 "101 Enhanced Keyboard", and it made a lot more sense than IBM's redesign since few people use Caps Lock very often.)

I found a bug filed on the broken bindings, and lots of people commenting online, but it wasn't until I found out that Firefox 46 had switched to GTK3 that I understood what had actually happened. Adding gtk3 to my web searches finally put me on the track to finding the solution, after trying several other supposed fixes that weren't.

Here's what actually worked: edit ~/.config/gtk-3.0/settings.ini and add, inside the [Settings] section, this line:

gtk-key-theme-name = Emacs
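For reference, if you don't already have that file, the whole thing can be as small as this (a minimal example; any other keys you already have under [Settings] just stay alongside it):

$ cat ~/.config/gtk-3.0/settings.ini
[Settings]
gtk-key-theme-name = Emacs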

I think that's all that was needed. But in case that doesn't do it, here's something I had already tried, unsuccessfully, and it's possible that you actually need it in addition to the settings.ini change (I don't know how to undo magic Gnome settings so I can't test it):

gsettings set org.gnome.desktop.interface gtk-key-theme "Emacs"
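To check what's currently set (and to confirm whether the change took), gsettings can read the key back:

gsettings get org.gnome.desktop.interface gtk-key-theme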

May 06, 2016

darktable 2.0.4 released

we're proud to announce the fourth bugfix release for the 2.0 series of darktable, 2.0.4!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.0.4.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.0.4.tar.xz
80e448622ff060bca1d64bf6151c27de34dea8fe6b7ddb708e1e3526a5961e62  darktable-2.0.4.tar.xz
$ sha256sum darktable-2.0.4.dmg 
1e6306f623c3743fabe88312d34376feae94480eb5a38858f21751da04ac4550  darktable-2.0.4.dmg
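to verify a download, feed the published checksum back to sha256sum -c; it prints "darktable-2.0.4.tar.xz: OK" on success:

$ echo "80e448622ff060bca1d64bf6151c27de34dea8fe6b7ddb708e1e3526a5961e62  darktable-2.0.4.tar.xz" | sha256sum -c -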

and the changelog as compared to 2.0.3 can be found below.

New Features

  • Support grayscale input profiles
  • Add a BRG profile for testing purposes

Bugfixes

  • Fix the GUI with GTK 3.20
  • Fix the color profiles we ship
  • Fix two deflicker (exposure iop, mode = automatic) issues
  • Fix trashing of files on OSX
  • Fix Rights field in Lua

Base Support

  • Nikon D5
  • Sony ILCA-68

White Balance Presets

  • Pentax K-S1
  • Sony ILCA-68

Noise Profiles

  • Canon PowerShot G15
  • Fujifilm X70
  • Olympus PEN-F
  • Panasonic DMC-GF7

Translation Added

  • Slovenian

Translation Updates

  • Catalan
  • Dutch
  • German
  • Hebrew
  • Slovak
  • Spanish

May 05, 2016

SVG Working Group Editor’s Meeting Report — London — 2016

First, let me thank all the people that donated to Inkscape’s SVG Standards Work fund as well as to the Inkscape general fund that made my attendance possible.

A subset of the SVG working group met in London after the LGM meeting to get down to the nitty-gritty of getting the SVG 2 specification ready to move to the “Candidate Recommendation” (CR) stage. Three of the core group members (Nikos, Amelia, and myself) were joined on some of the days by three other group members who do not normally participate in the weekly teleconferences. This was a great chance to get some new eyes looking at the spec.

Most of the time was spent in reading the specification together. We managed to get through about half the chapters including the most problematic ones. When we found problems we either made changes on the fly if possible or filed issues if not. We recently switched to Github to keep track of issues which seems to be working well. You can see outstanding issues at our issue tracker. (Feel completely free to add your own comments!)

Minutes of the meetings can be found at:

As this was a meeting focused on getting the spec out the door, our discussions were pretty mundane. Nevertheless, let me give you a flavor of the kinds of things we addressed. It was brought up in the issue tracker that the specification is unclear on how text should be rendered if it follows a <textPath> element. It never occurred to me (and probably to most people) that an SVG file could contain the following:

<text x="50" y="150">Before<textPath xlink:href="#path">On Path</textPath>After</text>

For an implementer, it is fairly straightforward to figure out where to position the “Before” (use the ‘x’ and ‘y’ attributes) and the “On Path” (use the path), but where should the “After” be rendered? Firefox won’t render it at all. Chrome will render the “After” starting at the end position of the ‘h’ in “On Path”. After some discussion, we decided that the only really logical place to render the “After” was at the end of the path. This is the only point that is well defined (the ‘h’ can move around depending on the font used to render the text).

Defining a fill area using <div> and floats.

How the above text element is rendered according to the decision of the group at the London meeting. The starting point for text after the <textPath> element is at the end of the path (red dot).

We will have another editor’s meeting in June in Amsterdam, where hopefully we’ll finish the editing so we can move the spec to CR. We’ll then need to turn our attention to writing tests. Please consider making a donation to support my travel to this meeting at the link at the start of the post! Thanks.

Blog backlog, Post 1, Emoji

Short version


dnf copr enable hadess/emoji
dnf update cairo
dnf install eosrei-emojione-fonts



Long version

A little while ago, I was reading this article, called "Emoji: how do you get from U+1F355 to 🍕?", which said, and I reluctantly quote: "[...] and I don’t know what Linux does, but it’s probably black and white and who cares [...]".

Well. I care. And you probably do as well if your pizza slice above is black and white.

So I set out to check on the status of Behdad Esfahbod (or just "Behdad" as we know him)'s patches to add colour font support to cairo, which he presented at GUADEC in Gothenburg. It adds support for the "bitmap in font" format, as Android does, and as freetype supports.

It kind of worked, and Matthias Clasen reworked the patches a few times, completing the support. This is probably not the code that will eventually land in cairo, but it's a good enough base for people interested in contributing.

After that, we needed something to display using that feature. We ended up using the same font recommended in this article, the Emoji One font.


There's still plenty to be done to support emoji, even after the cairo support is merged. We'd need a way to input emoji (maybe Lalo Martins is listening), and support in a lot of toolkits other than GNOME (Firefox only supports the SVG-in-OTF format; WebKit, Chrome and LibreOffice don't seem to know about colour fonts at all).

You can find more information about design interests in GNOME around Emoji on the Wiki.

Update: Behdad's presentation was in Gothenburg, not Strasbourg. You can also see the video on YouTube.

New Development Builds Ready

There are new development builds ready, with a bunch of bug fixes since the last beta. Please test thoroughly, we’re getting really close to the second beta! These builds have the following fixes:

  • The Settings and Windows menu actions now can have shortcuts assigned to them
  • Loading of images with group layers and layers in different colorspaces now always works correctly
  • Several issues with onion skinning are fixed
  • Make it possible again to load KPP (paintop presets) as images
  • Don’t leave the pan tool when accidentally double clicking
  • Fix a crash when closing Krita if the canvas mirror option is active
  • Fix layouts in the advanced color selector configuration dialog
  • Fix saving shortcuts
  • Fix the duplicate F4 shortcut
  • Fix the order of application of color curves in the Curves filter (first per-channel, then composite rgb curve, then lightness curve)
  • Fix saving and loading of the image resolution in non-US locales
  • Fix updating the global selection mask
  • Fix the file dialog on some Linux distributions where most image types were not shown
  • Show the webp image type in the file dialog
  • Fix initialization of the Tool Options docker
  • Fix using the canvas extension buttons with a tablet stylus
  • Fix jumping of the mirror axis handle
  • Make imagepipe and spray brush compatible with Instant Preview
  • Fix crashes in the text tool when changing font properties
  • Make it possible to use autospacing with Instant Preview
  • Fix loading of fill layers
  • Fix infinite loop when using the vector gradient tool
  • Fix extreme slowness in the color selector
  • Fix showing the page preview in the PDF export dialog
  • Fix using shortcuts when the cursor is not over the canvas
  • Fix the location of translations on Windows

Download

UPDATE 05.05.16: There is now also a Windows Shell Extension package available! Just install it and Windows Explorer will start showing preview and meta information for Krita files.

Windows: Unzip and run the bin/krita.exe executable!

The OSX disk image still has the known issue that if OpenGL is enabled, the brush outline cursor, grids, guides and so on are not visible. We’re working on that, but don’t expect to have rewritten the canvas before 3.0 will be released.

The Linux appimage: after downloading, make the appimage executable and run it. No installation is needed. For CentOS 6 and Ubuntu 12.04, a separate appimage is provided with G’MIC built without OpenMP (which makes it much slower).

May 04, 2016

Hardcore Henry – using Blender for VFX

By: Yaroslav Kemnits, Ph.D., Creative VFX director, Division LLC, Moscow, Russia

Hardcore Henry is a sci-fi movie. The hero is a cyborg fighting other hostile cyborgs. Instead of putting the events in a futuristic setting, writer/director Ilya Nayshuller sets them in the present, ordinary world. That’s why the shooting took place in the buildings and streets of a real city. Just one scene of the movie couldn’t be produced this way: the life pod’s free fall from the stratosphere onto a road near Moscow City, which had to look like ordinary GoPro action footage.

The fall was filmed in three different parts: sky (above the clouds), clouds (inside) and above the city. The first part is set with blue sky above and a canvas of white clouds below; such scenes are always pretty. Then the pod enters the clouds. GoPro footage of falling through clouds looks quite boring: a grey screen is the only thing you see. That’s why I added a turbulence effect on entering the cloud mass and made huge cave-like hollows inside. Finally, the pod flies out of the clouds and we see the quickly approaching city. The hero opens the parachute, which softens the impact of the collision with the road.

I often use Blender when I create visual effects. I use it to create animatics for action scenes, to make set decoration sketches, and much more. And, obviously, I know about its Cloud Generator add-on, so I used it in this movie. It is simple to use and versatile at the same time.

Using it, I made a cloud and dropped a 6-camera unit through it. The unit descends close to the edge of the cloud, because the hero never looks backwards.

image01 Animation of 6-camera unit

The scene was illuminated by two sources of light – Sun and Hemi (sky). I compared the render result with real GoPro recordings, and it exceeded all my expectations.

image02 Render result

We filmed the city in 360 degrees using a drone carrying a 6-GoPro box. That was simpler.

image03

I synchronized the recordings and mapped them onto the cube.

image04

The highest building in the Moscow City complex is 374 meters tall. The drone couldn’t ascend higher than that, and I needed to create the feeling of a much greater height.

image05

I used camera mapping to do this.

We created a “white room” with programmed luminaires around the pod.

image06

Several lighting effects were created with it: for example, sunlight moving across the heroine’s face as the pod rotates.

image07

The program allowed us to alter the light parameters inside and outside the clouds, and many other things.

image08

Using mirrors made it possible to record reflections.

Finally, I needed to create and animate a parachute.

image09

It wasn’t difficult, because the parachute had to be visible for just one second. I used a usual round canopy parachute. One thing: I had to enlarge it a little, otherwise it looked too small. I used the cloth simulation, with wind as the force, to animate the parachute.

We have also used Blender Fracture Modifier (http://df-vfx.de/fracturemodifier/) to create explosions and collapses.

image10

Why have we chosen Blender?

It is a very flexible tool. It includes almost all of the top modern technologies and has a convenient, user-friendly interface, which gives us the opportunity to solve creative problems without struggling with fiddly software.

http://www.di.vision/

Yaroslav Kemnits, Ph.D., Creative VFX director, Division LLC, Moscow, Russia

On set VFX supervisor of “Hardcore Henry” movie

Unidade Básica de Saúde Riacho Fundo

This project was conceived as a contribution to the new humanization policy of the SUS (the Brazilian public health system). In this vision, the Basic Health Unit (UBS) is a place of welcome and counselling, being much more a place of caring than...

May 02, 2016

New GDQuest tutorials!

Nathan speaking:

9 new Krita tutorials came out since the last time I made a news post here. And so did my shoulder. Out of its socket. Ouch!

But that’s a whole other story. If I’m here today, it’s to bring you 2 pieces of good news:

1. You may have noticed that the videos that came out on the channel until last week were just an introduction to the course. From now on, the vast majority of the tutorials that are coming out on the GDquest channel will be dedicated to Krita’s tools.

2. There is now a dedicated course page where you can find all of the Krita tutorials. This is also where you will find all of the extra course material that goes along with the training: exercises, cheat sheets, exclusive tips! You can find it at this address: http://gdquest.com/game-art-quest/volume-1/course-public/
Be sure to bookmark the page! I will add exclusive content over time.

Here are some of the latest videos so you can get a sense of what came out lately:

Painting with the freehand brush tool

Navigation on the canvas in Krita

Bonus: Overview of Pureref

Pureref is a powerful free application for Windows, Mac and Linux. It allows you to overlay reference pictures on top of Krita and to arrange them freely. This is a great companion for any artist out there, whether you’re an illustrator or you work in games.

Thank you for your time,

Nathan

April 29, 2016

Post Libre Graphics Meeting


Post Libre Graphics Meeting

What a trip!

What a blast!

This trip report is long overdue, but I wanted to process some of my images to share with everyone before I posted.

It had been a couple of years since I had an opportunity to travel and meet with the GIMP team again (Leipzig was awesome) so I was really looking forward to this trip. I missed the opportunity to head up to the great white North for last years meeting in Toronto.

London Calling

Passport to LGM Passport? Check! Magazine? Check! Ready to head to London!

I was going to attend the pre-LGM photowalk again this year so this time I decided to pack some bigger off-camera lighting modifiers for everyone to play with. Here’s a neat travelling photographer pro-tip: most airlines will let you carry on an umbrella as a “freebie” item. They just don’t specify that it has to be an umbrella to keep the rain off you. So I carried on my big Photek Softlighter II (luckily my light stands fit in my checked luggage). Just be sure not to leave it behind somewhere (which I was paranoid about for most of my trip). Luckily I was only changing planes in Atlanta.

Atlanta Airport International Terminal The new ‘futuristic’ looking Atlanta airport international terminal.

A couple of (bad) movies and hours later I was in Heathrow. I figured it wouldn’t be much trouble getting through border control.

I may have been a little optimistic about that.

The Border Force agent was quite nice and super inquisitive. So much so that I actually began to worry at some point (I think I must have spent almost 20 minutes talking to her) that she might not let me in!

She kept asking what I was coming to London for and I kept trying to explain to her what a “Libre Graphics Meeting“ was. This was almost a tragic comedy. The idea of Free Software did not seem to compute to her and I was sorry I had even made the passing mention. Her attention then turned to my umbrella and photography. What was I there to photograph? Who? Why? (Come to think of it, I should start asking myself those same questions more often… It was an existential visit to the border control.)

In the end I think she got bored with my answers and figured that I was far too awkward to be a threat to anything. Which pretty much sums up my entire college dating life.

Photowalk

In what I hope will become a tradition we had our photowalk the day before LGM officially kicked off and we could not have asked for a better day of weather! It was partly cloudy and just gorgeous (pretty much the complete opposite to what I was expecting for London weather).

Furtherfield Commons

Furtherfield Logo

I want to thank Ruth Catlow (http://ruthcatlow.net/) for allowing us to use the awesome space at Furtherfield Commons in Finsbury Park as a base for our photowalk! They were amazingly accommodating and we had a wonderful time chatting in general about art and what they were up to at the gallery and space.

They have some really neat things going on at the gallery and space so be sure to check them out if you can!

Going for a Walk with Friends

This is one of my favorite things about being able to attend LGM. I get to take a stroll and talk about photography with friends that I only usually get to interact with through an IRC window. I also feel like I can finally contribute something back to these awesome people that provide software I use every day.

IMGP6089 Mairi between Simon and myself (I’m holding a reflector for him).
Photo by Michael Schumacher cbna

We meandered through the park and chatted a bit about various things. Simon had brought along his external flash and wanted to play with off-camera lighting. So we convinced Liam to stand in front of a tree for us and Simon ended up taking one of my favorite images from the entire trip. This was Liam standing in front of the tree under the shade with me holding the flash slightly above him and to the camera right.

Liam by nomis Liam by Simon

We even managed to run into Barrie Minney while on our way back to the Commons building. Aryeom and I started talking a little bit while walking when we crossed paths with some locals hanging out in the park. One man in particular was quite outgoing and let Aryeom take his photo, leading to another fun image!

Upon returning to the Commons building we experimented with some of the pretty window light coming into the building along with some black panels and a model (Mairi). This was quite fun as we were experimenting with various setups for the black panels and speedlights. Everyone had a chance to try some shots out and to direct Mairi (who was super patient and accommodating while we played).

Mairi Natural Light I was having so much fun talking and trying things out with everyone that I didn’t even take that many photos of my own! This is one of my only images of Mairi inside the Commons.
Mairi Natural Light cba

Towards the end of our day I decided to get my big Softlighter out and try a few things in the lane outside the Commons building. Luckily Michael Schumacher grabbed an image of us while we were testing some shots with Mairi outside.

IMGP6108 A nice behind-the-scenes image from schumaml of the lighting setup used below.
Yes, that’s darktable developer hanatos bracing the umbrella from the wind for me!
Photo by Michael Schumacher cbna

I loved the lane receding in the background and thought it might make for some fun images of Mairi. I had two YN-560 flashes in the Softlighter, both firing at around ¾ power. I had to balance the ambient sky with the Softlighter, so I needed the extra power of a second flash (it also helps to keep the cycle times down).

Mairi Finsbury Mairi waiting patiently while we set things up.
Mairi Finsbury cba
50mm f/8.0 1/200 ISO200
Mairi Finsbury Park (In the Lane) Mairi Finsbury Park (In the Lane) cba

The day was awesome and I really enjoyed being able to just hang out with everyone and take some neat photos. The evening at the pub was pretty great also (I got to hang out with Barrie and his friend and have a couple of pints - thanks again Barrie!).

LGM

It never fails to amaze me how every year the LGM organizers manage to put together such a great meeting for everyone. The venue was great and the people were just fantastic at the University of Westminster.

University of Westminster View of the lobby and meeting rooms (on the second floor).
LGM Auditorium Andrea Ferrero (@Carmelo_DrRaw) presenting PhotoFlow in the auditorium!

The opening “State of the Libre Graphics” presentation was given by our (the GIMP team’s) very own João Bueno, who did a fantastic job! João will also be the local organizer for the 2017 LGM in Rio.

Thanks to contributions from community members Kees Guequierre, Jonas Wagner, and Philipp Haegi, I had some great images to use for the PIXLS.US community slides for the “State of the Libre Graphics”. If anyone is curious, here is what I submitted:

PIXLS State of Libre Graphics slides

These slides can be found on our Github PIXLS.US Presentations page (along with all of our other presentations that relate to PIXLS.US and promoting the community).

Speaking of presentations…

Presentation

I was given some time to talk about and present our community to everyone at the meeting. (See embedded slides below):

LGM2016 PIXLS.US Presentation

I started by looking at what my primary motivation was to begin the site and what the state of free software photography was like at that time (or not like). Mainly that the majority of resources online for photographers that were high quality (and focused on high-quality results) were usually aimed at proprietary software users. Worse still, in some cases these websites locked away their best tutorials and learning content behind paywalls and subscriptions. I finished by looking at what was done to build this site and forum as a community for everyone to learn and share with each other freely.

I think the presentation went well and people seemed to be interested in what we were doing! Nate Willis even published an article about the presentation at LWN.net, “Refactoring the open-source photography community”.

Pat David presenting on PIXLS.US at LGM 2016 A photo of me I don’t hate! :)

Exhibition

A nice change this year was the inclusion of an exhibition space to display works by LGM members and artists. We even got an opportunity to hang a couple of prints (for some reason they really wanted my quad-print of pippin). I was particularly happy that we were able to print and display the Green Tiger Beetle by community member Kees Guequierre:

hanatos and houz at LGM Hanatos and houz inspecting the prints at the exhibition.
View of the LGM Exhibition View of the Exhibition. Well attended!
Pippin x5 pippin x5

Portraits

In Leipzig I thought it would be nice to offer portraits/headshots of folks that attended the meeting. I think it’s a great opportunity to get a (hopefully) nice photograph that people can use in social media, avatars, websites, etc. Here’s a sample of portraits from LGM2014 of the GIMP team that sat for me:

GIMPers

In 2014 I was lucky that houz had brought along an umbrella and stand to use, so this time I figured it was only fair that I bring along some gear myself. I had the Softlighter setup on the last couple of days for anyone that was interested in sitting for us. I say us because Marek Kubica (@Leonidas) from the community was right there to shoot with me along with the very famous @Ofnuts (well - famous to me - I’ve lost count of the neat things I’ve picked up from his advice)! Marek took quite a few portraits and managed the subjects very well - he was conversational, engaged, and managed to get some great personality from them.

Still don't know your name A sample portrait by Marek Kubica cba
Better with glasses Better with glasses by Marek Kubica cba

A couple of samples from the images that I shot are here as well: the local organizer Lara, with students from the University! I simply can’t thank them enough for their efforts and generosity in making us feel so welcome.

Lara University of Westminster
Lara University of Westminster
Lara University of Westminster

I’m still working through the portraits I took, but I’ll have them uploaded to my Flickr soon to share with everyone!

GIMPers

One of the best parts of attendance is getting to spend some time with the rest of the GIMP crew. Here’s an action shot during the GIMP meeting over lunch with a neat, glitchy schumaml:

GIMP Meeting Panorama There’s even some darktable nerds thrown in there!

It was great to see everyone at the flat on our last evening there as well…

GIMP and darktable at LGM Everyone spending the evening together! Mitch is missing from his seat in this shot (back there by pippin).

Wrap up

Overall this was another incredible meeting bringing together great folks who are building and supporting Free Software and Libre Graphics. Just my kind of crowd!

I even got a chance to speak a bit with the wonderful Susan Spencer of the Valentina project and we roughed out some thoughts about getting together at some point. It turns out she lives just up the road in the same state as me (Alabama)! This is simply too great to not take advantage of - Free Software Fashion + Photography?! That will have to be a fun story (and photos) for another day…

Keep watching the blog for some more images from the trip - up next are the portraits of everyone and some more shots of the venue and exhibition!

Vermillion Cliffs trip, and other distractions

[Red Toadstool, in the Paria Rimrocks] [Cobra Arch, in the Vermillion Cliffs] I haven't posted in a while. Partly I was busy preparing for, enjoying, then recovering from, a hiking trip to the Vermillion Cliffs, on the Colorado River near the Arizona/Utah border. We had no internet access there (no wi-fi at the hotel, and no data on the cellphone). But we had some great hikes, and I saw my first California Condors (they have a site where they release captive-bred birds). Photos (from the hikes, not the condors, which were too far away): Vermillion Cliffs trip.

I've also been having fun welding more critters, including a roadrunner, a puppy and a rattlesnake. I'm learning how to weld small items, like nail legs on spark plug dragonflies and scorpions, which tend to melt at the MIG welder's lowest setting.

[ Welded puppy ] [ Welded Roadrunner ] [ Welded rattlesnake ]

New Mexico's weather is being charmingly erratic (which is fairly usual): we went for a hike exploring some unmapped cavate ruins, shivering in the cold wind and occasionally getting lightly snowed upon. Then the next day was a gloriously sunny hike out Deer Trap Mesa with clear long-distance views of the mountains and mesas in all directions. Today we had graupel -- someone recently introduced me to that term for what Dave and I have been calling "snail" or "how" since it's a combination of snow and hail, soft balls of hail like tiny snowballs. They turned the back yard white for ten or fifteen minutes, but then the sun came out for a bit and melted all the little snowballs.

But since it looks like much of today will be cloudy, it's a perfect day to use up that leftover pork roast and fill the house with good smells by making a batch of slow-cooker green chile posole.

April 28, 2016

Development Builds Ready To Test

So… Yesterday, Dmitry tried to fix an ancient bug that made it inconvenient to work with dockers, popups and the canvas: sometimes the focus would go haywire, and if you’d try to enter a value in a docker, or zoom while the cursor wasn’t over your image, things would go wrong. Well… There’s this fix, and it needs testing. It really needs testing before we can make it part of Krita 3.0. So, here are new builds for Windows, Linux and OSX. Please help by downloading them and giving them a good work-out. There are a dozen or so other fixes in there as well, but I won’t bore you with those. Please test! Spend an hour or two painting, transforming, swapping brushes, setting colors! You can download all of these and use them without interfering with any of your Krita 2.9 settings.

Download

Windows: Unzip and run the bin/krita.exe executable!

The OSX disk image still has the known issue that if OpenGL is enabled, the brush outline cursor, grids, guides and so on are not visible. We’re working on that, but don’t expect to have rewritten the canvas before 3.0 will be released.

The Linux appimage: after downloading, make the appimage executable and run it. No installation is needed. For CentOS 6 and Ubuntu 12.04, a separate appimage without G’MIC is provided: it is no longer possible to build the latest version of G’MIC in a way that can run on those distributions.

GCompris: New Chess graphics

For my first sponsored post last month on my recently launched Patreon, I’ve decided to update the graphics of the Chess activities. There are three: End of game, Play against a friend and Play against Tux.

I think those are great to start playing chess, and not only for kids. I must say that I had a lot of fun playing with these activities while integrating the new graphics.

First, a screenshot of what it looked like before:

gcompris-chess-01-before

And now two screenshots with new graphics: the first with the new activity icons, and the second is a fullscreen view of the new chessboard and background.

gcompris-chess-03-icons
gcompris-chess-02-after

I also made some style changes to the overlays while moving the pieces, and to the side text and buttons.

If you were looking for a simple chess game to play alone or with a friend, look for the next release soon, or take a look at the build instructions on the website to test the development version.

Have fun playing chess in GCompris!

Also, I’m about to update another activity before the end of this month, so stay tuned on my patreon page for the next news, and don’t forget to subscribe if you want to support this work.

Premier livre sur Krita en français

(Post in french, english version below)

Le mois dernier est sorti mon livre “Dessin et Peinture numérique avec Krita”. Il s’agit du premier livre en francais sur ce logiciel. J’espère qu’il contribuera à faire connaitre ce magnifique logiciel libre de dessin à tous les artistes francophones.

Ce livre est disponible en version imprimée couleur, en version numérique téléchargeable sans DRM ou encore en version consultable en ligne, sur le site de l’éditeur D-Booker. Je remercie d’ailleurs mon éditeur pour m’avoir permis d’écrire ce livre.

dessin-et-peinture-numerique-avec-krita

Last month, my book “Dessin et Peinture numérique avec Krita” was released. It is the first book in French about this software. I hope it will help introduce this wonderful piece of Free Software to all French-speaking artists.

This book is available as a full-color printed version, as a digital download without DRM, or as a version readable online, on the website of the publisher D-Booker. By the way, I’d like to thank my publisher for making it possible to write this book.

April 27, 2016

3rd Party Fedora Repositories and AppStream

I was recently asked how to make 3rd party repositories add apps to GNOME Software. This is relevant if you run an internal private repo for employee tools, or are just kind enough to provide a 3rd party repo for Fedora or RHEL users of your free or non-free applications.

In most cases people are already running something like this to generate the repomd metadata files on a directory of RPM files:

createrepo_c --no-database --simple-md-filenames SRPMS/
createrepo_c --no-database --simple-md-filenames x86_64/

So, we need to actually generate the AppStream XML. This works by exploding any interesting .rpm files, merging together the .desktop file and the .appdata.xml file, and preprocessing some icons. Only applications installing AppData files will be shown in GNOME Software, so you might need to fix your packages before you start.
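A quick way to check those AppData files before you build is the validate command from appstream-util (part of libappstream-glib, the same project that provides appstream-builder); the path below is illustrative and depends on where your packages install their AppData:

appstream-util validate /usr/share/appdata/*.appdata.xml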

appstream-builder			\
	--origin=yourcompanyname	\
	--basename=appstream		\
	--cache-dir=/tmp/asb-cache	\
	--enable-hidpi			\
	--max-threads=1			\
	--min-icon-size=32		\
	--output-dir=/tmp/asb-md	\
	--packages-dir=x86_64/		\
	--temp-dir=/tmp/asb-icons

This takes a second or two (or 40 minutes if you’re trying to process the entire Fedora archive…) and spits out some files to /tmp/asb-md — you probably want to change some things there to make more sense for your build server.

We then have to take the generated XML and the tarball of icons and add it to the repomd.xml master document so that GNOME Software (via PackageKit) automatically downloads the content for searching. This is as simple as doing:

modifyrepo_c				\
	--no-compress			\
	--simple-md-filenames		\
	/tmp/asb-md/appstream.xml.gz	\
	x86_64/repodata/
modifyrepo_c				\
	--no-compress			\
	--simple-md-filenames		\
	/tmp/asb-md/appstream-icons.tar.gz	\
	x86_64/repodata/
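As a quick sanity check (a sketch, assuming the default metadata type names), repomd.xml should now list appstream and appstream-icons entries alongside the usual primary and filelists ones:

grep -o 'type="[^"]*"' x86_64/repodata/repomd.xml | sort -u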

Any questions, please ask. If you’re using a COPR then all these steps are done for you automatically. If you’re using xdg-app already, then this is all magically done for you as well, and automatically downloaded by GNOME Software.

Krita 3.0 BETA builds out!

We have spent the last 20 days making over 80 fixes in Krita since the last Alpha build, and we have decided it is time to enter the Beta phase! We’ve also spent time improving our Windows builds; this should fix all the scary G’MIC crashes on Windows.

Notable Fixes

  • G’MIC is fixed so that it uses OpenMP for multi-threading on Linux and Windows! This is a big performance increase over Krita 2.9, where it was single-threaded. G’MIC is probably still broken on OSX: no need to report that.
  • Mask updating problems have been tackled rigorously!
  • So have transform masks and transform bugs!
  • Scary saving and loading bugs have been fixed. Remember, if you ever have a saving/loading bug with Krita, come to us immediately!
  • The clone and tangent tilt brushes have received fixes for crashes and behavior!
  • Tons of little UI fixes with theme colors and consistency.
  • Several fixes for the shortcuts. They should now be saved and loaded properly.
  • Tablet fixes for dealing with animation. This makes duplicating frames easier, and makes using several tools faster.
  • Several fixes in the grids and guides.
  • And much more… See the full list of bug fixes!
  • Linux and Windows builds now should have full access to all translations and all of the menus should be translated.

From here, we will go through another round of bug fixing. We would love it if you could test out the new build and give us feedback in the chatroom or bug reporter. This testing helps us prevent ‘surprise issues’ when Krita 3.0 is released in the coming weeks.

Kickstarter!

We had intended to get 3.0 ready before the next Kickstarter, but we feel it’s more important to spend another month fixing bugs. We’re still going ahead with the next Kickstarter on schedule, so the May Kickstarter will coincide with the May release of Krita 3.0!

Known Issues

We’re fixing bugs like crazy, but there are still a number of known bugs. Please help us by testing this beta release and checking whether those bugs are still valid! Bug triaging is an awesome way of becoming part of our community.

Download

8 May. UPDATE: we made new builds and updated the links

We haven’t managed to create an MSI installer next to the zip files. But we now have 32-bit builds as well as 64-bit builds, and a new setup that makes it really fast and easy to do new builds. These builds also include, for the first time, the camera raw importer plugin and the PDF import/export plugin.

There are also Windows builds with debug information available, as an experiment from http://files.kde.org/krita/3/windows.

The OSX disk image still has the known issue that if OpenGL is enabled, the brush outline cursor, grids, guides and so on are not visible. We’re working on that, but don’t expect to have rewritten the canvas before 3.0 will be released.

The Linux appimage should run on any Linux distribution released since 2012. After downloading, make the appimage executable and run it. No installation is needed.
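If you haven't used an appimage before, those two steps look like this (the filename here is illustrative; use the name of the file you actually downloaded):

chmod +x krita-3.0-beta.appimage
./krita-3.0-beta.appimage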

Source code:

Git repository:

Plan to level up contributors with Fedora Hubs!

Fedora Hubs

What’s going on with Hubs?

So a little update for those not following closely to get you up to date:

  • We have a big milestone we’re working towards – a working version of Fedora Hubs in time for Flock. It won’t have all of the bells and whistles of the mockups that we’ve presented, but it will be usable, and it will hopefully demonstrate the potential of the app and enable more development.
  • We have a number of fantastic interns coming on board (including Devyani) who will be helping us work on Fedora Hubs this summer.
  • pingou is going to be leading development on fedora-hubs.
  • I’m clearly back from an extended leave this past winter and cranking back on mockups again. 🙂
  • ryanlerch has upgraded hubs to fedora-bootstrap so it has a fresh look and feel (which you’ll see reflected in mockups moving forward.)
  • Overall, we’ve gotten more momentum than ever before with a clear goal and timeline, so you’ll hopefully be seeing a lot more of these juicy updates more frequently!

(“Wait, what is Fedora Hubs?” you ask. This older blog post has a good summary.)

Okay, so let’s move on and talk about Hubs and Badges, particularly in light of some convos we’ve had in the regular weekly Fedora Hubs check-in meetings as well as an awesome hack session Remy D. and jflory7 pulled together last Thursday night.

Fedora Hubs + Badges – what’s old is new again

Behold, a mockup from 2011:

Fedora RPG mockup

In a sense, this is actually an early prototype/idea for Fedora Hubs + Badges integration. Remember that one of the two main goals of Fedora Hubs is to enable new Fedora users and make it easier for them to get bootstrapped into the project. Having tasks in the form of badges awarded for completing a task arranged in “missions” makes it clear and easy for new contributors to know what they can do and what they can do next to gradually build up their skills and basically ‘level up’ and feel happy, helpful, and productive. So there’s a clear alignment between badges and hubs in terms of goals.

So that was 2011, where are we going in 2016?

First thoughts about a badge widget

We have a couple of tickets relating to badges in the hubs issue tracker:

As we’ve been discovering while going through the needsmockup queue and building widgets, most widgets have at least two versions: the user version (what data in this widget relates to me? Across all projects, what bugs are assigned to me?) versus the project version (across all users, what bugs relate to this project?). You can’t have just one badges widget, because certain data related to that widget is more or less useful depending on the context it’s being viewed in.

Today, the Fedora badges widget in Hubs is not unlike the one on the Fedora wiki (I have both the sidebar version and the content side version on my profile.) It’s basically small versions of the badge icon art in a grid (screenshot from the wiki version):

screenshot of wiki badges widget

The mockup below (from issue #85) shows how a little push in working the metadata we’ve got available can provide a clearer picture of the badge earner via the badges he or she has won (left version is compressed, right version is expanded):

mockup of badges widget for hubs profiles

Layering on some more badgery

The above mockups are all just layer 0 stuff though. Layer 0? Yeh, here’s a hokey way of explaining how we’re starting to think about hubs development, particularly in the context of getting something out the door for Flock:

  • Layer 0 – stuff we already have in place in hubs, or refinements on what’s already implemented.
  • Layer 1 – new feature development at a base level – no whizbang or hoozits, and absolutely nothing involving modifications to ‘upstream’ / data-providing apps. (Remember that Hubs is really a front-end in front of fedmsg… we’re working with data coming from many other applications. If a particular type or format of data isn’t available to us, we have to modify the apps putting that data on the bus to be able to get it.)
  • Layer 2 – making things a bit nicer. We’re not talking base model here, we’re getting some luxury upgrades, but being pretty sensible about them. Maybe making some modifications to the provider apps.
  • Layer 3 – solid gold, gimme everything! This is the way we want things, having to make modifications to other apps isn’t of concern.

To get something out the door for Flock… we have to focus mostly on layer 0 and layer 1 stuff. This is hard, though, because when this team gets together we have really awesome, big, exciting ideas and it’s hard to scale back. 🙂 It’s really fun to brainstorm together and come up with those ideas too. In the name of fun, let’s talk through some of the layers we’ve been talking about for badges in hubs in particular, and through this journey introduce some of the big picture ideas we have.

Badges Layer 1: Tagging Along

An oft-requested feature of tahrir, the web app that powers badges.fedoraproject.org, is the notion of grouping badges together in a series (similar to the “missions” in the 2011 mockup above.) The badges in a series can be sequentially ordered, or they may have no particular order; a single series may even mix the two, with some badges ordered and others not.

Here’s an example of badges with a sequential ordering (the series goes on beyond these, but these three illustrate the concept well enough):

Here’s an example of badges that are closely related but have no particular sequence or order to them:

You can see, I hope, how having these formally linked together would be a boon for onboarding contributors. If you earned the first badge artist badge, for example, the page could link you to the next in the series… you could view a summary of it and come to understand you’d need to make artwork for only four more badges to get to the next level. Even if there isn’t a specific order, having a group of badges that you have to complete to get the whole group, like a field of unchecked checkboxes (or unpopped packing bubbles), kind of gives you the urge to complete them all. (Pop!) If a set of badges correspond to a set of skills needed to ramp up for work on a given project, that set would make a nice bootstrapping set that you could make a prerequisite for any new join requests to your project hub. So on and so forth.

So here’s the BIG SECRET:

There’s no badge metadata that links these together at all.

How do we present badges in series without this critical piece of metadata? We use a system already in place – badge tags. Each series could have an agreed-upon tag, and all badges with that tag can become a group, as sketched below. This won’t give us the sequential ordering that some of the badges demand, but it’ll get us a good layer 1 to start. Mockup forthcoming on this, but it will get us a nicer badge widget for project / team hubs (issue #17).
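
Here’s a minimal sketch of tag-based grouping in Python – illustrative only, not the actual tahrir schema; the badge names and tags are for illustration:

    # Tag-based grouping: a "series" is simply all badges sharing an
    # agreed-upon tag. Note that nothing here encodes sequence -- which
    # is exactly the limitation described above.
    badges = [
        {"name": "Badge Muse (Badge Ideas I)", "tags": ["badge-ideas"]},
        {"name": "Badge Off! (Badge Ideas II)", "tags": ["badge-ideas"]},
        {"name": "Speak Up!", "tags": ["onboarding"]},
    ]

    def badges_in_series(badges, tag):
        """Return the group of badges forming the series for a given tag."""
        return [b for b in badges if tag in b["tags"]]

    print([b["name"] for b in badges_in_series(badges, "badge-ideas")])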

Badges Layer 2: Real Badge Metadata

Here’s layer 2 for the feature – and I thought this would be the end of the road before Remy set us straight (more on that in the next section on layer 3):

So this one is somewhat simple. We potentially modify the badges themselves by adding additional fields to their yaml files (example behind link), and modify tahrir, the web app that drives badges.fpo, to parse and store those new fields. I tried to piece together a plan of attack for achieving this in tahrir ticket #343.

The problem here is that this would necessarily require changing the data model. It’s possible, but also a bit of a pain, and not something you want to do routinely – so this has to be done carefully.

Part of this would also involve dropping our overloading of tags. With real metadata we could store descriptions for each badge series, store sequential ordering for individual badges, and get a few other nice things tags couldn’t enable – see the sketch below.
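
For illustration only – this is my guess at what such fields might look like, not the design from tahrir ticket #343 – the extra series metadata on a badge definition could be as simple as:

    # Hypothetical extra fields on a badge definition (a sketch, not the
    # actual tahrir data model):
    badge = {
        "name": "Badge Off! (Badge Ideas II)",
        "series": "badge-ideas",                  # series membership
        "series_description": "Propose ideas for new Fedora badges.",
        "series_position": 2,                     # ordering within the series
    }

    # With real metadata, presenting a series in order becomes trivial:
    def ordered_series(badges, series):
        members = [b for b in badges if b.get("series") == series]
        return sorted(members, key=lambda b: b.get("series_position", 0))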

If we’re changing the data model for layer 2, may as well also change it for *LAYER 3!!*, which I am emphasizing out of excitement.

Layer 3: How the years I spent playing Final Fantasy games finally pay off

skill tree diagram

“A simplified example of a skill tree structure, in this case for the usage of firearms.” Created by user Some_guy on Wikipedia; used under a CC0 license.

Remy D. suggested instead of linear and flat groupings of badges, we also add the ability to link them together into a skill tree. Now, you may already have experience with say the Final Fantasy series, Diablo series, Star Wars: The Old Republic, or other RPG-based games. Related skills are grouped together in particular zones of the tree, and depending on which zones of the tree you have filled out, you sort of fulfill a particular career path or paths. (E.g., in the case of Final Fantasy X… when you work towards filling out Lulu’s sphere grid area, you’re making your character a dark mage. When you work towards filling out Rikku’s area, you’re building skills towards becoming a skilled thief. So on, and so forth.)
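
A skill tree is just a directed graph of badges with prerequisites. Here’s a minimal sketch of the idea; the badge names and structure are illustrative, not from any agreed design:

    # Each node is a badge with prerequisites; a badge is "unlocked"
    # once all of its prerequisites have been earned.
    tree = {
        "involvement":  {"requires": []},
        "speak-up":     {"requires": ["involvement"]},
        "crypto-panda": {"requires": ["involvement"]},
        "docs-writer":  {"requires": ["speak-up", "crypto-panda"]},
    }

    def unlocked(tree, earned):
        """Badges not yet earned whose prerequisites are all satisfied."""
        return [b for b, node in tree.items()
                if b not in earned and all(r in earned for r in node["requires"])]

    print(unlocked(tree, earned={"involvement"}))
    # -> ['speak-up', 'crypto-panda']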

Where this gets cool for Fedora is that we not only can help contributors get started and feeling awesome about their progress by presenting them clear sequences of badges to complete to earn a series (layer 2), but we can also help guide them towards building a ‘career’ or even multiple ‘careers’ (or ‘hats,’ heh) within the project and build their personal skill set as well. Today we already have five main categories for badges in terms of the artwork templates we use, but we can break these down further if need be – as-is, they map neatly to ‘careers’ in Fedora:

  • Community
  • Content
  • Development
  • Quality
  • Events

Fedora contributors could then choose to represent themselves using a radar chart (example displayed below), and others can get a quick visual sense of that contributor’s skillset:
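
For the curious, here’s how such a radar chart might be drawn with matplotlib; the per-category badge counts below are invented for illustration:

    import numpy as np
    import matplotlib.pyplot as plt

    categories = ["Community", "Content", "Development", "Quality", "Events"]
    counts = [12, 5, 3, 7, 9]  # hypothetical badge counts per category

    # Close the polygon by repeating the first point at the end.
    angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False).tolist()
    values = counts + counts[:1]
    angles = angles + angles[:1]

    fig, ax = plt.subplots(subplot_kw={"polar": True})
    ax.plot(angles, values)
    ax.fill(angles, values, alpha=0.25)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(categories)
    plt.show()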

So that’s layer 3. 🙂

Okay, so have you actually thought about what badges should be chained together for what teams?

Yes. 🙂 Remy D. and jflory7 started a list by researching the current onboarding procedures across a number of Fedora teams. Coming up with the actual arrangements of badges within series is important work too, and it has a big influence on whether or not the system actually works for end users! The suggestions Remy and Justin put together are badges new contributors should complete while getting bootstrapped and ready to contribute to the corresponding team.

In some cases these involve existing badges, in some cases additional badges we need to create to support the scenario have been uncovered. (This is great work, because over time badges has tended to be unbalanced in terms of awarding folks involved in packaging and who go to a lot of events more than others. It makes sense – the packaging infrastructure was the first part of Fedora’s infrastructure to get hooked up to fedmsg IIRC, so the data was more readily available.)

Here’s an excerpt of the first-cut of that work by Justin and Remy:

Ambassadors
  1. Get a FAS Account (sign the FPCA) (Involvement)
  2. Create a User Wiki Page
  3. Join Mailing Lists and IRC Channels
  4. Contact a Regional Mentor, get sponsored
  5. Get mentor approval
  6. Attend regional ambassador meeting, introduce yourself
CommOps
  1. If no fas account, create one (Involvement)
  2. Intro to commops mailing list
  3. Join IRC #fedora-commops
  4. Get with a mentor and start writing / editing blog / fed mag articles
Design
  1. Create a FAS account (sign the FPCA) (Involvement)
  2. Join the mailing list, introduce yourself: https://admin.fedoraproject.org/mailman/listinfo/design-team
  3. Claim a ticket in the Design Trac: https://fedorahosted.org/design-team/report/1
  4. Update your ticket, send updated information to Design List
  5. Once work is approved, request an invite for your FAS username for the design-team group on the design team list: https://admin.fedoraproject.org/mailman/listinfo/design-team
  6. Add yourself to the contributors list: http://fedoraproject.org/wiki/Artwork/Contributors
  7. Attend Design Team IRC meeting? (Speak Up)
  8. Subscribe to the design tasks fedocal: https://fedorapeople.org/groups/schedule/f-24/
Documentation
  1. Create a FAS Account (sign the FPCA) (Involvement)
  2. Create a GPG Key, and upload it to keyservers, one of which being keys.fedoraproject.org (Crypto Panda)
  3. Write a self-introduction to the mailing list with some ideas on how you would like to contribute: https://fedoraproject.org/wiki/Introduce_yourself_to_the_Docs_Project
  4. Create your own user wiki page, or update it with new info if one exists from another project (Let Me Introduce Myself Badge)
  5. Attend the freenode.net Internet Relay Chat channel #fedora-meeting meetings on Mondays. (Speak Up Badge)
  6. Hang out on the freenode.net Internet Relay Chat channel #fedora-docs
  7. Interact with other fedora contributors (how to use fasinfo, lookup others wiki user pages, asking for sponsorship)
  8. Make a contribution: Choose an item from this page: https://fedoraproject.org/wiki/How_to_contribute_to_Docs
  9. Post to mailing list, describing which contribution you want to make, asking for feedback
  10. Post to mailing list with links to your contribution
Marketing
  1. Create a FAS Account (and sign the FPCA) (Involvement)
  2. Join the mailing list and introduce yourself: https://fedoraproject.org/wiki/Introduce_yourself_to_the_marketing_group
  3. Choose a marketing task you’d like to help with, and post to the mailing list asking for feedback: https://fedoraproject.org/wiki/Marketing/Schedule
  4. Post to the mailing list with a link to your contribution.
  5. Request to join the marketing group in FAS

Hopefully that gives a better picture of the specifics, and of what some of the bootstrapping series would involve. You can also see here how a skill tree makes more sense than flat badge series – you only need to create one FAS account, participate in IRC once initially, and post to a mailing list once initially to learn how those things work before you move on. With this system you could learn those “skills” while joining any team, and the badges that complete the series for any particular team would be the higher-numbered badges in that team’s bootstrap series. (Hope that makes sense.)

Get involved in this business!

we need your help!

Help us build fun yet effective RPG-like components into a platform that can power the free software communities of the future! How do you start? Sadly, we do not have the badge series / skill tree feature done yet, so I can’t simply point you at that. But here’s what I can point you to:

  • hubs-devel Mailing List – our list is powered by HyperKitty, so you don’t even need to have mail delivered to your inbox to participate! Mostly our weekly meeting minutes are posted here. I try to post summaries so you don’t have to read the whole log.
  • The Fedora Hubs Repo – the code with instructions on how to build a development instance and our issue tracker which includes tickets discussed above and many more!
  • Fedora Hubs weekly check-in meeting – our weekly meeting is at 14:00 UTC on Tuesdays in #fedora-hubs. Come meet us!

What do you think?

Questions? Comments? Feedback? Hit me up in the comments (except, don’t literally hit me. Because mean comments make me cry.)

kasapanda

April 26, 2016

Three Slots Awarded to Krita for Google Summer of Code

GSoC2016Logo

Every year Google puts on a program called Google Summer of Code (GSoC). Students from all over the world try to obtain an internship where they can be paid to work on an open source application. This year we are lucky enough to have had three students accepted into the program! (Who gets accepted depends on how many applications there are, how many slots Google has and how many get distributed to KDE.) These three students will be working on Krita for the summer to improve three important areas in Krita.

Here is what they will be trying to tackle in the coming months.

  1. Jouni Pentikäinen – GSoC Project Overview – “This project aims to bring Krita’s animation features to more types of layers and masks, as well as provide means to generate certain types of interpolated frames and extend the user interface to accommodate these features.” In short, Jouni is going to work on animating opacity, filter layers and maybe even transform masks. Not just that, but he’ll work on a sexy curve time-line element for controlling the interpolation!
  2. Wolthera van Hövell tot Westerflier – GSoC Project Overview – “Currently, Krita’s architecture has all the bells and whistles for wide-gamut editing. Two big items are missing: Softproofing and a good internal colour selector dialogue for selecting colours that are outside of the sRGB colour space.” Wolthera’s work will make illustration-for-print workflows much smoother, letting you preview how likely your RGB image is to keep its details when printed. Furthermore, she’ll work on improving your ability to use filters correctly on wide gamut files, extending Krita’s powerful color core.
  3. Julian Thijsen – GSoC Project Overview – “I aim to seek out the reliance on legacy functionality in the OpenGL engine that powers the QPainter class and to convert this functionality to work using OpenGL 3.2 Core Profile — it needs the compatibility profile at the moment. This will enable OSX to display decorations and will likely allow Krita to run on Mac OS X computers.” This one is best described as an “OpenGL canvas bypass operation”: Krita currently uses OpenGL 2.1 and 3.0. To run on OSX, we’ll need to be able to run everything in OpenGL 3.0 at the least. It is the biggest blocker for full OSX support, and we’re really excited Nimmy decided to take the challenge!

The descriptions might sound a bit technical for a lay person, but these enhancements will make a big impact. We congratulate the accepted students and wish them the best of luck this summer.

April 25, 2016

Interview with Tomáš Marek

DifferentApproach

Could you tell us something about yourself?

Hi, my name is Tomáš Marek. I’m 22 years old, self-taught digital/traditional artist and student, and I currently live in the Czech Republic. Unlike most of the other artists I started drawing pretty late, about 4 years ago, mainly because I never had any sign of a talent for anything, and I had no idea what I wanted to do with my life. It was 4 years ago when I found out about my great-grandfather who was an artist (landscape painter) which was the initial trigger for me, “I want to be an artist”. Since then I draw pretty much every day.

Right now I’m working on my personal project, it will be a graphic novel, can’t tell you much about it yet, and developing my own style which I call #BigNoses.

PurpleGirl

Do you paint professionally, as a hobby artist, or both?

In this moment I see myself more as a hobbyist than a professional, because right now I’m still a student and I’m working on my degree in computer graphics, which is very time consuming. However, from time to time I do some freelance work or commissions. So let’s say I’m both.

What genre(s) do you work in?

I actually never thought about drawing in some specific genre, I pretty much draw what and how I feel that day.

TheLastWatchman

Whose work inspires you most — who are your role models as an artist?

Well, I can’t pick just one artist, there are so many of them. But if I could pick three of them, the first would be my great-grandfather who introduced me to art, the second is Sycra Yasin who taught me that mileage is more important than talent, and the most recent one is Kim Jung Gi because, well, just look at his work and you will know why.

How and when did you get to try digital painting for the first time?

My first time was in 2012 in the house of my friend, who had a Wacom Cintiq 13. He let me try it with Photoshop CS4 and my first impression of it was “I want one”.

What makes you choose digital over traditional painting?

That would probably be the freedom of tools, because I’m the constantly-changing-and-erasing type of guy. With digital, not only is changing stuff fast and clean, but also, as the saying goes, “pixels are cheap”.

TheLonelyFisherman

How did you find out about Krita?

The first time I heard about Krita was about 2 years ago on Sycra’s Youtube channel, I think he drew his self-portrait. But I didn’t pay much attention to it because in that time I was using Photoshop for my paintings, which I didn’t like but it was the only software that I knew how to use.

What was your first impression?

OK, I remember this moment very well. When I first opened Krita, I picked the first brush I saw – I think it was the Color Smudge type – then I started painting, and this is what I had in my mind: “This is weird, but kinda cool, but weird… yeah, I love it”. I hope this sums it up well.

What do you love about Krita?

Mainly these almost traditional-like brush engines, and the fact that it runs on GNU/Linux, Windows and Mac OS.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I would like to see a realtime histogram of all visible layers, not only for one selected layer. And some performance improvement for filters.

What sets Krita apart from the other tools that you use?

I’m a GNU/Linux user and when I wanted to paint I always had to reboot to Windows to use Photoshop for painting, so with Krita I don’t have to use Windows at all.

And as I said before, I love Krita’s brush engines.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

This is hard; it’s like asking parents which is their favourite child, but if I had to choose it would be probably my recent painting from my series #BigNoses called “It’s Something”

It'sSomething

What techniques and brushes did you use in it?

For most of my work I’m using my own brush, which is a rectangle brush with pressure size and softness, plus airbrush and some texture brushes. And my technique is a pretty simple pipeline: lineart → base colors → shades → rendering.

Where can people see more of your work?

I’m frequently posting my work on these sites:

Twitter https://twitter.com/marts_art
Instagram https://www.instagram.com/marts_struggle_with_drawing/
DeviantArt http://marts-art.deviantart.com/

Or on my Youtube channel https://www.youtube.com/channel/UC0099tv90SxjQJ2PtIh7_EQ

Anything else you’d like to share?

I would like to thank you for inviting me to this interview. I really admire the work you’re doing on Krita, so keep going this way.

April 20, 2016

Wed 2016/Apr/20

  • A Cycling Map

    Este post en español

    There are no good, paper cycling maps for my region. There are 1:20,000 street maps for car navigation within the city, but they have absolutely no detail in the rural areas. There are 1:200,000 maps for long trips by car, but that's too big of a scale.

    Ideally there would be high-quality printed maps at 1:50,000 scale (i.e. 1 Km in the real world is 2 cm in the map), with enough detail and some features:

    • Contour lines. Xalapa is in the middle of the mountains, so it's useful to plan for (often ridiculously steep) uphills/downhills.

    • Where can I replenish my water/food? Convenience stores, roadside food stands.

    • What's the quality and surface of the roads? This region is full of rural tracks that go through coffee and sugarcane plantations. The most-transited tracks can be ridden with reasonable "street" tyres. Others require fatter tyres, or a lot of skill, or a mountain bike, as they have rocks and roots and lots of fist-sized debris.

    • Any interesting sights or places? It's nice to have a visual "prize" when you reach your destination, apart from the mountainous landscape itself. Any good viewpoints? Interesting ruins? Waterfalls?

    • As many references as possible. The rural roads tend to look all the same — coffee plants, bananas, sugarcane, dirt roads. Is there an especially big tree at the junction of two trails so you know when to turn? Is there one of the ubiquitous roadside shrines or crosses? Did I just see the high-voltage power lines overhead?

    Make the map yourself, damnit

    For a couple of years now, I have been mapping the rural roads around here in OpenStreetMap. This has been an interesting process.

    For example, this is the satellite view that gets shown in iD, the web editor for OpenStreetMap:

    Satellite view        of rural area

    One can make out rural roads there between fields (here, between the blue river and the yellow highway). They are hard to see where there are many trees, and sometimes they just disappear in the foliage. When these roads are just not visible or 100% unambiguous in the satellite view, there's little else to do but go out and actually ride them while recording a GPS track with my phone.

    These are two typical rural roads here:

    Rural road between plantations Rural road with     view to the mountains

    Once I get back home, I'll load the GPS track in the OpenStreetMap editor, trace the roads, and add some things by inference (the road crosses a stream, so there must be a bridge) or by memory (oh, I remember that especially pretty stone bridge!). Behold, a bridge in an unpaved road:

    Bridge in the editor Bridge in the real        world

    It is also possible to print a map quickly, say, out of Field Papers, annotate it while riding, and later add the data to the map when on the computer. After you've fed the dog.

    Field papers in use

    Now, that's all about creating map data. Visualization (or rendering for printing) is another matter.

    Visualization

    Here are some interesting map renderers that work from OpenStreetMap data:

    OpenTopoMap

    OpenTopoMap. It has contour lines. It is high contrast! Paved roads are light-colored with black casing (cartography jargon for the outline), like on a traditional printed map; unpaved rural tracks are black. Rivers have a dark blue outline. Rivers have little arrows that indicate the flow direction (that means downhill!) — here, look for the little blue arrow where the river forks in two. The map shows things that are interesting in hiking/biking maps: fences, gates, viewpoints, wayside crosses, shelters. Wooded areas, or farmland and orchards, are shaded/patterned nicely. The map doesn't show convenience stores and the like.

    GPSies with Sigma Cycle layer

    GPSies with its Sigma Cycle layer. It has contour lines. It tells you the mountain biking difficulty of each trail, which is a nice touch. It doesn't include things like convenience stores unless you go into much higher zoom levels. It is a low-contrast map as is common for on-screen viewing — when printed, this just makes a washed-out mess.

    Cycle.Travel

    Cycle.Travel. The map is very pretty onscreen, not suitable for printing, but the bicycle routing is incredibly good. It gives preference to small, quiet roads instead of motorways. It looks for routes where slopes are not ridiculous. It gives you elevation profiles for routes... if you are in the first world. That part doesn't work in Mexico. Hopefully that will change — worldwide elevation data is available, but there are some epic computations that need to happen before routing works on a continent-level scale (see the end of that blog post).

    Why don't you take your phone with maps on the bike?

    I do this all the time, and the following gets tedious:

    1. Stop the bike.
    2. Take out the phone from my pocket.
    3. Unlock the phone. Remove gloves beforehand if it's cold.
    4. Wait for maps app to wake up.
    5. Wipe sweat from phone. Wait until moisture evaporates so touch screen works again.
    6. Be ever mindful of the battery, since the GPS chews through it.
    7. Be ever mindful of my credit, since 3G data chews through it.
    8. Etc.

    I *love* having map data on my phone, and I've gone through a few applications that can save map data without an internet connection.

    City Maps 2 Go is nice. It has been made more complex than before with the last few versions. Maps for Mexico don't seem to be updated frequently at all, which is frustrating since I add a lot of data to the base OpenStreetMap myself and can't see it in the app. On the plus side, it uses vector maps.

    MotionX GPS is pretty great. It tries extra-hard not to stop recording when you are creating a GPS track (unlike, ahem, Strava). It lets you save offline maps. It only downloads raster maps from OpenStreetMap and OpenCycleMap — the former is nominally good; the latter is getting very dated these days.

    Maps.Me is very nice! It has offline, vector maps. Maps seem to be updated reasonably frequently. It has routing.

    Go Map!! is a full editor for OpenStreetMap. It can more or less store offline maps. I use it all the time to add little details to the map while out riding. This is a fantastic app.

    Those apps are fine for trips of a few hours (i.e. while the phone's battery lasts), and not good for a full-day trip. I've started carrying an external battery, but that's cumbersome and heavy.

    So, I want a printed map. Since time immemorial there has been hardware to attach printed maps to a bike's handlebar, or even a convenient handlebar bag with a map sleeve on it.

    Render the map yourself, damnit

    The easiest thing would be to download a section of the map from OpenTopoMap, at a zoom level that is useful, and print it. This works in a pinch, but has several problems.

    Maps rendered from OpenStreetMap are generally designed for web consumption, or for moderately high-resolution mobile screens. Both are far from the size and resolution of a good printed map. A laptop or desktop has a reasonably-sized screen, but is low resolution: even a 21" 4K display is only slightly above 200 DPI. A phone is denser, at something between 300 and 400 DPI, but it is a tiny screen... compared to a nice, map-sized sheet of paper — easily 50x50 cm at 1200 DPI.

    ... and you can fold a map into the size of a phablet, and it's still higher-rez and lighter and doesn't eat batteries and OMG I'm a retrogrouch, ain't I.

    Also, web maps are zoomable, while paper maps are at a fixed scale. 1:50,000 works well for a few hours' worth of cycling — in this mountainous region, it's too tiring for me to go much further than what fits in such a map.

    So, my line of thinking was something like:

    1. How big is the sheet of paper for my map? Depends on the printer.

    2. What printed resolution will it have? Depends on the printer.

    3. What map scale do I want? 1:50,000

    4. What level of detail do I want? At zoom=15 there is a nice level of detail; at z=16 it is even clearer. However, it is not until z=17 that very small things like convenience stores start appearing... at least for "normal" OpenStreetMap renderers.

    Zoom levels?

    Web maps are funny. OpenStreetMap normally gets rendered with square tiles; each tile is 256x256 pixels. At zoom=0, the whole world fits in a single tile.

    Whole        world, single tile, zoom=0

    The URL for that (generated) image is http://opentopomap.org/0/0/0.png.

    If we go in one zoom level, to zoom=1, that uber-tile gets divided into 2x2 sub-tiles. Look at the URLs, which end in zoom/x/y.png:

    1/0/0
    http://opentopomap.org/1/0/0.png

    1/1/0
    http://opentopomap.org/1/1/0.png

    1/0/1
    http://opentopomap.org/1/0/1.png

    1/1/1
    http://opentopomap.org/1/1/1.png

    Let's go in one level, to zoom=2, and just focus on the four sub-sub-tiles for the top-left tile above (the one with North America and Central America):

    2/0/0
    http://opentopomap.org/2/0/0.png

    2/1/0
    http://opentopomap.org/2/1/0.png

    2/0/1
    http://opentopomap.org/2/0/1.png

    2/1/1
    http://opentopomap.org/2/1/1.png
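
    As an aside, the zoom/x/y tile that covers a given coordinate can be computed directly. Here is a minimal Python sketch of the standard slippy-map tile math (the longitude for my town below is an approximation):

    import math

    def deg2tile(lat_deg, lon_deg, zoom):
        # Spherical-Mercator tile math: which zoom/x/y tile covers
        # a given latitude/longitude.
        n = 2 ** zoom
        x = int((lon_deg + 180.0) / 360.0 * n)
        y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
        return (x, y)

    # The tile containing Xalapa (about 19.533 N, 96.91 W) at zoom 2:
    print(deg2tile(19.533, -96.91, 2))  # -> (0, 1), the 2/0/1 tile above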

    So the question generally is, what zoom level do I want, for the level of detail I want in a particular map scale, considering the printed resolution of the printer I'll use?

    For reference:

    After some playing around with numbers, I came up with a related formula. What map scale will I get, given a printed resolution and a zoom level?

    (defun get-map-scale (dpi tile-size zoom latitude)
      (let* ((circumference-at-equator 40075016.686)
             (meridian-length (* circumference-at-equator
                                 (cos (degrees-to-radians latitude))))
             (tiles-around-the-earth (exp (* (log 2) zoom)))
             (pixels-around-the-earth (* tiles-around-the-earth tile-size))
             (meters-per-pixel (/ meridian-length pixels-around-the-earth))
             (meters-in-inch-of-pixels (* meters-per-pixel dpi))
             (meters-in-cm-of-pixels (/ meters-in-inch-of-pixels 2.54)))
        (* meters-in-cm-of-pixels 100)))

    (get-map-scale 600      ; dpi
                   256      ; tile-size
                   16       ; zoom
                   19.533)  ; latitude of my town
    53177.66240054532 ; pretty close to 1:50,000

    All right: zoom=16 has a useful level of detail, and it gives me a printed map scale close to 1:50,000. I can probably take the tile data and downsample it a bit to really get the scale I want (from 53177 to 50000).

    Why a tile-size argument (in pixels)? Aren't tiles always 256 pixels square? Read on.

    Print ≠ display

    A 1-pixel outline ("hairline") is nice and visible onscreen, but on a 600 DPI or 1200 DPI printout it's pretty hard to see, especially if it is against a background of contour lines, crop markers, and assorted cartographic garbage.

    A 16x16-pixel icon that shows the location of a convenience store, or a viewpoint, or some other marker, is perfectly visible on a screen. However, it is just a speck on paper.

    And text... 10-pixel text is probably readable even on a high-resolution phone, but definitely not on paper at printed resolutions.

    If I just take OpenTopoMap and print it, I get tiny text, lines and outlines that are way too thin, and markers that are practically invisible. I need something that lets me tweak the thickness of lines and outlines, the size of markers and icons, and the size and position of text labels, so that printing the results will give me a legible map.

    Look at these maps and zoom in. They are designed for printing. They are full of detail, but on a screen the text looks way too big. If printed, they would be pretty nice.

    The default openstreetmap.org uses Mapnik as a renderer, which in turn uses a toolchain to produce stylesheets that determine how the map gets rendered. Stylesheets say stuff like, "a motorway gets rendered in red, 20 pixels thick, with a 4-pixel black outline, and with highway markers such and such pixels apart, using this icon", or "graveyards are rendered as solid polygons, using a green background with this repeating pattern of little crosses at 40% opacity". For a zoomable map, that whole process needs to be done at the different zoom levels (since the thicknesses and sizes change, and just linearly scaling things looks terrible). It's a pain in the ass to define a stylesheet — or rather, it's meticulous work to be done in an obscure styling language.

    Recently there has been an explosion of map renderers that work from OpenStreetMap data. I have been using Mapbox Studio, which has the big feature of not requiring you to learn a styling language. Studio is a web app that lets you define map layers and a style for each layer: "the streets layer comes from residential roads; render that as white lines with a black outline". It lets you use specific values for different zoom levels, with an interesting user interface that would be so much better without all the focus issues of a web browser.

    Screenshot of        Mapbox Studio

    I've been learning how to use this beast — initially there's an airplane-cockpit aspect to it. Things went much easier once I understood the following:

    The main OpenStreetMap database is an enormous bag of points, lines, and "relations". Each of those may have a number of key/value pairs. For example, a point may have "shop=bakery" and "name=Bready McBreadface", while a street may have "highway=residential" and "name=Baker Street".

    A very, uh, interesting toolchain slices that data and puts it into vector tiles. A vector tile is just a square which contains layers of drawing-like instructions. For example, the "streets" layer has a bunch of "moveto lineto lineto lineto". However, the tiles don't actually contain styling information. You get the line data, but not the colors or the thicknesses.

    There are many providers of vector tiles and renderers. Mapzen supplies vector tiles and a nifty OpenGL-based renderer. Mapbox supplies vector tiles and a bunch of libraries for using them from mobile platforms. Each provider of vector tiles decides which map features to put into which map layers.

    Layers have two purposes: styling, and z-ordering. Styling is what you expect: the layer for residential roads gets rendered as lines with a certain color/thickness/outline. Z-ordering more or less depends on the purpose of your map. There's the background, based on landcover information (desert=yellow, forest=green, water=blue). Above that there are contour lines. Above those there are roads. Above those there are points of interest.

    In terms of styling, there are some tricks to achieve common visual styles. For example, each kind of road (motorway, secondary road, residential road) gets two layers: one for the casing (outline), and one for the line fill. This is to avoid complicated geometry at intersections: to have red lines with a black outline, you have a layer with black wide lines, and above it a layer with red narrow lines, both from the same data.
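
    As an illustration of the casing trick, here is roughly what those two layers look like, written out as Python dicts in the shape of Mapbox GL style layers (a sketch from memory of the style spec, not copied from a real style):

    # Two style layers drawing the SAME road data twice: a wide dark
    # line underneath (the casing) and a narrower bright line on top.
    road_casing = {
        "id": "road-casing",
        "type": "line",
        "source-layer": "road",
        "paint": {"line-color": "#000000", "line-width": 8},
    }
    road_fill = {
        "id": "road-fill",
        "type": "line",
        "source-layer": "road",
        "paint": {"line-color": "#ff4444", "line-width": 5},
    }
    # Order matters: the casing layer must come first so it renders
    # underneath the fill.
    layers = [road_casing, road_fill]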

    Styling lines in map        layers

    A vector tileset may not have all the data in the main OpenStreetMap database. For example, Mapbox creates and supplies a tileset called mapbox-streets-v7 (introduction, reference). It has streets, buildings, points of interest like shops, fences, etc. It does not have some things that I'm interested in, like high-voltage power lines and towers (they are good landmarks!), wayside shrines, and the extents of industrial areas.

    In theory I could create a tileset with the missing features I want, but I don't want to waste too much time with the scary toolchain. Instead, Mapbox lets one add custom data layers; in particular, they have a nice tutorial on extracting specific data from the map with the Overpass Turbo tool and adding that to your own map as a new layer. For example, with Overpass Turbo I can make a query for "find me all the power lines in this region" and export that as a GeoJSON blob. Later I can take that file, upload it to Mapbox Studio, and tell it how to style the high-voltage power lines and towers. It's sort of manual work, but maybe I can automate it with the magic of Makefiles and the Mapbox API.
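
    For the curious, the same kind of query can also be run against the public Overpass API directly. Here is a minimal Python sketch; the bounding box around Xalapa is a made-up example:

    import requests

    # Overpass QL: all ways tagged power=line inside a bounding box
    # (south, west, north, east).
    query = """
    [out:json];
    way["power"="line"](19.4,-97.1,19.7,-96.7);
    out geom;
    """
    resp = requests.post("https://overpass-api.de/api/interpreter",
                         data={"data": query})
    for way in resp.json()["elements"]:
        print(way["id"], len(way.get("geometry", [])), "points")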

    Oh, before I forget: Mapbox uses 512-pixel tiles. I don't know why; maybe it is to reduce the number of HTTP requests? In any case, that's why my little chunk of code above has a tile-size argument.

    So what does it look like?

    My map

    This is a work in progress. What is missing:

    • Styling suitable for printing. I've been tweaking the colors and line styles so that the map is high-contrast and legible enough. I have not figured out the right thicknesses, nor text sizes, for prints yet.

    • Adding data that I care about but that is not in mapbox-streets-v7: shrines, power lines, industrial areas, municipal boundaries, farms, gates, ruins, waterfalls... these are available in the main OpenStreetMap database, fortunately.

    • Add styling for things that are in the vector tiles, but don't have a visible-enough style by default. Crops could get icons like sugarcane or coffee; sports fields could get a little icon for football/baseball.

    • Figure out how to do pattern-like styling for line data. I want cliffs shown somewhat like this (a line with little triangles), but I don't know how to do that in Mapbox yet. I want little arrows to show the direction in which rivers flow.

    • Do a semi-exhaustive ride of all the rural roads in the area for which I'll generate the map, to ensure that I haven't missed useful landmarks. That's supposed to be the fun part, right?

    References

    The design of the Mapbox Outdoors style. For my own map, I started with this style as a base and then started to tweak it to make it high-contrast and have better colors for printing.

    Technical discussion of generating a printable city map — a bit old; uses TileMill and CartoCSS (the precursors to Mapbox Studio). Talks about dealing with SVG maps, large posters, overview pages.

    An ingenious vintage German cycle map, which manages to cram an elevation profile on each road (!).

    The great lost map scale argues that 1:100,000 is the best for long-distance, multi-day cyclists, to avoid carrying so many folded maps. Excellent map pr0n here (look at the Snowdonia map — those hand-drawn cliffs!). I'm just a half-a-day cycling dilettante, so for now 1:50,000 is good for me.

    How to make a bike map focuses on city-scale maps, and on whether roads are safe or not for commuters.

    Rendering the World — how tiling makes it possible to render little chunks of the world on demand.

    Introducing Tilemaker: vector tiles without the stack. Instead of dealing with Postgres bullshit and a toolchain, this is a single command-line utility (... with a hand-written configuration file) to slice OpenStreetMap data into layers which you define.

    My cycling map in Mapbox Studio.

Upgrading Fedora 23 to 24 using GNOME Software

I’ve spent the last couple of days fixing up all the upgrade bugs in GNOME Software and backporting them to gnome-3-20. The idea is that we backport gnome-software plus a couple of the deps into Fedora 23 so that we can offer a 100% GUI upgrade experience. It’s the first time we’ve officially transplanted an n+1 GNOME component into an older release (ignoring my unofficial Fedora 20 whole-desktop backport COPR) and so we’re carefully testing for regressions and new bugs.

If you do want to test upgrading from F23 to F24, first make sure you’ve backed up your system. Then, install and enable this COPR and update gnome-software. This should also install a new libhif, libappstream-glib, json-glib and PackageKit and a few other bits. If you’ve not done the update offline using [the old] GNOME Software, you’ll need to reboot at this stage as well.

Fire up the new gnome-software and look at the new UI. Actually, there’s not a lot new to see as we’ve left new features like the ODRS reviewing service and xdg-app as F24-only features, so it should be mostly the same as before but with better search results. Now go to the Updates page which will show any updates you have pending, and it will also download the list of possible distro upgrades to your home directory.

As we’re testing upgrading to a pre-release, we have to convince gnome-software that we’re living in the future. First, open ~/.cache/gnome-software/3.20/upgrades/fedora.json and search for f24. Carefully change the Under Development string to Active, then save the file. Log out, log back in and launch gnome-software again, or wait for the notification from the shell. If all has gone well you should see a banner telling you about the new upgrade. If you click Download, go and get a coffee and start baking a cake, as it’s going to take a long time to download all that new goodness. Once complete, just click Install, which prompts a reboot where the packages will be installed. For this step you’ll probably want to bake another cake. We’re not quite in an atomic instant-apply world yet, although I’ll be talking a lot more about that for Fedora 25.
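
If you’d rather script that edit than hunt through the JSON by hand, a blunt little Python snippet like this does the job (assuming nothing else in the file contains the string “Under Development”):

    import os

    path = os.path.expanduser(
        "~/.cache/gnome-software/3.20/upgrades/fedora.json")

    with open(path) as f:
        text = f.read()

    # Blunt but effective: flip the release state string in place.
    with open(path, "w") as f:
        f.write(text.replace("Under Development", "Active"))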

With a bit of luck, after 30 minutes staring at a progressbar the computer should reboot itself into a fresh new Fedora 24 beta installation. Success!

Screenshot_Fedora23-Upgrade_2016-04-20_15:23:27

If you spot any problems or encounter any bugs, please let me know either in bugzilla, email or IRC. I’ve not backported all the custom CSS for the upgrade banner just yet, but this should be working soon. Thanks!

April 19, 2016

New Krita 3.0 Alpha/Development Windows Builds

Until now, we have made all Windows releases of Krita with Microsoft’s Visual C++ compiler. Krita 2.9 was built with the 2012 version, 3.0 with the 2015 version of Visual C++. Both compilers have problems building the G’Mic library, and, recently, the latest version of the Vc library. G’Mic provides a wide range of filters and Vc lets us optimize the code that blends pixels and creates brush masks.

We cannot fix the libraries, we cannot fix the Microsoft compiler, and we don’t want to make Krita slower and less functional, so there was only one solution left: find a different compiler. That is pretty scary so late in the development cycle, because every compiler has different quirks and bugs. We are actually making these builds on Linux.

We have now prepared new builds for you to test. There are four builds: with and without debugging information, and for 32 and 64 bits Windows. If you encounter a crash, please download the debug build and try to reproduce the crash. Compared to 3.0 alpha, a number of bugs are fixed. These builds also have more features: the camera raw import plugin and the PDF import/export plugin are included. The 32 bits build now also includes G’Mic, which was completely impossible with Visual Studio. The only feature present on Linux that is still not available on Windows is OpenJPEG support, for loading and saving .jp2 files (not jpeg files, that is present and correct).

You can find all builds here: http://files.kde.org/krita/3/windows/devbuilds. You can verify your downloads with the sha1 checksum files. Direct downloads of the non-debug builds:

To run, simply extract the zip file somewhere, navigate into the bin folder and execute krita.exe there. There is no need anymore for the Visual Studio C++ runtimes! Krita 3.0 looks in a different location for its configuration and resource files, so your existing 2.9 installation is completely untouched.

Setting up the builds was easier than expected, thanks to the MXE project, but it still took time away from fixing bugs, so we’re considering extending the 3.0 release schedule with another month. If you’re on Windows, please give these builds a good, thorough work-out!

April 14, 2016

A logo design process

Designing a logo can be intimidating and the process full of alternating between hope and despair. I recently designed a logo for the team of a friend I work with, and for whatever reason (perhaps mindfulness practice) I decided to try to track the process visually and note my general thinking in choosing a forward direction.

This was just one piece (albeit a major one) of a longer process. This part was just me by myself coming up with an initial proposal for discussion. I think brainstorming as a team effort produces the best results – here I took some initial direction from the team in terms of what they wanted, the look they were going for, the symbols they wanted embedded in the logo. The initial concept in the first frame reflects that opening conversation – they wanted the logo to relate to carbon, they wanted something clean (the Ansible logo was brought up as an example of a style they liked), and they wanted it to somehow depict interoperability.

The process below shows how I came up with an initial proposal from that starting point, and then we worked together back and forth to come up with the final version (which isn’t even shown here. 🙂 )

You can kind of see here, it’s a general cycle of:

  • Logic/brain: what might work conceptually?
  • Creativity/hand: what would that concept look like visually?
  • Evaluation/eyes: does that visual relate the idea? Or does it relate something else (that we don’t want) too strongly?
  • Rinse, repeat.

Anyway, here it is, an example of my process; I thought it might be interesting to look at. (Apologies for the large image, and for whatever it’s worth, Inkscape was used for this; sometimes I do pencil sketches first but I wasn’t feeling it this time.)

logo design comic

April 13, 2016

Horde of Cuteness fundraiser

Guest post by Justin Nichol. More game art with Krita! Support the project here on Indiegogo!

IGG_Header-800

My name is Justin and I have been making illustrations and game art with open source software for some time, and release my work under open licenses for others to use. I use Linux and Blender, and I initially used GIMP but have now transitioned to use Krita, given its more advanced tools and better workflow for painters.
hero_warrior-800
I began Horde of Cuteness as a project through my Patreon (https://www.patreon.com/justinnichol), and created an initial set of 12 figures, but have since added another two for a total of 14. My  Patreon is perfect for creating small packs of art in many styles, but with Indiegogo I can dig deeper into individual collections.
monster_banshee-800
If the preliminary funding is obtained from this campaign, I can eschew freelance for the time necessary to add an additional 10 characters to the collection (2 heroes, 5 monsters, and 3 boss monsters chosen by the backers). I do this because I want to create large packs of art for game designers, writers and other creative people.
hero_bard-800
Backers can also grab rewards that allow them to choose a monster for me to paint, to become one of the heroes themselves, or even to add a whole new boss monster to the campaign.
monster_goblin-800
All the characters will be released under a Creative Commons Attribution-ShareAlike 4.0 license, made available as .pngs with transparent backgrounds, and will include .kra source files for editing the characters yourself. All of the images will be 2000px by 2000px.
hero_paladin-800
I’ve gotten over half the initial funding I hoped for in just over a week, and I think support from the open source community could push me over the top.
monster_orc-800
The characters I have already created are available on my website: freeforall.cc


April 11, 2016

Cross-compiling Krita using MXE

Writing code that builds with multiple compilers is good way to catch errors, improve code quality and conformance. Or so I have always been taught. Hence, when we ported Krita to Windows, we ported it to the native compiler for Windows, Microsoft Visual C++. That took some doing, but in the process we found lots of little things that, once fixed, improved Krita's code. When we ported Krita to OSX, where the native compiler is clang, the same happened.

And then we added two dependencies to Krita that have trouble with Visual C++: G'Mic and Vc. G'Mic implements a parser for a custom scripting language for writing filters, and that parser is written in a way that makes life really hard for Visual C++. Basically, the 32 bits builds never worked and the 64 bits builds need a stack of about a gigabyte to parse the scripts. And Vc, a library to add vectorization/simd support easily, from version 1.0 and up just doesn't build at all on Windows.

It's probably not a coincidence that both are heavy users of templates, and in the case of Vc, of C++11. But Krita needs both libraries: our users love all the filters and tools the G'Mic plugin gives them, and without Vc, our brushes and pixel compositing code becomes really slow.

What could we do? Hair was liberally pulled and not a few not unmanly tears shed. We could try to build Krita on Windows using one of the GCC ports, or we could try to build Krita on Windows using clang. We already tried to use Intel's icc to build Krita, but that broke already when trying to build Qt. (Though that was in the early Qt 4 days, so maybe things are better now.)

But building on Windows will always be slower, because of the slow terminal and the slow file system, and we know that the Windows releases of Gimp and LibreOffice are actually built on Linux. Cross-compiled for Windows. If complex projects like those can manage, we should be able to manage too.

Unfortunately, I'm a bear^Wdeveloper of very little brain, and figuring out which blogs and articles are up to date and relevant for OpenSUSE Leap was already quite hard, and when I saw that the mingw packages for Leap were a year old, I wasn't prepared to even put a lot of time into that.

Enter MXE. It's a kind of KDE's emerge, but for Linux: a bunch of Makefiles that can make a cross-platform build. It comes with a huge bunch of pre-configured libraries, though unfortunately not everything we need.

So, using MXE, I built Qt and most dependencies. Still missing are Vc, OpenColorIO, GSL, Poppler-qt5 and openjpeg. I also needed to remove some of the hacks we did to make Krita compile with MSVC: we had added a couple of dummy header files to provide things like ntohl and friends (these days superseded by QtEndian). A 3rd-party library we embed, xcftools, had its own equivalent of that stuff. But apart from that...

Everything mostly built out of the box, and the result runs in Wine (as evidenced by the "native" file dialog):

What's left to do? Build the remaining dependencies, add translations, create a packaging script (where's windeployqt?), and test.

Interview with Esfenodon

Yaiza_Happy_Esfenodon____

Could you tell us something about yourself?

My name is Iván R. Arós, alias “Esfenodon”. I have been working in computer graphics for the last 13 years, but I think I’ve loved art from the first day I can remember something.

I studied Arts at Vigo University, and I have always used traditional tools to draw and paint. From the first I realised that, as much as I painted with acrylics or oils, watercolours or ink, I became better with digital tools.

That’s something I always recommend in my work.

Actually I work as Art Director in a small studio at the Vigo University. In this studio, Divulgare, we create scientific divulgative videos. 3D, 2D, real video… In this studio I had the incredible opportunity to join my two passions, science and art, working closely with scientists. Every day I paint something, and have five or more ideas I would like to paint the next day.

Do you paint professionally, as a hobby artist, or both?

I think that when you paint professionally it’s almost impossible not to do it as a hobby too. Maybe I paint different things at work and at home, but for me it is the same. Professional and hobby. Both.

What genre(s) do you work in?

That’s a very complex question for me. I think I like realism, but when I’m painting I’m always searching for the way to tell a story in a beautiful way using as few elements as possible. But constantly I come back to realism. So I’m travelling from one style to other all the time. Scientific illustration, people, sci-fi scenes, or just cartoon characters. I think I don’t have a genre or style because I want to try many different things. My work is about joining science and art, to create some kind of visual attraction while the observer obtains information, so I’m always searching for some beautiful style that allows me to tell the story the scientists want to tell. But when I’m at home, I simply draw and paint.

Whose work inspires you most — who are your role models as an artist?

I have so many role models I don’t know where to start. Every time I’m watching someone work I think “oh, that’s awesome, I should try that”, or “hey, I think I can combine this style with that other”. But if I have to name some names, surely I would say Lluís Bargalló, an awesome illustrator in Spain, Ink Dwell, who is putting fresh ideas into scientific illustration, the light of Goro Fujita and Ilya Kuvshinov, the atmosphere of Loish, and many many others.

How and when did you get to try digital painting for the first time?

I have always been painting with computers. I remember Deluxe Paint on an Amstrad PC so many years ago. Just with 16 colours. It was fun. But I started painting seriously around 2005. I spent some years painting with a mouse. With my first tablet in 2009 I started to understand that digital and traditional painting were almost the same.

What makes you choose digital over traditional painting?

Really I haven’t chosen digital over traditional. I think that when you like to paint you always paint. Digital, traditional, or in the sand using your toe. You can’t stop painting. Maybe I do more digital than traditional because of the speed. It allows me to test more styles faster, and make crazy stuff that I can throw on a hard drive and think about it later. I’ve got a computer or a tablet everywhere. It’s great having the possibility to paint any time. Digital allows it. And I enjoy it a lot. Digital painting is enjoyable.

How did you find out about Krita?

I’m always searching for the way to use as much open source software as possible at University. Maybe it was as simple as searching google for it. Krita software, hmmm, interesting, let’s give it a try. Maybe the Krita name was familiar to me because some time ago I read about a collection of open source tools for audiovisual creativity.

What was your first impression?

My first impression was “wow”. The software was very fast. Very light on the computer. And I felt like I was drawing on paper. This is important to me, because with other software I usually draw on paper, then scan, then paint. But I like to draw directly on the computer, so it’s frustrating when I can’t draw digitally as I draw on paper. With Krita I can draw.

What do you love about Krita?

I have been using so much software. Every time I need to start using new tools I feel tired. New interfaces, new tools. I work as a 3D animator too, and I understand that I must stay up to date, reading and learning. But sometimes I just want to concentrate on the artistic field, not the technical. Krita is just that. Open. Paint. It’s great to have a very simple program that allows you to have all the tools you need at the same time.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I like Krita a lot. Maybe more integration with tablets, or smooth some controls with the most used digital tablets in the market. Sometimes it’s hard to set up some express keys. Maybe it’s not a Krita problem but the problem of the tablet maker :-)

What sets Krita apart from the other tools that you use?

Speed. I’m abandoning other tools because I love speed. Krita is relaxing software for me. Everything worked as expected. In a few hours I was painting with Krita as my everyday software.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Maybe P-B-P-B. It was very fun: a pretty girl and a beautiful owl. I like owls. I was like “I think I can do scientific illustrations in Krita… why not put an owl here?” To be honest, someone told me to put in an owl, hahaha.

P-B-P-B___Esfenodon___

What techniques and brushes did you use in it?

I like to use hard flat brushes. I don’t want to hide the brush. Strokes can give a lot of expression. Block basic is a great brush. I often start out using that. It’s easy to create custom brushes but some of the basic presets are great.

I try not to use many layers or “control z”. If something is wrong I try to resolve it by painting over it. Sometimes you find something interesting that way.

P-B-P-B_process__Esfenodon___

Where can people see more of your work?

I have a flickr account (https://www.flickr.com/photos/100863111@N04/). It’s not a professional flickr. It is just where I upload some experiments.

I upload a lot of stuff to my twitter account too: @esfenodon. I’m always painting or drawing something.

My professional work can be seen in the Vimeo Divulgare channel: https://vimeo.com/user1296710

Anything else you’d like to share?

I would like to thank all the people who make Krita possible, and recommend it for everyone who wants to try digital painting. Thanks!

April 09, 2016

Krita 3.0: First Alpha Release

On the road to Krita 3.0, we’re releasing today the first alpha version. This has all the features that will be in 3.0, and contains the translations, but is still unstable. We’re fixing bugs all the time, but there’s still plenty to fix! That said, we think we nailed the worst problems and we’d love for you all to give this version a try! The final release is planned for May 1st.

Screenshot_20160409_212626

What’s new?

A short overview of new features compared to Krita 2.9:

  • Tools for creating hand-drawn animations
  • Instant Preview for working with big brushes and large canvases
  • Rulers, guides, grids and snapping
  • Revamped layer panel
  • Loading and saving GIMP brushes
  • and much, much more…

Krita 3.0 is also based on Qt 5 and the KF5 Framework libraries.

Since the last development build, we focused on fixing bugs and improving performance.

There are also some changes and improvements that we made outside of the mountain of fixes. This is a list of improvements since the last pre-alpha release.

New Features

  • You can now move multiple selected layers at once
  • And move masks with Ctrl + PgUp/PgDn
  • Updated to G’MIC 1.7 (see release notes)
  • Updated Design templates

We also removed the print and print preview options; printing in Krita has never worked well, and after the port to Qt 5 it broke completely.

User Interface and Usability Improvements

  • Splash screen shows what area is loading on startup
  • Updated Grids and Guides Tool Options UI elements
  • Some checkboxes have been replaced with lock icons like the crop and geometry tool options
  • Global Input pressure curve now has labels with the axes. ( Settings > Configure Krita > Tablet Settings ).
  • Use highlighted color for the selected tool in toolbox (easier to see)
  • The Resource manager now has separate buttons for importing resources. This improves stability in this area.

Screenshot_20160409_212649

Known Issues

We’re fixing bugs like crazy, but there are still a number of known bugs. Please help us by testing this alpha release and checking whether those bugs are still valid! Bug triaging is an awesome way of becoming part of our community.

Download

There are two Windows versions: an MSI installer and a portable zip archive. The MSI installer also contains an explorer shell extension that allows you to see thumbnails of Krita files in Explorer. The shell extension was written by Alvin Wong.

The OSX disk image still has the known issue that if OpenGL is enabled, the brush outline cursor, grids, guides and so on are not visible. We’re working on that, but don’t expect the canvas to have been rewritten before 3.0 is released.

The Linux appimage should run on any Linux distribution released since 2012. After downloading, make the appimage executable and run it. No installation is needed.

Since this is the first official 3.0 release, we also have source code!