April 29, 2016

Vermillion Cliffs trip, and other distractions

[Red Toadstool, in the Paria Rimrocks] [Cobra Arch, in the Vermillion Cliffs] I haven't posted in a while. Partly I was busy preparing for, enjoying, then recovering from, a hiking trip to the Vermillion Cliffs, on the Colorado River near the Arizona/Utah border. We had no internet access there (no wi-fi at the hotel, and no data on the cellphone). But we had some great hikes, and I saw my first California Condors (there's a site there where captive-bred birds are released). Photos (from the hikes, not the condors, which were too far away): Vermillion Cliffs trip.

I've also been having fun welding more critters, including a roadrunner, a puppy and a rattlesnake. I'm learning how to weld small items, like nail legs on spark plug dragonflies and scorpions, which tend to melt at the MIG welder's lowest setting.

[ Welded puppy ] [ Welded Roadrunner ] [ Welded rattlesnake ]

New Mexico's weather is being charmingly erratic (which is fairly usual): we went for a hike exploring some unmapped cavate ruins, shivering in the cold wind and occasionally getting lightly snowed upon. Then the next day was a gloriously sunny hike out Deer Trap Mesa with clear long-distance views of the mountains and mesas in all directions. Today we had graupel -- someone recently introduced me to that term for what Dave and I have been calling "snail" or "how", since it's a combination of snow and hail: soft balls of hail like tiny snowballs. They turned the back yard white for ten or fifteen minutes, but then the sun came out for a bit and melted all the little snowballs.

But since it looks like much of today will be cloudy, it's a perfect day to use up that leftover pork roast and fill the house with good smells by making a batch of slow-cooker green chile posole.

April 28, 2016

Development Builds Ready To Test

So… Yesterday, Dmitry tried to fix an ancient bug that made it inconvenient to work with dockers, popups and the canvas: sometimes the focus would go haywire, and if you tried to enter a value in a docker or zoom while the cursor wasn’t over your image, things would go wrong. Well… There’s this fix, and it needs testing. It really needs testing before we can make it part of Krita 3.0. So, here are new builds for Windows, Linux and OSX. Please help by downloading them and giving them a good work-out. There are a dozen or so other fixes in there as well, but I won’t bore you with those. Please test! Spend an hour or two painting, transforming, swapping brushes, setting colors! You can download all of these and use them without interfering with any of your Krita 2.9 settings.

Download

Windows: Unzip and run the bin/krita.exe executable!

The OSX disk image still has the known issue that if OpenGL is enabled, the brush outline cursor, grids, guides and so on are not visible. We’re working on that, but don’t expect to have rewritten the canvas before 3.0 will be released.

The Linux appimage: after downloading, make the appimage executable and run it. No installation is needed. For CentOS 6 and Ubuntu 12.04, a separate appimage without G’Mic is provided: it is no longer possible to build the latest version of G’Mic in a way that can run on those distributions.

GCompris: New Chess graphics

For my first sponsored post last month on my recently launched Patreon, I decided to update the graphics of the Chess activities. There are three: End of game, Play against a friend and Play against Tux.

I think these activities are a great way to start playing chess, and not only for kids. I must say that I had a lot of fun playing with them while integrating the new graphics.

First, a screenshot of what it looked like before:

gcompris-chess-01-before

And now two screenshots with new graphics: the first with the new activity icons, and the second is a fullscreen view of the new chessboard and background.

gcompris-chess-03-icons
gcompris-chess-02-after

I also made some style changes to the overlays while moving the pieces, and to the side text and buttons.

If you were looking for a simple chess game to play alone or with a friend, look for the next release soon, or take a look at the build instructions on the website to test the development version.

Have fun playing chess in GCompris!

Also, I’m about to update another activity before the end of this month, so stay tuned to my Patreon page for the next news, and don’t forget to subscribe if you want to support this work.

Premier livre sur Krita en français

(Post in French, English version below)

Le mois dernier est sorti mon livre “Dessin et Peinture numérique avec Krita”. Il s’agit du premier livre en français sur ce logiciel. J’espère qu’il contribuera à faire connaître ce magnifique logiciel libre de dessin à tous les artistes francophones.

Ce livre est disponible en version imprimée couleur, en version numérique téléchargeable sans DRM ou encore en version consultable en ligne, sur le site de l’éditeur D-Booker. Je remercie d’ailleurs mon éditeur pour m’avoir permis d’écrire ce livre.

dessin-et-peinture-numerique-avec-krita

Last month, my book “Dessin et Peinture numérique avec Krita” was released. It is the first book in French about this software. I hope it will help introduce this wonderful Free Software drawing application to all French-speaking artists.

This book is available as a full-color printed edition, as a DRM-free digital download, or as an online version on the website of the publisher, D-Booker. I would also like to thank my publisher for making it possible to write this book.

April 27, 2016

3rd Party Fedora Repositories and AppStream

I was recently asked how to make 3rd party repositories add apps to GNOME Software. This is relevant if you run an internal private repo for employee tools, or are just kind enough to provide a 3rd party repo for Fedora or RHEL users for your free or non-free applications.

In most cases people are already running something like this to generate the repomd metadata files on a directory of RPM files:

createrepo_c --no-database --simple-md-filenames SRPMS/
createrepo_c --no-database --simple-md-filenames x86_64/

So, we need to actually generate the AppStream XML. This works by exploding any interesting .rpm files, merging together the .desktop file and the .appdata.xml file, and preprocessing some icons. Only applications installing AppData files will be shown in GNOME Software, so you might need to fix your packages before you start.

appstream-builder			\
	--origin=yourcompanyname	\
	--basename=appstream		\
	--cache-dir=/tmp/asb-cache	\
	--enable-hidpi			\
	--max-threads=1			\
	--min-icon-size=32		\
	--output-dir=/tmp/asb-md	\
	--packages-dir=x86_64/		\
	--temp-dir=/tmp/asb-icons

This takes a second or two (or 40 minutes if you’re trying to process the entire Fedora archive…) and spits out some files to /tmp/asb-md — you probably want to change some things there to make more sense for your build server.

We then have to take the generated XML and the tarball of icons and add them to the repomd.xml master document so that GNOME Software (via PackageKit) automatically downloads the content for searching. This is as simple as doing:

modifyrepo_c				\
	--no-compress			\
	--simple-md-filenames		\
	/tmp/asb-md/appstream.xml.gz	\
	x86_64/repodata/
modifyrepo_c				\
	--no-compress			\
	--simple-md-filenames		\
	/tmp/asb-md/appstream-icons.tar.gz	\
	x86_64/repodata/
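
If you want a quick sanity check that the metadata landed where GNOME Software will look for it, something like this works (a rough Python sketch, not part of the official toolchain; it only assumes the paths used in the commands above, and that modifyrepo_c named the entries after the file basenames):

#!/usr/bin/python3
# Verify that repomd.xml now lists the "appstream" and "appstream-icons"
# metadata entries added by modifyrepo_c above.
import xml.etree.ElementTree as ET

NS = "{http://linux.duke.edu/metadata/repo}"
root = ET.parse("x86_64/repodata/repomd.xml").getroot()

types = {data.get("type") for data in root.findall(NS + "data")}
for wanted in ("appstream", "appstream-icons"):
    print(wanted, "OK" if wanted in types else "MISSING")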

Any questions, please ask. If you’re using a COPR then all these steps are done for you automatically. If you’re using xdg-app already, then this is all magically done for you as well, and automatically downloaded by GNOME Software.

Krita 3.0 BETA builds out!

We have spent the last 20 days making over 80 fixes in Krita since the last Alpha build. We have decided it is time to enter the Beta phase! We’ve also spent time on improving our Windows builds, which should fix all the scary G’Mic crashes on Windows.

Notable Fixes

  • G’Mic is fixed so that it uses OpenMP for multi-threading on Linux and Windows! This is a big performance increase over Krita 2.9, where it was single-threaded. G’Mic is probably still broken on OSX; no need to report that.
  • Mask updating problems have been tackled rigorously!
  • So have transform masks and transform bugs!
  • Scary saving and loading bugs have been fixed. Remember, if you ever have a saving or loading bug with Krita, come to us immediately!
  • The clone and tangent tilt brushes got crash and behavior fixes!
  • Tons of little UI fixes with theme colors and consistency.
  • Several fixes for the shortcuts. They should now be saved and loaded properly.
  • Tablet fixes for animation work. These make duplicating frames easier and several tools faster.
  • Several fixes in the grids and guides.
  • And much more… See the full list of bug fixes!
  • Linux and Windows builds should now have full access to all translations, and all of the menus should be translated.

From here, we will go through another round of bug fixing. We would love it if you could test out the new build and give us feedback in the chatroom or bug reporter. This testing helps us prevent ‘surprise issues’ when Krita 3.0 is released in the coming weeks.

Kickstarter!

We had intended to get 3.0 ready before the next Kickstarter, but we feel it’s more important to spend another month fixing bugs. We’re still going ahead with the next Kickstarter on schedule, so the May Kickstarter will coincide with the May release of Krita 3.0!

Known Issues

We’re fixing bugs like crazy, but there are still a number of known bugs. Please help us by testing this beta release and checking whether those bugs are still valid! Bug triaging is an awesome way of becoming part of our community.

Download

We haven’t managed to create an MSI installer next to the zip files yet. But we now have 32-bit as well as 64-bit builds, and a new setup that makes it really fast and easy to do new builds. These builds also include, for the first time, the camera raw import plugin and the PDF import/export plugin.

There are also Windows builds with debug information available, as an experiment from http://files.kde.org/krita/3/windows.

The OSX disk image still has the known issue that if OpenGL is enabled, the brush outline cursor, grids, guides and so on are not visible. We’re working on that, but don’t expect to have rewritten the canvas before 3.0 will be released.

  • Disk Image: OSX Beta 1 (sha1: 32098758addbda2971bfbf81420a9bfa70e1b981)

The Linux appimage should run on any Linux distribution released since 2012. After downloading, make the appimage executable and run it. No installation is needed.

Source code:

Git repository:

Plan to level up contributors with Fedora Hubs!

Fedora Hubs

What’s going on with Hubs?

So a little update for those not following closely to get you up to date:

  • We have a big milestone we’re working towards – a working version of Fedora Hubs in time for Flock. It won’t have all of the bells and whistles of the mockups that we’ve presented, but it will be usable, and it will hopefully demonstrate the potential of the app and enable more development.
  • We have a number of fantastic interns coming on board (including Devyani) who will be helping us work on Fedora Hubs this summer.
  • pingou is going to be leading development on fedora-hubs.
  • I’m clearly back from an extended leave this past winter and cranking back on mockups again. 🙂
  • ryanlerch has upgraded hubs to fedora-bootstrap so it has a fresh look and feel (which you’ll see reflected in mockups moving forward.)
  • Overall, we’ve gotten more momentum than ever before with a clear goal and timeline, so you’ll hopefully be seeing a lot more of these juicy updates more frequently!

(“Wait, what is Fedora Hubs?” you ask. This older blog post has a good summary.)

Okay, so let’s move on and talk about Hubs and Badges, particularly in light of some convos we’ve had in the regular weekly Fedora Hubs check-in meetings as well as an awesome hack session Remy D. and jflory7 pulled together last Thursday night.

Fedora Hubs + Badges – what’s old is new again

Behold, a mockup from 2011:

Fedora RPG mockup

In a sense, this is actually an early prototype/idea for Fedora Hubs + Badges integration. Remember that one of the two main goals of Fedora Hubs is to enable new Fedora users and make it easier for them to get bootstrapped into the project. Having tasks in the form of badges, awarded for completing each task and arranged into “missions”, makes it clear and easy for new contributors to know what they can do now and what to do next, gradually building up their skills to ‘level up’ and feel happy, helpful, and productive. So there’s a clear alignment between badges and hubs in terms of goals.

So that was 2011, where are we going in 2016?

First thoughts about a badge widget

We have a couple of tickets relating to badges in the hubs issue tracker:

As we’ve been discovering while going through the needsmockup queue and building widgets, most widgets have at least two versions: the user version (what data in this widget relates to me? Across all projects, what bugs are assigned to me?) versus the project version (across all users, what bugs relate to this project?). You can’t just have one badges widget, because certain data related to that widget is more or less useful depending on the context it’s being viewed in.

Today, the Fedora badges widget in Hubs is not unlike the one on the Fedora wiki (I have both the sidebar version and the content side version on my profile.) It’s basically small versions of the badge icon art in a grid (screenshot from the wiki version):

screenshot of wiki badges widget

The mockup below (from issue #85) shows how a little work with the metadata we already have can provide a clearer picture of the badge earner via the badges he or she has won (the left version is compressed, the right version is expanded):

mockup of badges widget for hubs profiles

Layering on some more badgery

The above mockups are all just layer 0 stuff though. Layer 0? Yeh, here’s a hokey way of explaining how we’re starting to think about hubs development, particularly in the context of getting something out the door for Flock:

  • Layer 0 – stuff we already have in place in hubs, or refinements on what’s already implemented.
  • Layer 1 – new feature development at a base level – no whizbang or hoozits, and absolutely nothing involving modifications to ‘upstream’ / data-providing apps. (Remember that Hubs is really a front-end in front of fedmsg… we’re working with data coming from many other applications. If a particular type or format of data isn’t available to us, we have to modify the apps putting that data on the bus to be able to get it.)
  • Layer 2 – making things a bit nicer. We’re not talking base model here, we’re getting some luxury upgrades, but being pretty sensible about them. Maybe making some modifications to the provider apps.
  • Layer 3 – solid gold, gimme everything! This is the way we want things; having to make modifications to other apps isn’t a concern.

To get something out the door for Flock… we have to focus mostly on layer 0 and layer 1 stuff. This is hard, though, because when this team gets together we have really awesome, big, exciting ideas and it’s hard to scale back. 🙂 It’s really fun to brainstorm together and come up with those ideas too. In the name of fun, let’s talk through some of the layers we’ve been talking about for badges in hubs in particular, and through this journey introduce some of the big picture ideas we have.

Badges Layer 1: Tagging Along

An oft-requested feature of tahrir, the web app that powers badges.fedoraproject.org, is the notion of grouping badges together in a series (similar to the “missions” in the 2011 mockup above.) The badges in a series can be sequentially ordered, or they may have no particular order; some badges in a series may even be ordered while others are not.

Here’s an example of badges with a sequential ordering (this series goes on beyond these, but three examples illustrate the concept well enough):

Here’s an example of badges that are closely related but have no particular sequence or order to them:

You can see, I hope, how having these formally linked together would be a boon for onboarding contributors. If you earned the first badge artist badge, for example, the page could link you to the next in the series… you could view a summary of it and come to understand you’d need to make artwork for only four more badges to get to the next level. Even if there isn’t a specific order, having a group of badges that you have to complete to get the whole group, like a field of unchecked checkboxes (or unpopped packing bubbles), kind of gives you the urge to complete them all. (Pop!) If a set of badges correspond to a set of skills needed to ramp up for work on a given project, that set would make a nice bootstrapping set that you could make a prerequisite for any new join requests to your project hub. So on and so forth.

So here’s the BIG SECRET:

There’s no badge metadata that links these together at all.

How do we present badges in series without this critical piece of metadata? We use a system already in place – badge tags. Each series could have an agreed upon tag, and all badges with that tag can become a group. This won’t give us the sequential ordering that some of the badges demand, but it’ll get us a good layer 1 to start. Mockup forthcoming on this, but it will get us a nicer badge widget for project / team hubs (issue #17).
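
To make the idea concrete, here's a minimal sketch of "series via tags" (the badge data shapes and the "series:" tag convention are made up for illustration; this is not tahrir's actual schema):

from collections import defaultdict

# Hypothetical badge records; the "series:" tag prefix is an invented
# convention for marking which series a badge belongs to.
badges = [
    {"name": "Badge Artist I",  "tags": ["design", "series:badge-artist"]},
    {"name": "Badge Artist II", "tags": ["design", "series:badge-artist"]},
    {"name": "Speak Up!",       "tags": ["community"]},
]

series = defaultdict(list)
for badge in badges:
    for tag in badge["tags"]:
        if tag.startswith("series:"):
            series[tag.split(":", 1)[1]].append(badge["name"])

print(dict(series))
# {'badge-artist': ['Badge Artist I', 'Badge Artist II']}

Note there's no ordering anywhere in that structure; that's exactly the limitation of the tag approach mentioned above.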

Badges Layer 2: Real Badge Metadata

Here’s layer 2 for the feature – and I thought this would be the end of the road before Remy set us straight (more on that in the next section on layer 3):

So this one is somewhat simple. We potentially modify the badges themselves by adding additional fields to their yaml files (example behind link), and modify tahrir, the web app that drives badges.fpo, to parse and store those new fields. I tried to piece together a plan of attack for achieving this in tahrir ticket #343.

The problem here is that this would necessarily require changing the data model. It’s possible, but also a bit of a pain, and not something you want to do routinely – so this has to be done carefully.

Part of this would also involve dropping our overloading of tags. Now we can store descriptions for each badge series, store sequential ordering for individual badges, and do a few other nice things tags couldn’t enable.
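
As a sketch of what that could look like (the field names "series" and "series-ordinal" are invented here for illustration; settling the actual fields is what the tahrir ticket is for):

import yaml  # PyYAML

# Two hypothetical badge definitions carrying the new metadata fields.
docs = """
name: Badge Artist I
series: badge-artist
series-ordinal: 1
---
name: Badge Artist II
series: badge-artist
series-ordinal: 2
"""

badges = list(yaml.safe_load_all(docs))
# With real metadata (unlike tags), sequential ordering becomes trivial:
for b in sorted(badges, key=lambda b: b["series-ordinal"]):
    print(b["series-ordinal"], b["name"])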

If we’re changing the data model for layer 2, may as well also change it for *LAYER 3!!*, which I am emphasizing out of excitement.

Layer 3: How the years I spent playing Final Fantasy games finally pay off

skill tree diagram

“A simplified example of a skill tree structure, in this case for the usage of firearms.” Created by user Some_guy on Wikipedia; used under a CC0 license.

Remy D. suggested that instead of linear and flat groupings of badges, we also add the ability to link them together into a skill tree. Now, you may already have experience with, say, the Final Fantasy series, the Diablo series, Star Wars: The Old Republic, or other RPG-based games. Related skills are grouped together in particular zones of the tree, and depending on which zones of the tree you have filled out, you sort of fulfill a particular career path or paths. (e.g., in the case of Final Fantasy X… when you work towards filling out Lulu’s sphere grid area, you’re making your character a dark mage. When you work towards filling out Rikku’s area, you’re building skills towards becoming a skilled thief. So on, and so forth.)
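
In data terms, a skill tree is just badges plus prerequisite links. A minimal sketch (the badge names and the prerequisite structure here are hypothetical):

# Each badge lists the badges that must be earned first.
tree = {
    "involvement":     [],                  # FAS account + FPCA
    "speak-up":        ["involvement"],     # attend an IRC meeting
    "badge-artist-i":  ["involvement"],
    "badge-artist-ii": ["badge-artist-i"],
}

def unlocked(earned):
    """Badges whose prerequisites are all earned: the next steps to suggest."""
    return [b for b, reqs in tree.items()
            if b not in earned and all(r in earned for r in reqs)]

print(unlocked({"involvement"}))
# ['speak-up', 'badge-artist-i']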

Where this gets cool for Fedora is that we can not only help contributors get started and feel awesome about their progress by presenting them with clear sequences of badges to complete to earn a series (layer 2), but we can also help guide them towards building a ‘career’ or even multiple ‘careers’ (or ‘hats,’ heh) within the project and build their personal skill set as well. Today we already have five main categories for badges in terms of the artwork templates we use, but we can break these down further if need be – as-is, they map neatly to ‘careers’ in Fedora:

  • Community
  • Content
  • Development
  • Quality
  • Events

Fedora contributors could then choose to represent themselves using a radar chart (example displayed below), and others can get a quick visual sense of that contributor’s skillset:
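
A rough matplotlib sketch of such a chart, fed with made-up per-category badge counts for one contributor:

import numpy as np
import matplotlib.pyplot as plt

categories = ["Community", "Content", "Development", "Quality", "Events"]
counts = [12, 7, 3, 5, 9]        # made-up badge counts for one contributor

angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False).tolist()
angles += angles[:1]             # repeat the first point to close the polygon
values = counts + counts[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(categories)
plt.show()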

So that’s layer 3. 🙂

Okay, so have you actually thought about what badges should be chained together for what teams?

Yes. 🙂 Remy D. and jflory7 started a list by researching the current onboarding procedures across a number of Fedora teams. Coming up with the actual arrangements of badges within series is important work too, and it has a big influence on whether or not the system actually works for end users! The suggestions Remy and Justin put together are badges new contributors should complete while getting bootstrapped and ready to contribute to the corresponding team.

In some cases these involve existing badges; in some cases we’ve uncovered additional badges that need to be created to support the scenario. (This is great work, because over time badges have tended to be unbalanced, rewarding folks involved in packaging or who go to a lot of events more than others. It makes sense – the packaging infrastructure was the first part of Fedora’s infrastructure to get hooked up to fedmsg IIRC, so the data was more readily available.)

Here’s an excerpt of the first-cut of that work by Justin and Remy:

Ambassadors
  1. Get a FAS Account (sign the FPCA) (Involvement)
  2. Create a User Wiki Page
  3. Join Mailing Lists and IRC Channels
  4. Contact a Regional Mentor, get sponsored
  5. Get mentor approval
  6. Attend regional ambassador meeting, introduce yourself
CommOps
  1. If no FAS account, create one (Involvement)
  2. Intro to commops mailing list
  3. Join IRC #fedora-commops
  4. Get with a mentor and start writing / editing blog / Fedora Magazine articles
Design
  1. Create a FAS account (sign the FPCA) (Involvement)
  2. Join the mailing list, introduce yourself: https://admin.fedoraproject.org/mailman/listinfo/design-team
  3. Claim a ticket in the Design Trac: https://fedorahosted.org/design-team/report/1
  4. Update your ticket, send updated information to Design List
  5. Once work is approved, request an invite for your FAS username for the design-team group on the design team list: https://admin.fedoraproject.org/mailman/listinfo/design-team
  6. Add yourself to the contributors list: http://fedoraproject.org/wiki/Artwork/Contributors
  7. Attend Design Team IRC meeting? (Speak Up)
  8. Subscribe to the design tasks fedocal: https://fedorapeople.org/groups/schedule/f-24/
Documentation
  1. Create a FAS Account (sign the FPCA) (Involvement)
  2. Create a GPG key and upload it to keyservers, one of which should be keys.fedoraproject.org (Crypto Panda)
  3. Write a self-introduction to the mailing list with some ideas on how you would like to contribute: https://fedoraproject.org/wiki/Introduce_yourself_to_the_Docs_Project
  4. Create your own user wiki page, or update it with new info if one exists from another project (Let me Introduce myself Badge)
  5. Attend the #fedora-meeting channel meetings on freenode.net Internet Relay Chat on Mondays. (Speak Up Badge)
  6. Hang out in the freenode.net Internet Relay Chat channel #fedora-docs
  7. Interact with other Fedora contributors (how to use fasinfo, look up others’ wiki user pages, ask for sponsorship)
  8. Make a contribution: Choose an item from this page: https://fedoraproject.org/wiki/How_to_contribute_to_Docs
  9. Post to mailing list, describing which contribution you want to make, asking for feedback
  10. Post to mailing list with links to your contribution
Marketing
  1. Create a FAS Account (and sign the FPCA) (Involvement)
  2. Join the mailing list and introduce yourself: https://fedoraproject.org/wiki/Introduce_yourself_to_the_marketing_group
  3. Choose a marketing task you’d like to help with, and post to the mailing list asking for feedback: https://fedoraproject.org/wiki/Marketing/Schedule
  4. Post to the mailing list with a link to your contribution.
  5. Request to join the marketing group in FAS

Hopefully that gives a better picture of the specifics, and of what some of the bootstrapping series in particular would involve. You can see here how a skill tree makes more sense than flat badge series – you only need to create one FAS account, participate in IRC once, and participate on a mailing list once to learn how those things work before moving on. So with this system, you could learn those “skills” joining any team, and what completes the series for any particular team are the higher-numbered badges in that team’s bootstrap series. (Hope that makes sense.)

Get involved in this business!

we need your help!

Help us build fun yet effective RPG-like components into a platform that can power the free software communities of the future! How do you start? Sadly, we do not have the badge series / skill tree feature done yet, so I can’t simply point you at that. But here’s what I can point you to:

  • hubs-devel Mailing List – our list is powered by HyperKitty, so you don’t even need to have mail delivered to your inbox to participate! Mostly our weekly meeting minutes are posted here. I try to post summaries so you don’t have to read the whole log.
  • The Fedora Hubs Repo – the code with instructions on how to build a development instance and our issue tracker which includes tickets discussed above and many more!
  • Fedora Hubs weekly check-in meeting – our weekly meeting is at 14:00 UTC on Tuesdays in #fedora-hubs. Come meet us!

What do you think?

Questions? Comments? Feedback? Hit me up in the comments (except, don’t literally hit me. Because mean comments make me cry.)

kasapanda

April 26, 2016

Three Slots Awarded to Krita for Google Summer of Code

GSoC2016Logo

Every year Google puts on a program called Google Summer of Code (GSoC). Students from all over the world try to obtain an internship where they can be paid to work on an open source application. This year we are lucky enough to have had three students accepted into the program! (Who gets accepted depends on how many applications there are, how many slots Google has and how many get distributed to KDE.) These three students will be working on Krita for the summer to improve three important areas in Krita.

Here is what they will be trying to tackle in the coming months.

  1. Jouni Pentikäinen – GSoC Project Overview – “This project aims to bring Krita’s animation features to more types of layers and masks, as well as provide means to generate certain types of interpolated frames and extend the user interface to accommodate these features.” In short, Jouni is going to work on animating opacity, filter layers and maybe even transform masks. Not just that, but he’ll work on a sexy curve time-line element for controlling the interpolation!
  2. Wolthera van Hövell tot Westerflier – GSoC Project Overview – “Currently, Krita’s architecture has all the bells and whistles for wide-gamut editing. Two big items are missing: Softproofing and a good internal colour selector dialogue for selecting colours that are outside of the sRGB colour space.” Wolthera’s work will make illustration-for-print workflows much smoother, letting you preview how well your RGB image will keep its details when printed. Furthermore, she’ll work on improving your ability to use filters correctly on wide-gamut files, extending Krita’s powerful color core.
  3. Julian Thijsen – GSoC Project Overview – “I aim to seek out the reliance on legacy functionality in the OpenGL engine that powers the QPainter class and to convert this functionality to work using OpenGL 3.2 Core Profile — it needs the compatibility profile at the moment. This will enable OSX to display decorations and will likely allow Krita to run on Mac OS X computers.” This one is best described as an “OpenGL canvas bypass operation”: Krita currently uses OpenGL 2.1 and 3.0. To run on OSX, we’ll need to be able to run everything in OpenGL 3.0 at the least. It is the biggest blocker for full OSX support, and we’re really excited Nimmy decided to take the challenge!

The descriptions might sound a bit technical for a lay person, but these enhancements will make a big impact. We congratulate the accepted students and wish them the best of luck this summer.

April 25, 2016

Interview with Tomáš Marek

DifferentApproach

Could you tell us something about yourself?

Hi, my name is Tomáš Marek. I’m a 22-year-old self-taught digital/traditional artist and student, and I currently live in the Czech Republic. Unlike most other artists I started drawing pretty late, about 4 years ago, mainly because I never had any sign of a talent for anything and had no idea what I wanted to do with my life. It was 4 years ago that I found out about my great-grandfather, who was an artist (a landscape painter), and that was the initial trigger for me: “I want to be an artist”. Since then I draw pretty much every day.

Right now I’m working on my personal project (it will be a graphic novel, but I can’t tell you much about it yet) and developing my own style, which I call #BigNoses.

PurpleGirl

Do you paint professionally, as a hobby artist, or both?

At the moment I see myself more as a hobbyist than a professional, because right now I’m still a student and I’m working on my degree in computer graphics, which is very time-consuming. However, from time to time I do some freelance work or commissions. So let’s say I’m both.

What genre(s) do you work in?

I’ve actually never thought about drawing in some specific genre; I pretty much draw what and how I feel that day.

TheLastWatchman

Whose work inspires you most — who are your role models as an artist?

Well, I can’t pick just one artist, there are so many of them. But if I could pick three of them, the first would be my great-grandfather who introduced me to art, the second is Sycra Yasin who taught me that mileage is more important than talent, and the most recent one is Kim Jung Gi because, well, just look at his work and you will know why.

How and when did you get to try digital painting for the first time?

My first time was in 2012 in the house of my friend, who had a Wacom Cintiq 13. He let me try it with Photoshop CS4 and my first impression of it was “I want one”.

What makes you choose digital over traditional painting?

That would probably be the freedom of tools, because I’m the type of guy who is constantly changing and erasing things. With digital, not only is changing stuff fast and clean, but also, as the saying goes, “pixels are cheap”.

TheLonelyFisherman

How did you find out about Krita?

The first time I heard about Krita was about 2 years ago on Sycra’s YouTube channel; I think he drew his self-portrait. But I didn’t pay much attention to it, because at that time I was using Photoshop for my paintings, which I didn’t like, but it was the only software that I knew how to use.

What was your first impression?

OK, I remember this moment very well. When I first opened Krita, I picked the first brush I saw (I think it was the Color Smudge type) and started painting, and this is what I had in my mind: “This is weird, but kinda cool, but weird… yeah, I love it”. I hope this sums it up well.

What do you love about Krita?

Mainly these almost traditional-like brush engines, and the fact that it runs on GNU/Linux, Windows and Mac OS.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I would like to see a realtime histogram of all visible layers, not only for one selected layer. And some performance improvement for filters.

What sets Krita apart from the other tools that you use?

I’m a GNU/Linux user and when I wanted to paint I always had to reboot to Windows to use Photoshop for painting, so with Krita I don’t have to use Windows at all.

And as I said before, I love Krita’s brush engines.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

This is hard; it’s like asking parents which is their favourite child. But if I had to choose, it would probably be a recent painting from my series #BigNoses called “It’s Something”.

It'sSomething

What techniques and brushes did you use in it?

For most of my work I’m using my own brush, which is a rectangle brush with pressure size and softness, plus an airbrush and some texture brushes. And my technique is a pretty simple pipeline: lineart → base colors → shades → rendering.

Where can people see more of your work?

I’m frequently posting my work on these sites:

Twitter https://twitter.com/marts_art
Instagram https://www.instagram.com/marts_struggle_with_drawing/
DeviantArt http://marts-art.deviantart.com/

Or on my Youtube channel https://www.youtube.com/channel/UC0099tv90SxjQJ2PtIh7_EQ

Anything else you’d like to share?

I would like to thank you for inviting me to this interview. I really admire the work you’re doing on Krita, so keep going this way.

April 20, 2016

Wed 2016/Apr/20

  • A Cycling Map

    Este post en español

    There are no good paper cycling maps for my region. There are 1:20,000 street maps for car navigation within the city, but they have absolutely no detail in the rural areas. There are 1:200,000 maps for long trips by car, but that's too big of a scale.

    Ideally there would be high-quality printed maps at 1:50,000 scale (i.e. 1 km in the real world is 2 cm on the map), with enough detail and some features:

    • Contour lines. Xalapa is in the middle of the mountains, so it's useful to plan for (often ridiculously steep) uphills/downhills.

    • Where can I replenish my water/food? Convenience stores, roadside food stands.

    • What's the quality and surface of the roads? This region is full of rural tracks that go through coffee and sugarcane plantations. The most-transited tracks can be ridden with reasonable "street" tyres. Others require fatter tyres, or a lot of skill, or a mountain bike, as they have rocks and roots and lots of fist-sized debris.

    • Any interesting sights or places? It's nice to have a visual "prize" when you reach your destination, apart from the mountainous landscape itself. Any good viewpoints? Interesting ruins? Waterfalls?

    • As many references as possible. The rural roads tend to look all the same — coffee plants, bananas, sugarcane, dirt roads. Is there an especially big tree at the junction of two trails so you know when to turn? Is there one of the ubiquitous roadside shrines or crosses? Did I just see the high-voltage power lines overhead?

    Make the map yourself, damnit

    For a couple of years now, I have been mapping the rural roads around here in OpenStreetMap. This has been an interesting process.

    For example, this is the satellite view that gets shown in iD, the web editor for OpenStreetMap:

    Satellite view        of rural area

    One can make out rural roads there between fields (here, between the blue river and the yellow highway). They are hard to see where there are many trees, and sometimes they just disappear in the foliage. When these roads are not visible, or not 100% unambiguous, in the satellite view, there's little else to do but go out and actually ride them while recording a GPS track with my phone.

    These are two typical rural roads here:

    Rural road between plantations Rural road with     view to the mountains

    Once I get back home, I'll load the GPS track in the OpenStreetMap editor, trace the roads, and add some things by inference (the road crosses a stream, so there must be a bridge) or by memory (oh, I remember that especially pretty stone bridge!). Behold, a bridge in an unpaved road:

    Bridge in the editor Bridge in the real        world

    It is also possible to print a map quickly, say, out of Field Papers, annotate it while riding, and later add the data to the map when on the computer. After you've fed the dog.

    Field papers in use

    Now, that's all about creating map data. Visualization (or rendering for printing) is another matter.

    Visualization

    Here are some interesting map renderers that work from OpenStreetMap data:

    OpenTopoMap

    OpenTopoMap. It has contour lines. It is high contrast! Paved roads are light-colored with black casing (cartography jargon for the outline), like on a traditional printed map; unpaved rural tracks are black. Rivers have a dark blue outline. Rivers have little arrows that indicate the flow direction (that means downhill!) — here, look for the little blue arrow where the river forks in two. The map shows things that are interesting in hiking/biking maps: fences, gates, viewpoints, wayside crosses, shelters. Wooded areas, or farmland and orchards, are shaded/patterned nicely. The map doesn't show convenience stores and the like.

    GPSies with Sigma Cycle layer

    GPSies with its Sigma Cycle layer. It has contour lines. It tells you the mountain biking difficulty of each trail, which is a nice touch. It doesn't include things like convenience stores unless you go into much higher zoom levels. It is a low-contrast map as is common for on-screen viewing — when printed, this just makes a washed-out mess.

    Cycle.Travel

    Cycle.Travel. The map is very pretty onscreen, not suitable for printing, but the bicycle routing is incredibly good. It gives preference to small, quiet roads instead of motorways. It looks for routes where slopes are not ridiculous. It gives you elevation profiles for routes... if you are in the first world. That part doesn't work in Mexico. Hopefully that will change — worldwide elevation data is available, but there are some epic computations that need to happen before routing works on a continent-level scale (see the end of that blog post).

    Why don't you take your phone with maps on the bike?

    I do this all the time, and the following gets tedious:

    1. Stop the bike.
    2. Take out the phone from my pocket.
    3. Unlock the phone. Remove gloves beforehand if it's cold.
    4. Wait for maps app to wake up.
    5. Wipe sweat from phone. Wait until moisture evaporates so touch screen works again.
    6. Be ever mindful of the battery, since the GPS chews through it.
    7. Be ever mindful of my credit, since 3G data chews through it.
    8. Etc.

    I *love* having map data on my phone, and I've gone through a few applications that can save map data without an internet connection.

    City Maps 2 Go is nice. It has been made more complex than before with the last few versions. Maps for Mexico don't seem to be updated frequently at all, which is frustrating since I add a lot of data to the base OpenStreetMap myself and can't see it in the app. On the plus side, it uses vector maps.

    MotionX GPS is pretty great. It tries extra-hard not to stop recording when you are creating a GPS track (unlike, ahem, Strava). It lets you save offline maps. It only downloads raster maps from OpenStreetMap and OpenCycleMap — the former is nominally good; the latter is getting very dated these days.

    Maps.Me is very nice! It has offline, vector maps. Maps seem to be updated reasonably frequently. It has routing.

    Go Map!! is a full editor for OpenStreetMap. It can more or less store offline maps. I use it all the time to add little details to the map while out riding. This is a fantastic app.

    Those apps are fine for trips of a few hours (i.e. while the phone's battery lasts), and not good for a full-day trip. I've started carrying an external battery, but that's cumbersome and heavy.

    So, I want a printed map. Since time immemorial there has been hardware to attach printed maps to a bike's handlebar, or even a convenient handlebar bag with a map sleeve on it.

    Render the map yourself, damnit

    The easiest thing would be to download a section of the map from OpenTopoMap, at a zoom level that is useful, and print it. This works in a pinch, but has several problems.

    Maps rendered from OpenStreetMap are generally designed for web consumption, or for moderately high-resolution mobile screens. Both are far from the size and resolution of a good printed map. A laptop or desktop has a reasonably-sized screen, but is low resolution: even a 21" 4K display is only slightly above 200 DPI. A phone is denser, at something between 300 and 400 DPI, but it is a tiny screen... compared to a nice, map-sized sheet of paper — easily 50x50 cm at 1200 DPI.

    ... and you can fold a map into the size of a phablet, and it's still higher-rez and lighter and doesn't eat batteries and OMG I'm a retrogrouch, ain't I.

    Also, web maps are zoomable, while paper maps are at a fixed scale. 1:50,000 works well for a few hours' worth of cycling — in this mountainous region, it's too tiring for me to go much further than what fits in such a map.

    So, my line of thinking was something like:

    1. How big is the sheet of paper for my map? Depends on the printer.

    2. What printed resolution will it have? Depends on the printer.

    3. What map scale do I want? 1:50,000

    4. What level of detail do I want? At zoom=15 there is a nice level of detail; at z=16 it is even clearer. However, it is not until z=17 that very small things like convenience stores start appearing... at least for "normal" OpenStreetMap renderers.

    Zoom levels?

    Web maps are funny. OpenStreetMap normally gets rendered with square tiles; each tile is 256x256 pixels. At zoom=0, the whole world fits in a single tile.

    Whole        world, single tile, zoom=0

    The URL for that (generated) image is http://opentopomap.org/0/0/0.png.

    If we go in one zoom level, to zoom=1, that uber-tile gets divided into 2x2 sub-tiles. Look at the URLs, which end in zoom/x/y.png:

    1/0/0
    http://opentopomap.org/1/0/0.png

    1/1/0
    http://opentopomap.org/1/1/0.png

    1/0/1
    http://opentopomap.org/1/0/1.png

    1/1/1
    http://opentopomap.org/1/1/1.png

    Let's go in one level, to zoom=2, and just focus on the four sub-sub-tiles for the top-left tile above (the one with North America and Central America):

    2/0/0
    http://opentopomap.org/2/0/0.png

    2/1/0
    http://opentopomap.org/2/1/0.png

    2/0/1
    http://opentopomap.org/2/0/1.png

    2/1/1
    http://opentopomap.org/2/1/1.png

    So the question generally is, what zoom level do I want, for the level of detail I want in a particular map scale, considering the printed resolution of the printer I'll use?


    After some playing around with numbers, I came up with a related formula. What map scale will I get, given a printed resolution and a zoom level?

    (defun get-map-scale (dpi tile-size zoom latitude)
      (let* ((circumference-at-equator 40075016.686)
             ;; length of the circle of latitude (the parallel), in meters
             (parallel-length (* circumference-at-equator
                                 (cos (degrees-to-radians latitude))))
             ;; there are 2^zoom tiles around the earth at this zoom level
             (tiles-around-the-earth (exp (* (log 2) zoom)))
             (pixels-around-the-earth (* tiles-around-the-earth tile-size))

             (meters-per-pixel (/ parallel-length pixels-around-the-earth))

             ;; one printed inch covers dpi pixels; convert to meters per printed cm
             (meters-in-inch-of-pixels (* meters-per-pixel dpi))
             (meters-in-cm-of-pixels (/ meters-in-inch-of-pixels 2.54)))
        ;; scale denominator: real-world centimeters per printed centimeter
        (* meters-in-cm-of-pixels 100)))

    (get-map-scale 600      ; dpi
                   256      ; tile-size
                   16       ; zoom
                   19.533)  ; latitude of my town
    53177.66240054532 ; pretty close to 1:50,000

    All right: zoom=16 has a useful level of detail, and it gives me a printed map scale close to 1:50,000. I can probably take the tile data and downsample it a bit to really get the scale I want (from 53177 to 50000).
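
    Going the other direction (given a target scale, which zoom level do I need?) is the same formula inverted. A quick Python sketch, using the same parameters as the Elisp above:

    import math

    EQUATOR = 40075016.686  # earth circumference in meters

    def zoom_for_scale(dpi, tile_size, scale, latitude):
        """Fractional zoom level that yields the given scale denominator."""
        parallel = EQUATOR * math.cos(math.radians(latitude))
        # scale = parallel / (tile_size * 2**zoom) * dpi / 2.54 * 100
        return math.log2(parallel * dpi * 100 / (2.54 * tile_size * scale))

    print(zoom_for_scale(600, 256, 50000, 19.533))  # about 16.09, so zoom=16 is close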

    Why a tile-size argument (in pixels)? Aren't tiles always 256 pixels square? Read on.

    Print ≠ display

    A 1-pixel outline ("hairline") is nice and visible onscreen, but on a 600 DPI or 1200 DPI printout it's pretty hard to see, especially if it is against a background of contour lines, crop markers, and assorted cartographic garbage.

    A 16x16-pixel icon that shows the location of a convenience store, or a viewpoint, or some other marker, is perfectly visible on a screen. However, it is just a speck on paper.

    And text... 10-pixel text is probably readable even on a high-resolution phone, but definitely not on paper at printed resolutions.

    If I just take OpenTopoMap and print it, I get tiny text, lines and outlines that are way too thin, and markers that are practically invisible. I need something that lets me tweak the thickness of lines and outlines, the size of markers and icons, and the size and position of text labels, so that printing the results will give me a legible map.

    Look at these maps and zoom in. They are designed for printing. They are full of detail, but on a screen the text looks way too big. If printed, they would be pretty nice.

    The default openstreetmap.org uses Mapnik as a renderer, which in turn uses a toolchain to produce stylesheets that determine how the map gets rendered. Stylesheets say stuff like, "a motorway gets rendered in red, 20 pixels thick, with a 4-pixel black outline, and with highway markers such and such pixels apart, using this icon", or "graveyards are rendered as solid polygons, using a green background with this repeating pattern of little crosses at 40% opacity". For a zoomable map, that whole process needs to be done at the different zoom levels (since the thicknesses and sizes change, and just linearly scaling things looks terrible). It's a pain in the ass to define a stylesheet — or rather, it's meticulous work to be done in an obscure styling language.

    Recently there has been an explosion of map renderers that work from OpenStreetMap data. I have been using Mapbox Studio, which has the big feature of not requiring you to learn a styling language. Studio is a web app that lets you define map layers and a style for each layer: "the streets layer comes from residential roads; render that as white lines with a black outline". It lets you use specific values for different zoom levels, with an interesting user interface that would be so much better without all the focus issues of a web browser.

    Screenshot of        Mapbox Studio

    I've been learning how to use this beast — initially there's an airplane-cockpit aspect to it. Things went much easier once I understood the following:

    The main OpenStreetMap database is an enormous bag of points, lines, and "relations". Each of those may have a number of key/value pairs. For example, a point may have "shop=bakery" and "name=Bready McBreadface", while a street may have "highway=residential" and "name=Baker Street".

    A very, uh, interesting toolchain slices that data and puts it into vector tiles. A vector tile is just a square which contains layers of drawing-like instructions. For example, the "streets" layer has a bunch of "moveto lineto lineto lineto". However, the tiles don't actually contain styling information. You get the line data, but not the colors or the thicknesses.

    There are many providers of vector tiles and renderers. Mapzen supplies vector tiles and a nifty OpenGL-based renderer. Mapbox supplies vector tiles and a bunch of libraries for using them from mobile platforms. Each provider of vector tiles decides which map features to put into which map layers.

    Layers have two purposes: styling, and z-ordering. Styling is what you expect: the layer for residential roads gets rendered as lines with a certain color/thickness/outline. Z-ordering more or less depends on the purpose of your map. There's the background, based on landcover information (desert=yellow, forest=green, water=blue). Above that there are contour lines. Above those there are roads. Above those there are points of interest.

    In terms of styling, there are some tricks to achieve common visual styles. For example, each kind of road (motorway, secondary road, residential road) gets two layers: one for the casing (outline), and one for the line fill. This is to avoid complicated geometry at intersections: to have red lines with a black outline, you have a layer with black wide lines, and above it a layer with red narrow lines, both from the same data.

    Styling lines in map        layers

    A vector tileset may not have all the data in the main OpenStreetMap database. For example, Mapbox creates and supplies a tileset called mapbox-streets-v7 (introduction, reference). It has streets, buildings, points of interest like shops, fences, etc. It does not have some things that I'm interested in, like high-voltage power lines and towers (they are good landmarks!), wayside shrines, and the extents of industrial areas.

    In theory I could create a tileset with the missing features I want, but I don't want to waste too much time with the scary toolchain. Instead, Mapbox lets one add custom data layers; in particular, they have a nice tutorial on extracting specific data from the map with the Overpass Turbo tool and adding that to your own map as a new layer. For example, with Overpass Turbo I can make a query for "find me all the power lines in this region" and export that as a GeoJSON blob. Later I can take that file, upload it to Mapbox Studio, and tell it how to style the high-voltage power lines and towers. It's sort of manual work, but maybe I can automate it with the magic of Makefiles and the Mapbox API.

    Oh, before I forget: Mapbox uses 512-pixel tiles. I don't know why; maybe it is to reduce the number of HTTP requests? In any case, that's why my little chunk of code above has a tile-size argument.

    So what does it look like?

    My map

    This is a work in progress. What is missing:

    • Styling suitable for printing. I've been tweaking the colors and line styles so that the map is high-contrast and legible enough. I have not figured out the right thicknesses, nor text sizes, for prints yet.

    • Adding data that I care about but that is not in mapbox-streets-v7: shrines, power lines, industrial areas, municipal boundaries, farms, gates, ruins, waterfalls... these are available in the main OpenStreetMap database, fortunately.

    • Add styling for things that are in the vector tiles, but don't have a visible-enough style by default. Crops could get icons like sugarcane or coffee; sports fields could get a little icon for football/baseball.

    • Figure out how to do pattern-like styling for line data. I want cliffs shown somewhat like this (a line with little triangles), but I don't know how to do that in Mapbox yet. I want little arrows to show the direction in which rivers flow.

    • Do a semi-exhaustive ride of all the rural roads in the area for which I'll generate the map, to ensure that I haven't missed useful landmarks. That's supposed to be the fun part, right?

    References

    The design of the Mapbox Outdoors style. For my own map, I started with this style as a base and then started to tweak it to make it high-contrast and have better colors for printing.

    Technical discussion of generating a printable city map — a bit old; uses TileMill and CartoCSS (the precursors to Mapbox Studio). Talks about dealing with SVG maps, large posters, overview pages.

    An ingenious vintage German cycle map, which manages to cram an elevation profile on each road (!).

    The great lost map scale argues that 1:100,000 is the best for long-distance, multi-day cyclists, to avoid carrying so many folded maps. Excellent map pr0n here (look at the Snowdonia map — those hand-drawn cliffs!). I'm just a half-a-day cycling dilettante, so for now 1:50,000 is good for me.

    How to make a bike map focuses on city-scale maps, and on whether roads are safe or not for commuters.

    Rendering the World — how tiling makes it possible to render little chunks of the world on demand.

    Introducing Tilemaker: vector tiles without the stack. Instead of dealing with Postgres bullshit and a toolchain, this is a single command-line utility (... with a hand-written configuration file) to slice OpenStreetMap data into layers which you define.

    My cycling map in Mapbox Studio.

Upgrading Fedora 23 to 24 using GNOME Software

I’ve spent the last couple of days fixing up all the upgrade bugs in GNOME Software and backporting them to gnome-3-20. The idea is that we backport gnome-software plus a couple of the deps into Fedora 23 so that we can offer a 100% GUI upgrade experience. It’s the first time we’ve officially transplanted an n+1 GNOME component into an older release (ignoring my unofficial Fedora 20 whole-desktop backport COPR) and so we’re carefully testing for regressions and new bugs.

If you do want to test upgrading from F23 to F24, first make sure you’ve backed up your system. Then, install and enable this COPR and update gnome-software. This should also install a new libhif, libappstream-glib, json-glib and PackageKit and a few other bits. If you’ve not done the update offline using [the old] GNOME Software, you’ll need to reboot at this stage as well.

Fire up the new gnome-software and look at the new UI. Actually, there’s not a lot new to see as we’ve left new features like the ODRS reviewing service and xdg-app as F24-only features, so it should be mostly the same as before but with better search results. Now go to the Updates page which will show any updates you have pending, and it will also download the list of possible distro upgrades to your home directory.

As we’re testing upgrading to a pre-release, we have to convince gnome-software that we’re living in the future. First, open ~/.cache/gnome-software/3.20/upgrades/fedora.json and search for f24. Carefully change the Under Development string to Active, then save the file. Log out, log back in, and launch gnome-software again or wait for the notification from the shell. If all has gone well you should see a banner telling you about the new upgrade. If you click Download, go and get a coffee and start baking a cake, as it’s going to take a long time to download all that new goodness. Once complete, just click Install, which prompts a reboot where the packages will be installed. For this step you’ll probably want to bake another cake. We’re not quite in an atomic instant-apply world yet, although I’ll be talking a lot more about that for Fedora 25.
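
If you’d rather not hand-edit the JSON, this little sketch performs the same swap (note it blindly replaces every “Under Development” string in the file, which is fine as long as f24 is the only pre-release entry):

#!/usr/bin/python3
# Flip the gnome-software upgrade cache entry from "Under Development"
# to "Active", i.e. the hand-edit described above.
import os

path = os.path.expanduser("~/.cache/gnome-software/3.20/upgrades/fedora.json")
with open(path) as f:
    data = f.read()
with open(path, "w") as f:
    f.write(data.replace("Under Development", "Active"))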

With a bit of luck, after 30 minutes staring at a progressbar the computer should reboot itself into a fresh new Fedora 24 beta installation. Success!

Screenshot_Fedora23-Upgrade_2016-04-20_15:23:27

If you spot any problems or encounter any bugs, please let me know either in Bugzilla, by email or on IRC. I’ve not backported all the custom CSS for the upgrade banner just yet, but this should be working soon. Thanks!

April 19, 2016

New Krita 3.0 Alpha/Development Windows Builds

Until now, we have made all Windows releases of Krita with Microsoft’s Visual C++ compiler. Krita 2.9 was built with the 2012 version, 3.0 with the 2015 version of Visual C++. Both compilers have problems building the G’Mic library, and, recently, the latest version of the Vc library. G’Mic provides a wide range of filters and Vc lets us optimize the code that blends pixels and creates brush masks.

We cannot fix the libraries, we cannot fix the Microsoft compiler, and we don’t want to make Krita slower and less functional, so there was only one solution left: find a different compiler. That is pretty scary so late in the development cycle, because every compiler has different quirks and bugs. We are actually cross-compiling these builds on Linux.

We have now prepared new builds for you to test. There are four builds: with and without debugging information, for 32-bit and 64-bit Windows. If you encounter a crash, please download the debug build and try to reproduce the crash. Compared to 3.0 alpha, a number of bugs are fixed. These builds also have more features: the camera raw import plugin and the PDF import/export plugin are included. The 32-bit build now also includes G’Mic, which was completely impossible with Visual Studio. The only feature present on Linux that is still not available on Windows is OpenJPEG support, for loading and saving .jp2 files (not JPEG files; that support is present and correct).

You can find all builds here: http://files.kde.org/krita/3/windows/devbuilds. You can verify your downloads with the sha1 checksum files. Direct downloads of the non-debug builds:

To run, simply extract the zip file somewhere, navigate into the bin folder and execute krita.exe. There is no need anymore for the Visual Studio C runtimes! Krita 3.0 looks in a different location for its configuration and resource files, so your existing 2.9 installation is completely untouched.

Setting up the builds was easier than expected, thanks to the MXE project, but it still took time away from fixing bugs, so we’re considering extending the 3.0 release schedule by another month. If you’re on Windows, please give these builds a good, thorough work-out!

April 14, 2016

A logo design process

Designing a logo can be intimidating, and the process is full of alternating hope and despair. I recently designed a logo for the team of a friend I work with, and for whatever reason (perhaps mindfulness practice) I decided to try to track the process visually and note my general thinking in choosing a forward direction.

This was just one piece (albeit a major one) of a longer process. This part was just me by myself coming up with an initial proposal for discussion. I think brainstorming as a team effort produces the best results – here I took some initial direction from the team in terms of what they wanted, the look they were going for, the symbols they wanted embedded in the logo. The initial concept in the first frame reflects that opening conversation – they wanted the logo to relate to carbon, they wanted something clean (the Ansible logo was brought up as an example of a style they liked), and they wanted it to somehow depict interoperability.

The process below shows how I came up with an initial proposal from that starting point, and then we worked together back and forth to come up with the final version (which isn’t even shown here. 🙂 )

You can kind of see here, it’s a general cycle of:

  • Logic/brain: what might work conceptually?
  • Creativity/hand: what would that concept look like visually?
  • Evaluation/eyes: does that visual convey the idea? Or does it convey something else (that we don’t want) too strongly?
  • Rinse, repeat.

Anyway, here it is, an example of my process; I thought it might be interesting to look at. (Apologies for the large image, and for whatever it’s worth, Inkscape was used for this; sometimes I do pencil sketches first but I wasn’t feeling it this time.)

logo design comic

April 13, 2016

Horde of Cuteness fundraiser

Guest post by Justin Nichol. More game art with Krita! Support the project here on Indiegogo!

IGG_Header-800

My name is Justin and I have been making illustrations and game art with open source software for some time, and release my work under open licenses for others to use. I use Linux and Blender, and I initially used GIMP but have now transitioned to use Krita, given its more advanced tools and better workflow for painters.
hero_warrior-800
I began Horde of Cuteness as a project through my Patreon (https://www.patreon.com/justinnichol), and created an initial set of 12 figures, but have since added another two for a total of 14. My Patreon is perfect for creating small packs of art in many styles, but with Indiegogo I can dig deeper into individual collections.
monster_banshee-800
If the preliminary funding is obtained from this campaign, I can set aside freelance work for the time necessary to add an additional 10 characters to the collection (2 heroes, 5 monsters, and 3 boss monsters chosen by the backers). I do this because I want to create large packs of art for game designers, writers and other creative people.
hero_bard-800
Backers can also grab rewards that allow them to choose a monster for me to paint, to become one of the heroes themselves, or even to add a whole new boss monster to the campaign.
monster_goblin-800
All the characters will be released under a Creative Commons Attribution Share-Alike 4.0 license, will be made available as .pngs with transparent backgrounds, and will include .kra source files for editing the characters yourself. All of the images will be 2000px by 2000px.
hero_paladin-800
I’ve gotten over half the initial funding I hoped for in just over a week, and I think support from the open source community could push me over the top.
monster_orc-800
The characters I have already created are available on my website: freeforall.cc

 

April 11, 2016

Cross-compiling Krita using MXE

Writing code that builds with multiple compilers is a good way to catch errors and improve code quality and conformance. Or so I have always been taught. Hence, when we ported Krita to Windows, we ported it to the native compiler for Windows, Microsoft Visual C++. That took some doing, but in the process we found lots of little things that, once fixed, improved Krita's code. When we ported Krita to OSX, where the native compiler is clang, the same happened.

And then we added two dependencies to Krita that have trouble with Visual C++: G'Mic and Vc. G'Mic implements a parser for a custom scripting language for writing filters, and that parser is written in a way that makes life really hard for Visual C++. Basically, the 32-bit builds never worked and the 64-bit builds need a stack of about a gigabyte to parse the scripts. And Vc, a library to add vectorization/simd support easily, from version 1.0 and up just doesn't build at all on Windows.

It's probably not a coincidence that both are heavy users of templates, and in the case of Vc, of C++11. But Krita needs both libraries: our users love all the filters and tools the G'Mic plugin gives them, and without Vc, our brushes and pixel compositing code becomes really slow.

What could we do? Hair was liberally pulled and not a few not unmanly tears were shed. We could try to build Krita on Windows using one of the GCC ports, or we could try to build Krita on Windows using clang. We had already tried to use Intel's icc to build Krita, but that broke when trying to build Qt. (Though that was in the early Qt 4 days, so maybe things are better now.)

But building on Windows will always be slower, because of the slow terminal and the slow file system, and we know that the Windows releases of Gimp and LibreOffice are actually built on Linux. Cross-compiled for Windows. If complex projects like those can manage, we should be able to manage too.

Unfortunately, I'm a bear^Wdeveloper of very little brain, and figuring out which blogs and articles are up to date and relevant for OpenSUSE Leap was already quite hard; when I saw that the mingw packages for Leap were a year old, I wasn't prepared to put a lot of time into that.

Enter MXE. It's a kind of KDE emerge for Linux: a bunch of Makefiles that can do a cross-platform build. It comes with a huge set of pre-configured libraries, though unfortunately not everything we need.
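
For the curious, getting started with MXE looks roughly like this (a sketch; the exact target and package list depend on what you are building):

git clone https://github.com/mxe/mxe.git
cd mxe
# cross-build Qt (and all its dependencies) for 64-bit Windows
make MXE_TARGETS=x86_64-w64-mingw32.shared qtbase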

So, using MXE, I built Qt and most dependencies. Still missing are Vc, OpenColorIO, GSL, Poppler-qt5 and openjpeg. I also needed to remove some of the hacks we did to make Krita compile with MSVC: we had added a couple of dummy header files to provide things like ntohl and friends (these days superseded by QtEndian). A 3rd-party library we embed, xcftools, had its own equivalent of that stuff. But apart from that...

Everything mostly built out of the box, and the result runs in Wine (as evidenced by the "native" file dialog):

What's left to do? Build the remaining dependencies, add translations, create a packaging script (where's windeployqt?), and test.

Interview with Esfenodon

Yaiza_Happy_Esfenodon____

Could you tell us something about yourself?

My name is Iván R. Arós, alias “Esfenodon”. I have been working in computer graphics for the last 13 years, but I think I’ve loved art from the first day I can remember something.

I studied Arts at Vigo University, and I have always been using traditional tools to draw and paint. I realised early on that the more I painted with acrylics or oils, watercolors or ink, the better I became with digital tools.

That’s something I always recommend in my work.

Currently I work as Art Director in a small studio at the Vigo University. In this studio, Divulgare, we create science outreach videos: 3D, 2D, real video… In this studio I have had the incredible opportunity to join my two passions, Science and Art, working closely with scientists. Every day I paint something, and have five or more ideas I would like to paint the next day.

Do you paint professionally, as a hobby artist, or both?

I think that when you paint professionally it’s almost impossible not to do it as a hobby too. Maybe I paint different things at work and at home, but for me it is the same. Professional and hobby. Both.

What genre(s) do you work in?

That’s a very complex question for me. I think I like realism, but when I’m painting I’m always searching for a way to tell a story beautifully, using as few elements as possible. But I constantly come back to realism. So I’m travelling from one style to another all the time. Scientific illustration, people, sci-fi scenes, or just cartoon characters. I think I don’t have a genre or style because I want to try many different things. My work is about joining science and art, to create some kind of visual attraction while the observer obtains information, so I’m always searching for some beautiful style that allows me to tell the story the scientists want to tell. But when I’m at home, I simply draw and paint.

Whose work inspires you most — who are your role models as an artist?

I have so many role models I don’t know where to start. Every time I’m watching someone work I think “oh, that’s awesome, I should try that”, or “hey, I think I can combine this style with that other”. But if I have to name some names, surely I would say Lluís Bargalló, an awesome illustrator in Spain, Ink Dwell, who is putting fresh ideas into scientific illustration, the light of Goro Fujita and Ilya Kuvshinov, the atmosphere of Loish, and many many others.

How and when did you get to try digital painting for the first time?

I have always been painting with computers. I remember Deluxe Paint on an Amstrad PC so many years ago. Just with 16 colours. It was fun. But I started painting seriously around 2005. I spent some years painting with a mouse. With my first tablet in 2009 I started to understand that digital and traditional painting were almost the same.

What makes you choose digital over traditional painting?

Really I haven’t chosen digital over traditional. I think that when you like to paint you always paint. Digital, traditional, or in the sand using your toe. You can’t stop painting. Maybe I do more digital than traditional because of the speed. It allows me to test more styles faster, and make crazy stuff that I can throw on a hard drive and think about it later. I’ve got a computer or a tablet everywhere. It’s great having the possibility to paint any time. Digital allows it. And I enjoy it a lot. Digital painting is enjoyable.

How did you find out about Krita?

I’m always searching for ways to use as much open source software as possible at the University. Maybe it was as simple as searching Google for it. Krita software, hmmm, interesting, let’s give it a try. Maybe the Krita name was familiar to me because some time ago I read about a collection of open source tools for audiovisual creativity.

What was your first impression?

My first impression was “wow”. The software was very fast. Very light on the computer. And I felt like I was drawing on paper. This is important to me, because with other software I usually draw on paper, then scan, then paint. But I like to draw directly on the computer, so it’s frustrating when I can’t draw digitally as I draw on paper. With Krita I can draw.

What do you love about Krita?

I have been using so much software. Every time I need to start using new tools I feel tired. New interfaces, new tools. I work as a 3D animator too, and I understand that I must stay up to date, reading and learning. But sometimes I just want to concentrate on the artistic field, not the technical. Krita is just that. Open. Paint. It’s great to have a very simple program that allows you to have all the tools you need at the same time.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I like Krita a lot. Maybe more integration with tablets, or smoother controls with the most-used digital tablets on the market. Sometimes it’s hard to set up some express keys. Maybe it’s not a Krita problem but a problem of the tablet maker :-)

What sets Krita apart from the other tools that you use?

Speed. I’m abandoning other tools because I love speed. Krita is relaxing software for me. Everything works as expected. In a few hours I was painting with Krita as my everyday software.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Maybe P-B-P-B. It was a lot of fun. A pretty girl and a beautiful owl. I like owls. I was like “I think I can do scientific illustrations in Krita… why not put an owl here?” To be honest, someone told me to put in an owl, hahaha.

P-B-P-B___Esfenodon___

What techniques and brushes did you use in it?

I like to use hard flat brushes. I don’t want to hide the brush. Strokes can give a lot of expression. Block basic is a great brush. I often start out using that. It’s easy to create custom brushes but some of the basic presets are great.

I try not to use many layers or “control z”. If something is wrong I try to resolve it by painting over. Sometimes you find something interesting that way.

P-B-P-B_process__Esfenodon___

Where can people see more of your work?

I have a flickr account (https://www.flickr.com/photos/100863111@N04/). It’s not a professional flickr. It is just where I upload some experiments.

I upload a lot of stuff to my twitter account too: @esfenodon. I’m always painting or drawing something.

My professional work can be seen in the Vimeo Divulgare channel: https://vimeo.com/user1296710

Anything else you’d like to share?

I would like to thank all the people who make Krita possible, and recommend it for everyone who wants to try digital painting. Thanks!

April 09, 2016

Krita 3.0: First Alpha Release

On the road to Krita 3.0, we’re releasing today the first alpha version. This has all the features that will be in 3.0, and contains the translations, but is still unstable. We’re fixing bugs all the time, but there’s still plenty to fix! That said, we think we nailed the worst problems and we’d love for you all to give this version a try! The final release is planned for May 1st.

Screenshot_20160409_212626

What’s new?

A short overview of new features compared to Krita 2.9:

  • Tools for creating hand-drawn animations
  • Instant Preview for working with big brushes and large canvases
  • Rulers, guides, grids and snapping
  • Revamped layer panel
  • Loading and saving GIMP brushes
  • and much, much more…

Krita 3.0 is also based on Qt 5 and the KF5 Framework libraries.

Since the last development build, we focused on fixing bugs and improving performance.

Beyond the mountain of bug fixes, there are some other changes and improvements that we made. This is a list of improvements since the last pre-alpha release.

New Features

  • You can now move multiple selected layers at once
  • And move masks with Ctrl + PgUp/PgDn
  • Updated to G’MIC 1.7 (see release notes)
  • Updated Design templates

We also removed the print and print preview options; printing in Krita has never worked well, and after porting to Qt5 broke completely.

User Interface and Usability Improvements

  • Splash screen shows what area is loading on startup
  • Updated Grids and Guides Tool Options UI elements
  • Some checkboxes have been replaced with lock icons like the crop and geometry tool options
  • Global Input pressure curve now has labels on the axes (Settings > Configure Krita > Tablet Settings).
  • Use highlighted color for the selected tool in toolbox (easier to see)
  • Resource manager now has separate buttons for importing resources. This improves the stability of this area.

Screenshot_20160409_212649

Known Issues

We’re fixing bugs like crazy, but there are still a number of known bugs. Please help us by testing this alpha release and checking whether those bugs are still valid! Bug triaging is an awesome way of becoming part of our community.

Download

There are two Windows versions: an MSI installer and a portable zip archive. The MSI installer also contains an explorer shell extension that allows you to see thumbnails of Krita files in Explorer. The shell extension was written by Alvin Wong.

The OSX disk image still has the known issue that if OpenGL is enabled, the brush outline cursor, grids, guides and so on are not visible. We’re working on that, but don’t expect to have rewritten the canvas before 3.0 will be released.

The Linux appimage should run on any Linux distribution released since 2012. After downloading, make the appimage executable and run it. No installation is needed.
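
Concretely, running the appimage is just two commands (the file name here is a placeholder for whatever you downloaded):

chmod +x krita-3.0-alpha.appimage
./krita-3.0-alpha.appimage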

Since this is the first official 3.0 release, we also have source code!

April 08, 2016

Pre-LGM Photowalk



Time to take some photos!

It’s that time of year again! The weather is turning mild, the days are smelling fresh, and a bunch of photography nerds are all going to get together in a new country to roam around and (possibly) annoy locals by taking a ton of photographs! It’s the Pre-Libre Graphics Meeting photowalk of 2016!

Come join us the day before LGM kicks off to have a stroll through a lovely park and get a chance to shoot some photos between making new friends and having a pint.

Thanks to the wonderful work by the local LGM organizing team, we are able to invite everyone out to the photowalk on Thursday, April 14th the day before LGM kicks off.

Furtherfield Logo

They were able to get us in touch with the kind folks at Furtherfield Gallery & Commons in Finsbury Park. They’ve graciously offered us the use of their facilities at the Furtherfield Commons as a base to start from. So we will meet at the Commons building at 10:00 on Thursday morning.

Pre-LGM Photowalk
10:00 (AM), Thursday, April 14th
Furtherfield Commons
Finsbury Gate - Finsbury Park
Finsbury Park, London, N4 2NQ

An overview of the photowalk venue relative to the LGM venue at the University of Westminster, Harrow:

If you would like to join us but can’t make it to the Commons by 10:00, email me and let me know. I’ll try my best to make arrangements to meet up so you can join us a little later. I can’t imagine we’d be very far away (likely somewhere relatively nearby in the park).

We’ll plan on meandering through the park with frequent stops to shoot images that strike our fancy. I will personally be bringing along my off-camera lighting equipment and a model (Mairi) to pose for us during the day, in case anyone wants to play with, or learn a little about, that type of photography.

There is no set time for finishing up. I figured we would play it by ear through lunch and possibly all finish up at a nice pub together (hopefully taking advantage of the golden-hour light at the end of the day).

In the spirit of saying “Thank you!” and sharing, I have also offered the Furtherfield folks our services for headshots and architectural/environmental shots of the Commons and Gallery spaces. I will certainly be taking these images for them, but if anyone else wants to pitch in and help with the effort, that would be very welcome!

Dot in the Leipzig Market, 2014 Dot in the Leipzig Market from the 2014 Pre-LGM photowalk.

Speaking of which, if you plan on attending and would like to explore some particular aspect of photography please feel free to let me know. I’ll do my best to match folks up based on interest. I sincerely hope this will be a fun opportunity to learn some neat new things, make some new friends, and to maybe grab some great images at the same time!

If there are any questions, please don’t hesitate to reach out to me!
patdavid@gmail.com
patdavid on irc://irc.gimp.org/#gimp

April 06, 2016

Happy Birthday DISCUSS.PIXLS.US



Where did the time go?!

For some reason I was checking my account on the forums earlier today and noticed that it was created in April 2015. On further inspection it looks like my and @darix’s accounts were created on April 2nd, 2015.

(Not to be confused with the main site because apparently it took me about 8 months to get a forum stood up…)

Which means that the forums have been around for just over a year now?!

So, Happy Birthday discuss!

We’re just over a year old, with just under 500 users on the forum!

For fun, I looked for the oldest (public) post we had and it looks like it’s the “Welcome to PIXLS.US Discussion“ thread. In case anyone wanted to revisit a classic…

THANK YOU so much to everyone who has made this an awesome place to be and nerd out about photography and software and more! Since we started, we have migrated the official G’MIC forums here, as well as our friends at RawTherapee! We’ve been introduced to some awesome projects like PhotoFlow and Filmulator. And everyone has just been amazing, supportive, and fun to be around.

As I posted in the original Welcome thread…

April 05, 2016

Modifying a git repo so you can pull without a password

There's been a discussion in the GIMP community about setting up git repos to host contributed assets like scripts, plug-ins and brushes, to replace the long-stagnant GIMP Plug-in Repository. One of the suggestions involves having lots of tiny git repos rather than one that holds all the assets.

That got me to thinking about one annoyance I always have when setting up a new git repository on github: the repository is initially configured with an ssh URL, so I can push to it; but that means I can't pull from the repo without typing my ssh password (more accurately, the password to my ssh key).

Fortunately, there's a way to fix that: a git configuration can have one url for pulling source, and a different pushurl for pushing changes.

These are defined in the file .git/config inside each repository. So edit that file and take a look at the [remote "origin"] section.

For instance, in the GIMP source repositories, hosted on git.gnome.org, instead of the default of url = ssh://git.gnome.org/git/gimp I can set

pushurl = ssh://git.gnome.org/git/gimp
url = git://git.gnome.org/gimp

(disclaimer: I'm not sure this is still correct; my gnome git access stopped working -- I think it was during the Heartbleed security fire drill, or one of those -- and never got fixed.)

For GitHub the syntax is a little different. When I initially set up a repository, the url comes out something like url = git@github.com:username/reponame.git (sometimes the git@ part isn't included), and the password-free pull URL is something you can get from github's website. So you'll end up with something like this:

pushurl = git@github.com:username/reponame.git
url = https://github.com/username/reponame.git
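
If you'd rather not edit .git/config by hand, git can make the same change from the command line; git remote set-url writes the url, and its --push variant writes the pushurl:

git remote set-url origin https://github.com/username/reponame.git
git remote set-url --push origin git@github.com:username/reponame.git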

Automating it

That's helpful, and I've made that change on all of my repos. But I just forked another repo on github, and as I went to edit .git/config I remembered what a pain this had been to do en masse on all my repos; and how it would be a much bigger pain to do it on a gazillion tiny GIMP asset repos if they end up going with that model and I ever want to help with the development. It's just the thing that should be scriptable.

However, the rules for what constitutes a valid git passwordless pull URL, and what constitutes a valid ssh writable URL, seem to encompass a lot of territory. So the quickie Python script I whipped up to modify .git/config doesn't claim to handle everything; it only handles the URLs I've encountered personally on Gnome and GitHub. Still, that should be useful if I ever have to add multiple repos at once. The script: repo-pullpush (yes, I know it's a terrible name) on GitHub.

April 04, 2016

Lighting Diagrams



Help Us Build Some Assets!

Community member Eric Mesa asked on the forums the other day if there might be some Free resources for photographers that want to build a lighting diagram of their work. These are the diagrams that show how a shot might be set up with the locations of lights, what types of modifiers might be used, and where the camera/photographer might be positioned with respect to the subject. These diagrams usually also include lighting power details and notes to help the production.

It turns out there wasn’t really anything openly available and permissively licensed. So we need to fix that…

These diagrams are particularly handy for planning a shoot conceptually or explaining what the lighting setup was to someone after the fact. For instance, here’s a look at the lighting setup for Sarah (Glance):

Sarah (Glance) by Pat David Sarah (Glance)
Sarah (Glance) Lighting Diagram YN560 full power into a 60” Photek Softlighter, about 20” from subject.
She was actually a bit further from the rear wall…

There are a few different commercial or restrictive-licensed options for photographers to create a lighting diagram, but nothing truly Free.

So thanks to the prodding by Eric, I thought it was something we should work on as a community!

I already had a couple of simple, basic shapes created in Inkscape for another tutorial so I figured I could at least get those files published for everyone to use.

I don’t have much to start with but that shouldn’t be a problem! I already had a backdrop, person, camera, octabox (+grid), and a softbox (+grid):

Lighting Diagram Assets

PIXLS.US Github Organization

I already have a GitHub organization setup just for PIXLS.US, you can find the lighting-diagram assets there:

https://github.com/pixlsus/pixls-lighting-diagram

Feel free to join the organization!

Even better: join the organization and fork the repo to add your own additions and to help us flesh out the available diagram assets for all to use! From the README.md on that repo, I compiled a list of things I thought might be helpful to create:

  • Cameras
    • DSLR
    • Mirrorless
    • MF
  • Strobes
    • Speedlight
    • Monoblock
  • Lighting Modifiers
    • Softbox (+ grid?)
    • Umbrella (+ grid?)
    • Octabox (+ grid?)
    • Brolly
  • Reflectors
  • Flags
  • Barn Doors / Gobo
  • Light stands? (C-Stands?)
  • Environmental
    • Chairs
    • Stools
    • Boxes
    • Backgrounds (+ stands)
  • Models

If you don’t want to create something from scratch, perhaps grab the files and tweak the existing assets to make them better in some way? See the sketch below for one way to get started.
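
A minimal contribution loop might look like this, assuming you have already forked the repo on GitHub (YOURUSER and the commit message are placeholders):

git clone https://github.com/YOURUSER/pixls-lighting-diagram.git
cd pixls-lighting-diagram
# add or tweak SVG assets, then:
git add .
git commit -m "Add a monoblock strobe shape"
git push origin master

Then open a pull request against the pixlsus repository.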

Hopefully we can fill out the list fairly quickly (as it’s a fairly limited subset of required shapes). Even better would be if someone picked up the momentum to possibly create a nice lighting diagram application of some sort!

The files that are there now are all licensed Creative Commons By-Attribution, Share-Alike 4.0.

April 2016 drawing challenge

The topic of the new drawing challenge is Mythical Creatures. Here it is on the forum. Post your entries on or before April 23, 12:00am UTC, so we can vote in the last week.

Happy drawing!

April 03, 2016

Liquify, liquify?

Most modules in darktable work on changing pixels' color, lightness, etc. Few modules move pixels, and when they do, they do it in a very constrained way: rotating, fixing lens distortions, or removing spots.

The liquify module offers more ways to move pixels around by applying free-style distortions to parts of the image. There are three tools to help with that:

  • point
  • line
  • curve

liquify-0

Each tool is based on nodes. A point is given by a single node, a line or curve by a set of nodes which define the path.

Next to the count, in order we have the following tools:

  • hide/show the warps
  • the point tool
  • the line tool
  • the curve tool
  • the node edit tool

Let's see what a node does:

liquify-1

  • center : the central point can be dragged with the mouse to move the node around
  • radius : the radius describes the area of the applied effect; the distortion occurs only inside this radius. It is possible to increase the radius using the small dot on the circle.
  • strength vector : the vector starting from the center describes the direction and the strength of the distortion. The strength depends on the length of the vector.

The point, line and curve tools are all based on nodes as described above. A line, for example, is a set of nodes linked together.

Point Tool

A point is formed by a single node. In a point, the strength vector has three different modes, which are toggled using ctrl-click on the strength vector itself.

  • linear : the linear mode makes the distortion linear inside the circle, starting from the opposite side of the strength vector and following the strength vector's direction. This is the default mode.
  • radial growing : in this mode the strength vector's effect is radial, starting with a strength of 0% in the center and growing when going away from the center.
  • radial shrinking : in this mode the strength vector's effect is radial, starting with a strength of 100% in the center and shrinking when going away from the center.

liquify-4

liquify-3

Furthermore it is possible to set the feathered effect by clicking on the center of the circle.

liquify-2

  • default : linear from the center to the radius
  • feathered : two control circles are displayed and can be used to feather the strength of the effect.

Line Tool

liquify-5

A line is a set of points linked together; the effect is interpolated by a set of strength vectors.

It is possible to add a control point on a line by ctrl-click on a segment.

A right-click on a segment will remove the shape completely.

A ctrl-alt-click on a segment will change it to a curve segment.

Curve Tool

liquify-6

A curve is a set of points linked together; the effect is interpolated as a Bézier curve by a set of strength vectors.

It is possible to add a control point on a line by ctrl-click on a segment.

A right-click on a segment will remove the shape completely.

A ctrl-alt-click on a segment will change it to a line segment.

It is possible to change the way the points of the curve are linked together by using ctrl-click on the center. There are four modes, which correspond to different ways of handling the two Bézier control points:

  • autosmooth : control points are computed automatically to always give a smooth curve; this is the default mode, in which the control points are not displayed.
  • cusp : control points can be moved independently.
  • smooth : control points always give a smooth curve.
  • symmetrical : control points are always moved together.

Finally, note that at any moment it is possible to right-click on the image to show or hide the liquify controls.

We feel that such a tool will be quite handy in studio photography, but not only there.

April 01, 2016

Running on non-x86 platforms

For many years darktable would only run on x86 CPUs that also support at least SSE2. While that nowadays covers almost everything that looks like a PC, it's still limiting. Consequently Roman sat down and started work on dropping that hard requirement. While his work isn't complete yet, it's definitely becoming useful. So with a little tweaking you can, for example, use the development versions on an ARM device like the Raspberry Pi. Together with a touchscreen, that has the potential to make a fun little package.

At the moment this is still in heavy development and you have to compile darktable yourself, but we expect it to be ready for the next feature release.
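
If you want to try it today, a from-source build goes roughly like this (a sketch; consult the project's build instructions for your platform, and expect to pass extra flags on non-x86 machines):

git clone https://github.com/darktable-org/darktable.git
cd darktable
# build and install into a prefix of your choosing
./build.sh --prefix /opt/darktable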

If you want to meet us and see stuff like this live you should come to LGM in London. If you can't go we'd like to ask you for a little donation to the Pixls.us Pledgie. The money collected will be used to pay our stay in London.

March 30, 2016

Wed 2016/Mar/30

  • Changeset Consulting is up and running

    One of the great things about GNOME being a training ground for experience and talent is all the small businesses that have sprung up around it.

Sumana Harihareswara, who is an extremely deep thinker (and has a blog to prove it), has started Changeset Consulting, to help people and companies manage their open source projects.

    If you know a business that is struggling with managing its open source dependencies, they might be interested in hiring Changeset to make sure bugs get escalated and fixed, and new releases come out on a dependable cadence. Changeset can also help companies with their initial open source releases, or with developer onboarding audit and revamp. Or, if you know an open source project that has been trying to get the next version out for several months, or to prepare for summer interns or a hack week, Changeset can expedite a release, bring in new volunteers, clean out the bug tracker and the wiki, and plan out a useful in-person sprint.

    Sumana is a great person to work with. I hope her business does great!

March 29, 2016

darktable 2.0.3 released

we're proud to announce the third bugfix release for the 2.0 series of darktable, 2.0.3!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.0.3.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.0.3.tar.xz
a03e5c1d786799e63c8b4a9f32e9e6f27b3a7d7ab0bbbb7753a516e630490345  darktable-2.0.3.tar.xz
$ sha256sum darktable-2.0.3.dmg 
0568d2d2551cfd2b8a55e8ff111857588f9fb986236bc11bff869ecec68ddebd  darktable-2.0.3.dmg

and the changelog as compared to 2.0.2 can be found below.

Bugfixes

  • Actually allow printing with ctrl-p shortcut as advertised in the tooltip
  • Fix scrolling of the histogram to change the exposure
  • Fix the restricted histogram when color picking an area
  • Fix a bug in color reconstruction
  • Fix an OpenCL bug in tonecurve
  • Fix a small memory leak
  • Better error messages in darktable-cli
  • Fix params introspection for unsigned types
  • Only depend on glib 2.32

Base Support

  • Fujifilm X70
  • Olympus PEN-F
  • Panasonic DMC-LX3 (1:1)

White Balance Presets

  • Canon EOS 1200D
  • Canon EOS Kiss X70
  • Canon EOS Rebel T5
  • Canon EOS 5DS
  • Canon EOS 5DS R
  • Canon EOS 750D
  • Canon EOS Kiss X8i
  • Canon EOS Rebel T6i
  • Canon EOS 760D
  • Canon EOS 8000D
  • Canon EOS Rebel T6s
  • Fujifilm X-Pro2
  • Fujifilm X20
  • Fujifilm X70
  • Olympus PEN-F

Noise Profiles

  • Canon EOS 5DS R
  • Fujifilm X20
  • Olympus E-PL6

Translation updates

  • Danish
  • German
  • Swedish

March 28, 2016

Interview with Eris Luna

undertale_persona800

Could you tell us something about yourself?

Well I’ve been drawing for most of my life and I’ve always wanted to make art for games, so I pursued a degree at Collins College and I graduated with my Bachelor of Science in Game Design.

Do you paint professionally, as a hobby artist, or both?

Both! I’m freelance at the moment. While I enjoy drawing for fun, and most of my experience has been from doing it as a hobby, I’m more than ecstatic to draw something for someone, and when they’re offering to pay for it, it does give that motivation to make it the best I can do. After all, if you’re paying for something, you expect the artist to put in their heart and soul!

What genre(s) do you work in?

Would you consider “Anime and Manga Inspired” as a genre? Most of my art borrows from Anime styles, typically Takehito Harada’s style.

Other than that, I can usually imitate any genre and style if I really want to.

Whose work inspires you most — who are your role models as an artist?

Takehito Harada is hands down my favorite artist; I own all of his art books and study his drawing style a lot. I hope to be able to replicate his style someday.

How and when did you get to try digital painting for the first time?

The first time I ever painted digitally was in 2013, when I had just started my college classes and had gotten my first graphics tablet ever. Before that I mostly stayed away from digital painting: I lacked a tablet, and using vectors or a mouse was foreign and confusing to me.

What makes you choose digital over traditional painting?

It’s so much more convenient! I had always wanted to go digital because it has nigh-infinite resources; that, coupled with the benefit of having a wonderful community out there that creates many cool tools and brushes, makes the drawing experience a blast. Before digital I could spend up to two days working on a single piece, and now I can finish a drawing in hours; it is a real time saver.

How did you find out about Krita?

My college instructor and fellow classmates showed me the Kickstarter back in May 2015; I was excited from the very first moment I saw it.

What was your first impression?

It looked really cool. Both my instructor and my classmates seemed impressed by the program, which of course had an effect on me, since I looked up to them as artists.

What do you love about Krita?

The community and developer support; it’s really awesome, and I find it gives me the encouragement to keep making more works.

What do you think needs improvement in Krita? Is there anything that really annoys you?

The ‘file layer’ tool: it’s a really powerful tool, but it’s a little bit lacking. I originally thought it was just like the “smart object” tool from Photoshop, only to find out you can’t scale or transform your ‘file layer’. That is a shame, because you could use that tool to draw higher-resolution details in another file, import it, scale it down just like a ‘smart object’ in Photoshop, and retain the detail. I would really like to see that feature improved upon in the future.

What sets Krita apart from the other tools that you use?

Well, for one, it doesn’t take forever to do things in. Even when drawing huge images I don’t get the slowdown or pen lag I get while using other products!

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I’d say it’d be my most recent finished piece as of this time, my Undertale OC Eris Luna.

I used a ton of brushes and played with more features while drawing it, and I got to learn quite a bit from it.

What techniques and brushes did you use in it?

I mostly stuck to the Default Ink brushes and FX Brushes, but I used a handful of the ‘Deevad’ brushes in it as well.

A technique I use for shading involves about three to four layers painted over the image in solid colours set to 25% opacity. Then I blend all of the shading layers out to create a soft effect, or at least the effect I desire. To be honest I don’t really follow any strict techniques; I have a habit of just doing anything necessary to get the picture looking the way I want it!

Where can people see more of your work?

My twitch channel http://www.twitch.tv/erisluna and my Deviantart http://educated-zombie.deviantart.com

Anything else you’d like to share?

I hope you had a wonderful read. I’m hoping for my Twitch channel to begin growing; right now I’m very small, but I feel that over time, and with more works, I may even become a name people recognize! Thank you for this wonderful opportunity and I look forward to making many more wonderful works in Krita.

March 27, 2016

Free Krita tutorials out every week on GDquest

gaq-vol1-banner-800

(Guest post by Nathan Lovato)

Game Art Quest, the 2D game art training series with Krita, is finally coming out publicly! Every week, two new videos will be available on YouTube: one on Tuesdays, and one on Thursdays. This first volume of the project won’t cost you a penny. You can thank the Kickstarter backers for that: they are the ones who voted for a free release.

The initial goal was to produce a one-hour long introduction to Krita. But right now, it’s getting both bigger and better than planned! I’m working on extra course material and assignments to make your learning experience as insightful as possible.

Note that this training is designed with intermediate digital art enthusiasts in mind: students, dedicated hobbyists, or young professionals. It might be a bit fast for you if you are a beginner, although it will allow you to learn a lot of techniques in little time.

Subscribe to Gdquest on Youtube to get the videos as soon as they come out: http://youtube.com/c/gdquest

Here are the first two tutorials:

March 26, 2016

Debian: Holding packages you build from source, and rebuilding them easily

Recently I wrote about building the Debian hexchat package to correct a key binding bug.

I built my own version of the hexchat packages, then installed the ones I needed:

dpkg -i hexchat_2.10.2-1_i386.deb hexchat-common_2.10.2-1_all.deb hexchat-python_2.10.2-1_i386.deb hexchat-perl_2.10.2-1_i386.deb

That's fine, but of course, a few days later Debian had an update to the hexchat package that wiped out my changes.

The solution to that is to hold the packages so they won't be overwritten on the next apt-get upgrade:

aptitude hold hexchat hexchat-common hexchat-perl hexchat-python

If you forget which packages you've held, you can find out with aptitude:

aptitude search '~ahold'
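
For what it's worth, plain apt can do the same without aptitude; note that aptitude and dpkg/apt track holds separately, so it's best to pick one tool and stick with it:

# hold via the dpkg selections database, and list current holds
apt-mark hold hexchat hexchat-common hexchat-perl hexchat-python
apt-mark showhold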

Simplifying the rebuilding process

But now I wanted an easier way to build the package. I didn't want to have to search for my old blog post and paste the lines one by one every time there was an update -- then I'd get lazy and never update the package, and I'd never get security fixes.

I solved that with a zsh function:

newhexchat() {
    # Can't set errreturn yet, because that will cause mv and rm
    # (even with -f) to exit if there's nothing to remove.
    cd ~/outsrc/hexchat
    echo "Removing what was in old previously"
    rm -rf old
    echo "Moving everything here to old/"
    mkdir old
    mv *.* old/

    # Make sure this exits on errors from here on!
    setopt localoptions errreturn

    echo "Getting source ..."
    apt-get source hexchat
    cd hexchat-2*
    echo "Patching ..."
    patch -p0 < ~/outsrc/hexchat-2.10.2.patch
    echo "Building ..."
    debuild -b -uc -us
    echo
    echo 'Installing' ../hexchat{,-python,-perl}_2*.deb
    sudo dpkg -i ../hexchat{,-python,-perl}_2*.deb
}

Now I can type newhexchat and pull a new version of the source, build it, and install the new packages.

How do you know if you need to rebuild?

One more thing. How can I find out when there's a new version of hexchat, so I know I need to build new source in case there's a security fix?

One way is the Debian Package Tracking System. You can subscribe to a package and get emails when a new version is released. There's supposed to be a package tracker web interface, e.g. package tracker: hexchat with a form you can fill out to subscribe to updates -- but for some packages, including hexchat, there's no form. Clicking on the link for the new package tracker goes to a similar page that also doesn't have a form.

So I guess the only option is to subscribe by email. Send mail to pts@qa.debian.org containing this line:

subscribe hexchat [your-email-address]

You'll get a reply asking for confirmation.

This may turn out to generate too much mail: I've only just subscribed, so I don't know yet. There are supposedly keywords you can use to limit the subscription, such as upload-binary and upload-source, but the instructions aren't at all clear on how to include them in your subscription mail -- you say keyword, or keyword your-email, but where do you put the actual keywords you want to accept? They offer no examples.

Use apt to check whether your version is current

If you can't get the email interface to work or suspect it'll be too much email, you can use apt to check whether the current version in the repository is higher than the one you're running:

apt-cache policy hexchat

You might want to automate that, to make it easy to check on every package you've held to see if there's a new version. Here's a little shell function to do that:

# Check on status of all held packages:
check_holds() {
    for pkg in $( aptitude search '~ahold' | awk '{print $2}' ); do
        policy=$(apt-cache policy "$pkg")
        # Quote "$policy" so its newlines survive word splitting in shells
        # other than zsh, and grep sees one field per line.
        installed=$(echo "$policy" | grep Installed: | awk '{print $2}')
        candidate=$(echo "$policy" | grep Candidate: | awk '{print $2}')
        if [[ "$installed" == "$candidate" ]]; then
            echo "$pkg : nothing new"
        else
            echo "$pkg : new version $candidate available"
        fi
    done
}

March 24, 2016

Ways to Help Krita: Work on Feature Requests

“vOpenBlackCanvasMischiefPhotoPaintStormToolKaikai has a frobdinger tool! Krita will never amount to a row of beans unless Krita gets a frobdinger tool, too!”

The cool thing about open source is that you can add features yourself, and even if you cannot code, you can talk directly with the developers about the features you need in your workflow. Try that with closed-source proprietary software! But often the communication goes awry, leaving both users with bright ideas and developers with itchy coding fingers unsatisfied.

This post is all about how to work, first together with other artists, then with developers to create good feature requests, feature requests that are good enough that they can end up being implemented.

For us as developers it’s sometimes really difficult to read feature requests, and we have a really big to-do list (600+ items at the time of writing, excluding our own dreams and desires). So a well-written feature proposal is very helpful for us and will lodge the idea better into our consciousness. Conversely, a demand for a frobdinger tool because another application has it, so Krita must have it too, is not likely to get far.

Writing proposals is a bit of an art form in itself, and pretty difficult to do right! Asking for a copy of a feature in another application is almost always wrong because it doesn’t tell us the most important thing:

What we primarily need to know is HOW you intend to use the feature. This is the most important part. All Krita features are carefully considered in terms of the workflow they affect, and we will not start working on any feature unless we know why it is useful and how exactly it is used. Even better, once we know how it’s used, we as developers can start thinking about what else we can do to make the workflow even more fluid!

Good examples of this approach can be found in the pop-up palette using tags, the layer docker redesign of 3.0, the onion skin docker, the time-line dockers and the assistants.

Feature requests should start on the forum, so other artists can chime in. What we want is that a consensus about workflow, about use-cases emerges, something our UX people can then try to grok and create a design for. Once the design emerges, we’ll try an implementation, and that needs testing.

For your forum post about the feature you have in mind, check this list:

  • It is worth investigating first whether Krita already has similar functionality that might be extended to solve the problem. We in fact kind of expect that you have used Krita for a while before making feature requests. Check the manual first!
  • If your English is not very good or you have difficulty finding the right words, make pictures. If you need a drawing program, I heard Krita is pretty good.
  • In fact, mock-ups are super useful! And why wouldn’t you make them? Krita is a drawing program made for artists, and a lot of us developers are artists ourselves. Furthermore, this gets past that nasty problem called ‘communication problems’. (Note: If you are trying to post images from photobucket, pasteboard or imgur, it is best to do so with [thumb=][/thumb]. The forum is pretty strict about image size, but thumb gets around this)
  • Focus on the workflow. You need to prepare a certain illustration, comic, matte painting, you would be (much) more productive if you could just do — whatever. Tell us about your problem and be open to suggestions about alternatives. A feature request should be an exploration of possibilities, not a final demand!
  • The longer your request, the more formatting is appreciated. Some of us are pretty good at reading long incomprehensible texts, but not all of us. Keep to the ABC of clarity, brevity, accuracy. If you format and organize your request we’ll read it much faster and will be able to spend more time giving feedback on the exact content. This also helps other users to understand you and give detailed feedback! The final proposal can even be a multi-page pdf.
  • We prefer it if you read and reply to other people’s requests than to start from scratch. For animation we’ve had the request for importing frames, instancing frames, adding audio support, from tons of different people, sometimes even in the same thread. We’d rather you reply to someone else’s post (you can even reply to old posts) than to list it amongst other newer requests, as it makes it very difficult to tell those other requests apart, and it turns us into bookkeepers when we could have been programming.

Keep in mind that the Krita development team is insanely overloaded. We’re not a big company, we’re a small group of mostly volunteers who are spending way too much of our spare time on Krita already. You want time from us: it’s your job to make life as easy as possible for us!

So we come to: Things That Will Not Help.

There’s certain things that people do to make their feature request sound important but are, in fact, really unhelpful and even somewhat rude:

  • “Application X has this, so Krita must have it, too”. See above. Extra malus points for using the words “industry standard”, double so if it refers to an Adobe file format.
    We honestly don’t care if application X has feature Y, especially as long as you do not specify how it’s used.
    Now, instead of thinking ‘what can we do to make the best solution for this problem’, it gets replaced with ‘oh god, now I have to find a copy of application X, and then test it for a whole night to figure out every single feature… I have no time for this’.
    We do realize that for many people it’s hard to think in terms of workflow instead of “I used to use this in ImagePainterDoublePlusPro with the humdinger tool, so I need a humdinger tool in Krita” — but it’s your responsibility when you are thinking about a feature request to go beyond that level and make a good case: we cannot play guessing games!
  • “Professionals in the industry use this”. Which professionals? What industry? We cater to digital illustrators, matte painters, comic book artists, texture artists, animators… These guys don’t share an industry. This one is peculiar because it is often applied to features that professionals never actually use. There might be hundreds of tutorials for a certain feature, and it still isn’t actually used in people’s daily work.
  • “People need this.” For the exact same reason as above. Why do they need it, and who are these ‘people’? And what is it, exactly, what they need?
  • “Krita will never be taken seriously if it doesn’t have a glingangiation filter.” Weeell, Krita is quite a serious thing, used by hundreds of thousands of people, so whenever this sentence shows up in a feature request, we feel it might be a bit of emotional blackmail: it tries to get us upset enough to work on it. Think about how that must feel.
  • “This should be easy to implement.” Well, the code is open and we have excellent build guides, so why doesn’t the feature request come with a patch, then? The issue with this is that very likely it is not actually all that easy. Telling us how to implement a feature based on a guess about Krita’s architecture, instead of telling us the problem the feature is meant to solve, makes life really hard!
    A good example of this is the idea that because Krita has an OpenGL accelerated canvas, it is easy to have the filters be done on the GPU. It isn’t: the GPU accelerated canvas is currently pretty one-way, and filters would be a two-way process. Getting that two-way process right is very difficult, and it makes the difference between GPU filters being faster than regular filters or being unusable. And that problem is only the tip of the iceberg.

Some other things to keep in mind:

  • It is actually possible to get your needed features into Krita outside of the Kickstarter sprints by funding it directly via the Krita Foundation; you can mail the official email address linked on krita.org for that.
  • It’s also actually possible to start hacking on Krita and make patches. You don’t need permission or anything!
  • Sometimes developers have already had the feature in question on their radar for a very long time. Their thinking might already be quite advanced on the topic, and then they might say things like ‘we first need to get this done’, or write an incomprehensible technical paragraph. This is a developer thinking deeply while they write. You can just ask for clarification if the feedback contains too much technobabble…
  • Did we mention we’re overloaded already? It can easily be a year or two, three before we can get down to a feature. But that’s sort of fine, because the process from idea to design should take months to a year as well!

To summarize: a good feature request:

  • starts with the need to streamline a certain workflow, not with the need for a copy of a feature in another application
  • has been discussed on the forums with other artists
  • is illustrated with mock-ups and examples
  • gets discussed with UX people
  • and is finally prepared as a proposal
  • and then it’s time to find time to implement it!
  • and then you need to test the result

(Adapted from Wolthera’s forum post on this topic).

A New Logo for Hyperkitty

hyperkitty logo

I was working on Fedora Hubs and I needed a nice icon for Hyperkitty for some feed widget mockups I was working on. I really love the updated Pagure logo Ryan Lerch made for pagure.io:

pagure logo

Pagure and Hyperkitty, you know, they are kind of cousins, so they should look like they are part of the same family, no? 🙂

So here’s what I came up with, what do you think?

hyperkitty logo update mockup

(SVG available here.)

Unpackaged Open Font of the Week: Montserrat

montserrat type sample

It’s been quite a while since I’ve done one of these posts – actually, five years – lol – but no reason not to pick an old habit back up! 🙂

Montserrat is a sans serif font created by Julieta Ulanovsky inspired by the street signs of the Montserrat neighborhood of Buenos Aires. It is the font we have used in Fedora for the Fedora Editions logos:

03-treatment

It is also used as the official headline / titling font for Fedora project print materials. Packaging this font is of particular importance to Fedora, since we have started using it as an official font in our design materials. It would be lovely to be able to install it via our software install tools rather than having designers download and install it manually.

Montserrat is licensed under the Open Font License.

 

So, you want to package Montserrat?

Sweet! You’ll want to follow the first steps here next to the ‘if you intend to do some packaging’ header:

Our fonts packaging policy, which the above refers to, is documented here:

And if you have any questions throughout the process, don’t hesitate to ask on the Fedora Fonts SIG mailing list:

 

March 22, 2016

WebKitGTK+ 2.12

We did it again, the Igalia WebKit team is pleased to announce a new stable release of WebKitGTK+, with a bunch of bugs fixed, some new API bits and many other improvements. I’m going to talk here about some of the most important changes, but as usual you have more information in the NEWS file.

FTL

FTL JIT is a JavaScriptCore optimizing compiler that was developed using LLVM to do low-level optimizations. It’s been used by the Mac port since 2014, but we hadn’t been able to use it because it required some patches for LLVM to work on x86-64 that were not included in any official LLVM release, and there were also some crashes that only happened on Linux. At the beginning of this release cycle we already had LLVM 3.7 with all the required patches, and the crashes had been fixed as well, so we finally enabled FTL for the GTK+ port. But in the middle of the release cycle Apple surprised us by announcing that they had the new FTL B3 backend ready. B3 replaces LLVM and is entirely developed inside WebKit, so it doesn’t require any external dependency. JavaScriptCore developers quickly managed to make B3 work on Linux-based ports, and we decided to switch to B3 as soon as possible to avoid making a new release with LLVM only to remove it in the next one. I’m not going to go into the technical details of FTL and B3, because they are very well documented and probably too boring for most people; the key point is that they improve overall JavaScript performance in terms of speed.

Persistent GLib main loop sources

Another performance improvement introduced in WebKitGTK+ 2.12 has to do with main loop sources. WebKitGTK+ makes extensive use of the GLib main loop: it has its own RunLoop abstraction on top of the GLib main loop that is used by all secondary processes and most of the secondary threads as well, scheduling main loop sources to send tasks between threads. JavaScript timers, animations, multimedia, the garbage collector, and many other features are based on scheduling main loop sources. In most cases we are actually scheduling the same callback all the time, but creating and destroying the GSource each time. We realized that creating and destroying main loop sources caused overhead with a significant impact on performance. In WebKitGTK+ 2.12 all main loop sources were replaced by persistent sources, which are normal GSources that are never destroyed (unless they are not going to be scheduled anymore). We simply use the GSource ready time to make them active/inactive when we want to schedule/stop them.

Overlay scrollbars

GNOME designers have been asking us to implement overlay scrollbars since they were introduced in GTK+, because WebKitGTK+ based applications didn't look consistent with other GTK+ applications. Since WebKit2, the web view is no longer a GtkScrollable; it is scrollable by itself, using either the native scrollbar appearance or the one defined in the CSS. This means we have our own scrollbar implementation that we try to render as close as possible to the native ones, and that's why it took us so long to find the time to implement overlay scrollbars. WebKitGTK+ 2.12 finally implements them, and they are, of course, enabled by default. There's no API to disable them, but we honor the GTK_OVERLAY_SCROLLING environment variable, so they can be disabled at runtime.
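For example, to launch a WebKitGTK+ application with overlay scrollbars disabled (Epiphany here is just an arbitrary example of such an application):

GTK_OVERLAY_SCROLLING=0 epiphany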

But appearance was not the only thing that made our scrollbars inconsistent with the rest of the GTK+ applications; we also had different behavior for the actions performed by the mouse buttons, and some other bugs, which are all fixed in 2.12.

The NetworkProcess is now mandatory

The network process was introduced in WebKitGTK+ 2.4 to make it possible to use multiple web processes. We had two different paths for loading resources depending on the process model in use: with the shared secondary process model, resources were loaded by the web process directly, while with the multiple web process model, the web processes sent requests to the network process to be loaded. Maintaining these two different paths was not easy, with some bugs happening only in one model or the other, and the network process also gained features, like the disk cache, that were not available in the web process. In WebKitGTK+ 2.12 the non-network-process path has been removed, and the shared single process model has become the multiple web process model with a limit of 1. In practice this means that a single web process is still used, but all networking happens in the network process.

NPAPI plugins in Wayland

I have read in many bug reports and mailing lists that NPAPI plugins will not be supported in Wayland, so things like http://extensions.gnome.org will not work. That's not entirely true. NPAPI plugins can be windowed or windowless. Windowed plugins are those that use their own native window for rendering and handling events; on X11-based systems this is implemented using the XEmbed protocol. Since Wayland doesn't support XEmbed and doesn't provide an alternative either, it's true that windowed plugins will not be supported in Wayland. Windowless plugins don't require any native window: they use the browser window for rendering, and events are handled by the browser as well, using an X11 drawable and X events on X11-based systems. So it's also true that windowless plugins that have a UI will not be supported in Wayland either. However, not all windowless plugins have a UI, and there's nothing X11-specific in the rest of the NPAPI plugin API, so there's no reason why those can't work in Wayland. And that's exactly the case of http://extensions.gnome.org, for example. In WebKitGTK+ 2.12 the X11 implementation of NPAPI plugins has been factored out, leaving the rest of the API implementation common and available to any window system. That made it possible to support windowless NPAPI plugins with no UI in Wayland, and on any other non-X11 system, of course.

New API

And as usual we have completed our API with some new additions:

 

March 21, 2016

PlayRaw (Again)


PlayRaw (Again)

The Resurrectioning

On the old RawTherapee forums they used to run a contest where a single raw file was shared among the members to see how everyone would approach processing it from the same starting point. They called it PlayRaw. It seemed to really bring out some great work from the community, so I thought it might be fun to start doing something similar again here.

I took a (relatively) recent image of Mairi and decided to see how it would be received (I’d say fairly well given the responses). This was my result from the raw file that I called Mairi Troisième:

Mairi Troisieme

I made the raw file available under a Creative Commons, By-Attribution, Non-Commercial, Share-Alike license so that anyone could freely download and process the file as they wanted to.

The only thing I asked for was to see the results, and possibly the processing steps through either an XMP or PP3 sidecar file (darktable and RawTherapee respectively).

Here’s a montage of the results from everyone:

I loved being able to see what everyone’s approaches looked like. It’s neat to get a feel for all the different visions out there among the users and there were some truly beautiful results!

If you haven’t given it a try yourself yet, head on over to the [PlayRaw] Mairi Troisieme thread to get the raw file and try it out yourself! Just don’t forget to show us your results in the topic.

I'll be soliciting suggestions for a new image to kick off another round of processing soon.

Speaking of Mairi

Don’t forget that we still have a Pledgie Campaign going on to help us offset the costs of getting everyone together at the 2016 Libre Graphics Meeting in London this April!

Click here to lend your support to PIXLS.US at Libre Graphics Meeting 2016 and make a donation at pledgie.com!

Donations go to help cover the costs of various projects coming together to meet, photograph, discuss, and hack at things. Please consider donating, as every little bit helps us immensely! If you can't donate, then please consider helping us raise awareness of what we're trying to do! Either link the Pledgie campaign to others or let them know we're here to help and share!

Even better is if you’re in the vicinity of London this April 15–18! Come out and join us as well as many other awesome Free Software projects all focused on the graphics community! We (PIXLS) will be conducting photowalks and meet-ups the Thursday before LGM kicks off as well!

Oh, and I finally did convince Mairi to join us through the weekend to model for us as needed. She’s super awesome and worth raising a glass to/with! Even more reason to come out and join us!

Mairi Deux

March 20, 2016

Stellarium 0.14.3

After two months of development, the Stellarium development team is proud to announce the third bugfix release in the 0.14.x series: version 0.14.3. This version contains a few bug fixes (backported from version 0.15.0).

A huge thanks to our community whose contributions help to make Stellarium better!

List of changes between version 0.14.2 and 0.14.3:
- Added Bengali description for landscapes (LP: #1548627)
- Added background transparency in Oculars plugin (LP: #1511393)
- Fixed serial port issue on Windows version (LP: #1543813)
- Fixed MESA mode on Windows (LP: #1509735)
- Fixed Stellarium crashes in ocular view of Saturn/Neptune/Uranus (LP: #1495232)
- Fixed artifacts in rendering of Mercury in the Sun (LP: #1533647)
- Fixed loading scenes for Scenery 3D plugin (LP: #1533069)
- Fixed movement of radiant when time is switched manually (LP: #1535950)
- Fixed changing name of planet (LP: #1548008)

March 17, 2016

Changing X brightness and gamma with xrandr

I switched a few weeks ago from unstable ("Sid") to testing ("Stretch") in the hope that my system, particularly X, would break less often. The very next day, I updated and discovered I couldn't use my system at night any more, because the program I use to reduce the screen brightness by tweaking X gamma no longer worked. Neither did other related programs, such as xgamma and xcalib.

The Dell monitor I use doesn't have reasonable hardware brightness controls: strangely, the brightness button works when the monitor is connected over VGA, but if I want to use the sharper HDMI connection, brightness adjustment no longer works. So I depend on software brightness adjustment in order to use my computer at night when the room is dim.

Fortunately, it turns out there's a workaround. xrandr has options for both brightness and gamma:

xrandr --output HDMI1 --brightness .5
xrandr --output HDMI1 --gamma .5:.5:.5

I've always put xbrightness on a key, so I can use a function key to adjust brightness interactively up and down according to conditions. So a command that sets brightness to .5 or .8 isn't what I need; I need to get the current brightness and set it a little brighter or a little dimmer. xrandr doesn't offer that, so I needed to script it.

You can get the current brightness with

xrandr --verbose | grep -i brightness

But I was hoping there would be a more straightforward way to get brightness from a program. I looked into Python bindings for xrandr; there are some, but with no documentation and no examples. After an hour of fiddling around, I concluded that I could waste the rest of the day poring through the source code and trying things hoping something would work; or I could spend fifteen minutes using subprocess.call() to wrap the command-line xrandr.

So subprocesses it was. It made for a nice short script, much simpler than the old xbrightness C program that used <X11/extensions/xf86vmode.h> and XF86VidModeGetGammaRampSize(): xbright on github.
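The idea is simple enough that a stripped-down sketch fits in a few lines. This is not the actual xbright script, and the output name is a placeholder; adjust it for your own monitor:

#!/usr/bin/env python
# Read the current brightness from `xrandr --verbose`, nudge it
# up or down, and set it back.
import subprocess, sys

OUTPUT = "HDMI1"   # placeholder: your monitor's output name

def get_brightness():
    out = subprocess.check_output(["xrandr", "--verbose"]).decode()
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("Brightness:"):
            return float(line.split(":")[1])   # first output listed
    return 1.0

def set_brightness(value):
    value = max(0.1, min(value, 1.0))   # clamp so the screen never goes black
    subprocess.call(["xrandr", "--output", OUTPUT,
                     "--brightness", str(value)])

step = -0.1 if len(sys.argv) > 1 and sys.argv[1] == "down" else 0.1
set_brightness(get_brightness() + step)

Bind "script" and "script down" to a pair of function keys and you get the old xbrightness behavior back.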

March 16, 2016

First Krita Book in French!

dessin-et-peinture-numerique-avec-krita

After Japanese and English, here is the first book for French-speaking people who want to get started with Krita! The author is Timothée Giet, long-time user and contributor and the creator of two acclaimed training DVDs for the Krita Foundation, Comics with Krita and Secrets of Krita. There is also a preface by Boudewijn Rempt, the Krita project lead.

“Dessin et peinture numérique avec Krita” is meant for beginning to intermediate users. After a quick introduction of the common basics, it presents several examples of workflow for different kinds of tasks: sketching, character design, illustration, web-comics, storyboard and cut-out. Through these examples, you can learn all the most important concepts and take advantage of digital drawing and painting with Krita.

The book is available in print, in full color! It is also available digitally, as an online HTML version or as a PDF or EPUB download. The digital download files are DRM-free, at least if you buy them directly from the publisher, D-Booker.

Also, the creation of this book contributed a lot to improving the French translation of Krita, with both the author and the publisher sending improvements to the translation team.

If you know some French artists who are not using Krita yet, make sure to share this news with them!

krita-webcomics-aplats

March 14, 2016

3.0 Pre-alpha 3 is out!

Today was an important day for the Krita project! We entered feature freeze! That means that from now on until the release of Krita 3.0, which is planned for April 27th, we won't be working on adding new features; we'll be fixing bugs, fixing bugs and fixing more bugs! If you want to help us identify and triage bugs, read this article: “Ways to Help Krita: Bug Triaging”. It's the first of what's intended to be a series of reference articles on ways to help Krita grow and become better and better.

Since Krita 3.0 is frozen, we know exactly which Kickstarter and Stretch Goal features will be in this release, and which features will be in 3.1 and following releases. Krita 3.0 will have: Instant Preview, Animation Support, Rulers and Guides, Grid Docker, layer multi-selection handling improvements, loading and saving Gimp brushes as images, and a completely revamped layer management panel! As an extra, originally not part of the Kickstarter Stretch Goals, snapping to guides and grids has also been implemented! Work has already started on the on-canvas brush settings editor (HUD), the stacked brushes brush engine and the lazy brush (or anime coloring) tool.

And additionally, Krita 3.0 has been ported to Qt5, which was in itself a huge task, much bigger than we had even imagined. We intend to make Krita 3 fully supported on OSX, too, but that still needs a lot of work. And there are a ton of features that weren’t part of the Kickstarter, of course. Volunteers from all over the world have been adding cool new stuff. We’re really dreading writing the full release announcement for April!

Anyway… Here are the highlights compared to the last pre-alpha build:

Improved Layer Docker

A stupendous amount of work has gone into the layer docker's new look, one of the 2015 Kickstarter stretch goals!

  • More condensed and clean looking to make it easier to use.
  • Layers can have colors associated with them, and you can later filter the layers by color.
  • A lot of new shortcuts for grouping, moving, and working with multiple layers.
  • Ctrl+Alt+G now quickly ungroups layers! (Not in the video below)

See Nathan Lovato’s overview for all the features:

Updated Grid Handling

We removed the Grid Tool… and created a new grid docker that exposes all the grid and guide settings. This makes it much easier and more logical to work with and edit your grids. Krita's grid handling has been streamlined by removing the perspective grid tool (you can find a much more powerful perspective grid in the assistants), and the snap settings docker is gone…

 

gridsguidessnapping
… To be replaced by the Snap Settings pop-up (on Shift+S)!

Snapping

Snapping to grids and guides wasn't part of the original plan (we were even considering making it a 2016 Kickstarter stretch goal), but we implemented snapping for all vector, selection and raster tools! Well, with the exception of the distance measurement tool and the freehand tools, like the freehand brush tool, multibrush, dynabrush and freehand path tools.

Export filters

There are two new export filters:

  • A Spriter exporter! We've worked together with BrashMonkey to create a new export filter that creates sprite maps from Krita images. Based on the Photoshop plugin, this filter is still in its early stages: plenty of known bugs, but very promising.
  • Fazek created a new export filter that can generate CSV-based layered animation projects compatible with TV-Paint.

Gradient Map Filter

Turn your greyscale artwork into color with the gradient map filter. This wasn't planned at all, but surprise special guest hacker Spencer Brown submitted the feature out of the blue! Thank you, Spencer! In the future we want to make it possible to use the gradient map filter as a filter layer, but that isn't possible yet.

gradientmapfilter

The “Greater” Blending Mode

greaterblendmode
Nicolas Guttenberg implemented the “Greater” blending mode to make it easier to create semi-transparent strokes.

Move Tool

There are still known bugs in the move tool, but it also got a really nice additional feature: move increment multipliers, accessible with Shift+arrow keys.

Other Tools

The Crop tool, Assistant editing tool and the Straight line tool got an improved user interface, and the Straight line tool’s on-canvas preview has been improved as well.

Screenshot from 2016-03-12 18-15-58

More…

And there are many more improvements:

  • Shortcut settings have been moved to sit with the other settings
  • The steps for the lighter/darker and similar hotkeys can now be configured in the settings.
  • The HSY selector’s Gamma can be configured in the settings!
  • New cursor options: single pixel black and white, for those who REALLY need precision.
  • Pixel art brush presets
  • And many, many, many bug fixes!

Next

The next time we release, we'll call it a real alpha release! We also intend to make development builds available every week. They'll end up in files.kde.org/krita/3/, with weird version numbers including git hashes, for extra confusion; that is, to make it easier to figure out exactly what went into each build. And we'll try to package the brand new Krita Shell Extension by Alvin Wong for Windows, so your .kra files will finally have thumbnails in Explorer. And we'll fix bugs. And fix bugs. And fix some more bugs!

Downloads

(If you're wondering about the version numbering… This is Krita 3, and it's pre-alpha, so it's not even alpha yet, but a development build. It's the third pre-alpha we've made available (and the last), and the weird string of numbers and letters makes it possible to figure out exactly which version of the source code we built.)

Windows

Download the zip file and unzip it where you want to put Krita.

Run the vcredist_x64.exe installer to install Microsoft’s Visual Studio runtime. (You only need to do this once.)

Then double-click the krita link.

Known issues on Windows:

  • The issue where Krita would show a black window if OpenGL is enabled on certain Intel GPU + driver combinations should be fixed now.
  • The spriter export plugin may not be found.

OSX

Download the DMG file and open it. Then drag the krita app bundle to the Applications folder, or any other location you might prefer. Double-click to start Krita.

Known issues on OSX:

  • We built Krita on El Capitan. The bundle is tested to work on a mid-2011 Mac Mini running Mavericks. It looks like you will need hardware capable of running El Capitan to run this build, but you do not have to have El Capitan itself: you can try running it on an earlier version of OSX.
  • You will not see a brush outline cursor or any other tool that draws on the canvas, for instance the gradient tool. This is known, we’re working on it, it needs the same fix as the black screen you can get with some Intel drivers on Windows. Basically, we need to port a core chunk of Qt functionality to a new version of OpenGL because Apple refuses to implement the OpenGL compatibility profile in their drivers.

Linux

For the Linux builds we now have AppImages! These are completely distribution-independent. To use the AppImage, download it and make it executable in your terminal or using the file properties dialog of your file manager, then run it. Another change is that configuration and custom resources are now stored in the .config/krita.org/kritarc and .local/share/krita.org/ folders of your home folder, instead of .kde or .kde4.
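For example, from a terminal (the file name below is just a placeholder for whatever you actually downloaded):

chmod +x krita-3.0-pre-alpha.appimage
./krita-3.0-pre-alpha.appimage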

Known issues on Linux:

  • Your distribution needs to have Fuse enabled
  • On some distributions or installations, you can only run an AppImage as root because the Fuse system is locked down. Since an AppImage is a simple ISO image, you can still mount it as a loopback device and run Krita directly using the AppRun executable in the top folder, as sketched below.
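A sketch of that workaround, with the same placeholder file name (this assumes an empty mount point at /mnt):

sudo mount -o loop krita-3.0-pre-alpha.appimage /mnt
/mnt/AppRun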

Back from conf.kde.in 2016

Namaste,
Ten days ago I traveled to India to give a talk about my experience contributing to Krita at conf.kde.in 2016. This year the event was held at LNMIIT, near the city of Jaipur. It was an amazing experience! I met a lot of very interesting people, with great speakers, enthusiastic students and a very warm welcome from everybody.

confkdein2016
Photo by Shivam Dixit; you can find some more here. I think the talks were recorded; I hope they get released online soon so I can share the links here.

After the conference, I spent a few more days exploring this wonderful country, along with Bruno Coudoin, the creator and maintainer of GCompris. A great adventure!

I can't thank enough all the people behind the organization of this nice event, and KDE e.V. for the support.

Ways to Help Krita: Bug Triaging

This is the first article of a short series on ways everyone who wants to put some time into helping Krita can make a real difference. There are many ways of helping Krita, ranging from coding to writing tutorials, from helping users on forums to helping with fundraisers. But let's take a look at one task that is really important: bug triaging.

Now that there are hundreds of thousands of Krita users, we're getting lots of bug reports. And a lot of those bugs are specific to the reporter's system. Or so it seems. Some bugs only happen with certain combinations of hardware, operating system and other installed software. Some bugs happen for everyone but are rare because not many people use the feature, and some bugs suddenly turn up because we're human and make mistakes.

And every report needs to be read, preferably by several people, who can try to determine whether they:

  • can trigger the bug, and in that case confirm it
  • cannot understand the report, or find that the report is incomplete, and in that case ask for more information
  • know that the bug has been reported before, and in that case close the bug as a duplicate of the earlier report.

We’re using KDE’s bugzilla to track bugs for Krita. Now we admit up-front that bugzilla is an old-fashioned monster of a web application, but it’s what we have, and for now we’ll have to make it work for us! Here’s what you see when you search for open Krita bugs:

Spectacle.Lh7342

Whoops, 320 open bugs… The important bits of information are:

  • Version: the version of Krita the user was using.
  • Severity: is it a crash, or a minor issue or a wish.
  • Status: UNCO means Unconfirmed. That means that we either haven’t tried to reproduce the bug or couldn’t reproduce it.
  • Summary: a short description of the issue
  • Reporter: who reported it
  • OS: which operating system the user used. Windows (often reported as Windows CE), Linux or OSX

As you can see, we really need help! Your friendly author, Boudewijn, is also the maintainer of the project, developer and manager. Together with Wolthera, I'm trying to triage all reported bugs, and we're not managing to reply on time. That's where you come in! If you're a reasonably experienced Krita user and want to help out, here's how to get set up!

Get a Bugzilla Account

Go to https://bugs.kde.org and select “Create new Account”

Spectacle.nS7342

Complete the registration form, and click on the confirmation link in the email you get sent. (I'm using Alpine for reading email, which is a text-mode client… that's optional!)

Spectacle.kn7342

Setup Email Notifications

Log in to bugzilla and click on the Preferences link at the top, then on “Email Preferences”. Bugzilla can send a lot of email, but fortunately it includes a number of special headers that make it easy to filter bug mail into dedicated mailboxes. There are two steps: first, the general email preferences:

Spectacle.Ti7342

And then there's the important bit that makes sure you get email for all bugs that are about Krita: add the user “krita-bugs-null@kde.org” to the list of User Watches:

Spectacle.wZ7342

Now you will get mail whenever anything happens to a Krita bug. Using the special bugzilla headers, you can sort all the mail ready for handling:

  • X-Bugzilla-Reason: None
  • X-Bugzilla-Type: new
  • X-Bugzilla-Watch-Reason: AssignedTo krita-bugs-null@kde.org
  • X-Bugzilla-Product: krita
  • X-Bugzilla-Component: tablet support
  • X-Bugzilla-Version: 2.9.11
  • X-Bugzilla-Keywords:
  • X-Bugzilla-Severity: normal
  • X-Bugzilla-Who: boudewijnrempt@gmail.com
  • X-Bugzilla-Status: UNCONFIRMED
  • X-Bugzilla-Priority: NOR
  • X-Bugzilla-Assigned-To: krita-bugs-null@kde.org
  • X-Bugzilla-Target-Milestone: —
  • X-Bugzilla-Flags:
  • X-Bugzilla-Changed-Fields: bug_id short_desc product version rep_platform
    op_sys bug_status bug_severity priority component assigned_to reporter

For instance, I have split my bug mail according to whether it’s a new bug, a changed bug, a new wish, a changed wish, or a reply to a needs-info query.
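If you filter with procmail, for instance, a recipe along these lines (the mailbox name here is just an illustration; adapt it to your own setup) drops all new Krita bug mail into its own folder:

:0:
* ^X-Bugzilla-Product: krita
* ^X-Bugzilla-Type: new
krita-new-bugs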

Triage a Bug

And then when a bug lands in your inbox, you can open it in bugzilla and triage it. There are a couple of steps here:

  1. Check whether the severity is “wish”; if so, leave it alone. Wish bugs are special.
  2. Check whether it's a duplicate; if so, close it.
  3. Check whether the subject makes sense and mentions the most pertinent information; if not, update it.
  4. Check whether the OS is correct. For instance, Windows bugs are often reported for Windows CE, and that needs to be changed to MS Windows.
  5. Check whether the version isn't too old. At this moment we don't try to fix bugs for 2.8 anymore, and after 3.0 is released, a bug reported for 2.9 should be marked as NEEDSINFO and the reporter asked to try to reproduce the issue with 3.0.
  6. Thank the reporter for the bug report. Keep in mind that reporting a bug can be scary enough, especially for new users, and that it also takes a certain amount of effort. Bug reporters are awesome contributors, too!

These are the preliminaries; now the real work starts: trying to reproduce the bug. At first you won't be able to close bugs, so all you can do is add comments to the bug report. Once you've done that a couple of times, you'll have learned how the system works and gotten a feel for what should be done with new bugs. That's the moment to ask me, Boudewijn (boud@kde.org, boud on #krita on irc.freenode.org), for more powers!

  • It’s easy to reproduce: add any notes on how you reproduced the issue, on your OS, version of Krita, hardware and set the bug to CONFIRMED.
  • There is not enough information to reproduce the bug: set the bug to NEEDSINFO and ask the reporter for more information
  • You cannot reproduce the bug even though the report is complete and you can follow all the steps: note that you cannot reproduce it and ask the reporter to try again. Set the bug to NEEDSINFO and wait. If the reporter answers that they cannot reproduce it either, close the bug as WORKSFORME.
  • You cannot reproduce the bug but suspect that that’s because it’s already fixed: add a comment to the bug but don’t change the status yet.
  • You suspect that the reporter simply doesn't realize that they are using Krita the wrong way, for instance by painting in 16-bit linear light RGB and saving to a PNG file without a profile: point the reporter to the manual (https://docs.krita.org) and set the bug to NEEDSINFO. If the reporter replies that that is indeed the case, we can close the bug.
  • The bug is about an issue with a drawing tablet. Please ask the user to upload a tablet log.

And that's it, really. Now start triaging bugs and earn our undying gratitude! Help with bug triaging is a real and lasting contribution to Krita!

Interview with Anne Derenne

Adene-800

Could you tell us something about yourself?

I'm a French illustrator currently living and working in Madrid, Spain. I've been drawing since childhood, but I studied economics instead of art; I was really interested in political news and geopolitics. After university I began to work in administrative jobs and at the same time began to focus on illustration, spending all my weekends drawing to improve my technique.

My favourite topic is the political drawing/cartoon. It is a perfect way to combine my two passions: my interest in the news and my drawing skills.

Do you paint professionally, as a hobby artist, or both?

Right now I'm painting professionally and also as a hobby. I have begun to earn money with my passion, but not enough to live on, so I still have an administrative job to help pay my bills. But I now work part time, and the rest of the time I spend drawing and painting.

What genre(s) do you work in?

I work in political/editorial cartoons but also in children's book illustration. They are two different genres, but I like changing the kind of topics I'm working on from time to time; depending on my mood I will spend more time on one or the other. I like to denounce things with my cartoons, but sometimes it is also good to put some poetry into this complicated world, and the children's illustrations help me focus on something more positive.

Whose work inspires you most — who are your role models as an artist?

In the political-cartoon genre I like the work of Ares and Boligan for their graphic style. I also admire the work of Quino. But if I had to make a list of all the people I admire it would be very long, as I'm discovering talented cartoonists from all over the world every week!

In children's illustration I could also give a lot of names, but if I had to choose only one, I would say I really admire the work of Rebecca Dautremer. I'm fascinated by her work.

How and when did you get to try digital painting for the first time?

I got into digital painting 4 or 5 years ago, more or less. Before that I worked only with traditional painting.

What makes you choose digital over traditional painting?

I still do some work with traditional painting, but only for things I do for myself or for really specific orders.

For the rest, when I began to get professional orders I had no choice but to do them digitally. Customers can ask you to make a lot of changes to an illustration, and with traditional painting that takes a lot more time. If you are very well known and well paid maybe you can afford it, but I think for most illustrators it is complicated to work with traditional techniques on professional orders.

How did you find out about Krita?

From my boyfriend. He was looking for free software painting programs that were better than Gimp, to replace Photoshop and Painter, and he discovered Krita. He downloaded it to see what it looked like and then told me I would probably like it.

What was your first impression?

I was quite impressed, because I wasn't expecting something so professional. Once I saw I could get a really nice result with it, within a few months I began to use Krita for my illustrations.

What do you love about Krita?

First of all, that it is free software. You can see people have been working hard on it, which is why I also think it's important to support this work with donations. The result is really professional, and in my case the tools cover all my needs for digital painting.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Sometimes with a large canvas and big brushes Krita becomes really slow. I have tested the Krita 3.0 pre-alpha and found that it has really improved!! So I'm waiting for a stable 3.0 build to use it in my professional work.

What sets Krita apart from the other tools that you use?

I like the Kickstarter campaigns and the fact that we, users, can choose some of the features that will be implemented in the next releases.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

“Mediterranean migrant tragedy” … because it's a drawing I had in mind for a long time. It's one of my first illustrations done 100% with Krita.

What techniques and brushes did you use in it?

I'm not too complicated with brushes: basically I use Krita's default set, but sometimes I use some of David Revoy's brushes (they are great!!!).

Where can people see more of your work?

My cartoons blog: http://adene-editorialcartoon.blogspot.com/

My children’s illustration blog: http://illustrationannederenne.blogspot.com/

March 11, 2016

Shimming an Adapter to be Parallel


Shimming an Adapter to be Parallel

Achieving perfect infinity focus

Some of you may know that I exclusively use Contax manual focus lenses on my Canon cameras. I have had one reliable adapter from the start that just happened to be perfect in every way: it is perfectly parallel, it lets my lenses focus exactly to infinity, and none of my lenses hit the mirror on my 5D with it.

However, swapping adapters between cameras gets mighty tedious, so recently I have been trying a variety of different adapters for my cameras, in several quality tiers ranging from the cheapest ($15) up to the most expensive ($70).

39cc6bc295d7b8fb61f7f30bddb439236c3c07ba.jpg

However, I wasn't satisfied with any of them. To ensure that adapted lenses can still reach infinity focus despite manufacturing tolerances, they're made thinner than necessary. This means that they focus past infinity, and with some lenses the mirror of my 5D would hit the back of the lens, requiring me to wiggle the lens to free the mirror after taking a photo.

e2d3556dfa31bafeebe55be3503cd31d320ca418.jpg

I measured my fancier Fotodiox Pro adapter and found that not only was it too thin, it was unevenly thick! The top was 8 thousandths of an inch thin, the bottom right was 2 thousandths of an inch thin, and the bottom left was exactly the right thickness.

I decided I could do something about it.

c8f2904056b5956c424217eac2e5ff8c071bcd35.jpg

I bought some shim stock from McMaster-Carr: plastic, 2 thousandths of an inch thick, figuring I might be able to fold it to build up thickness if necessary. (Spoiler: it does fold.) It comes as a giant sheet, five by twenty inches, but you'll only need the tiniest amount of it.

9e62a1fa5ec3df578b5068e04c06bf70826cea6c.jpg

Then I went about removing the screws that hold the two sides together.

23fcb9581ed7ba5b4b1ab8dc8f6d6abbd1b1edd5.jpg

The screws are incredibly small.

3328b75d620272e42a07e6d923012e762f244736.jpg

Here you can see that there are only three points on the ring that actually control the thickness; I point to one with the scissors. I had to be careful when measuring the thickness to only measure it between the screws, and that was challenging because the EF mount diameter is larger than the C/Y mount diameter, and there was only the slightest overlap between the outside of the C/Y registration surface and the inside of the EF mount.

630d554c266458e194fa65c77c21d00b2426cfe7.jpg

Next I just cut a narrow strip out of this piece of shim stock using scissors, and put slits in it so it could fold more easily.

bc177c29ec559927f3f1b8df373a53dea4d2270a.jpg

The right hand shim is folded in the shape of a W, and the left hand shim is only one layer.

b7b673db42db682c8681e11363500892230d11f6.jpg

The thicker shim went on the top, and the thinner shim went on the bottom-right.

2c15b643aabc97d65f6fce6547d80e769391d70c.jpg

Put the ring back on, and then…

201a553a455b9780fc4120632b4db51bb2bf3a6c.jpg

Reinstall the screws.

Test your lenses for infinity focus and, if applicable, mirror slap, and rejoice if they’re good!


If you don’t have a perfect adapter as a reference for the proper thickness, you can first adjust the adapter to be perfectly even thickness all the way around, and then you can add thickness uniformly until your lenses just barely focus to infinity. It might be time consuming, but it’s very rewarding being able to trust the infinity stop on your lenses.

This method isn't only applicable to two-part SLR-to-SLR Fotodiox adapters; it should work for SLR or rangefinder to mirrorless adapters as well.

I’ve seen it written that you can’t be sure whether or not your adapters are even thickness all the way around, but with this technique, you can make sure that your adapters are perfect.


Carlo originally posted this as a thread on the forums but I thought it would be useful as a post. He has graciously allowed us to re-publish it here. —Pat

March 10, 2016

A new module for automatic perspective correction

For many years darktable has offered a versatile tool for manual perspective correction in the crop & rotate module [1]. Although the principle is simple and straightforward, there are cases where it can prove difficult to get a convincing correction, especially if no distinct vertical or horizontal features can be spotted in the image. To overcome these limitations, a new “perspective correction” module has just been added that is able to automatically correct converging lines. The underlying mechanism is inspired by the program ShiftN, developed by Marcus Hebel and published under the GPL [2].

The GUI of the perspective correction module.

Background

Perspective distortions are a natural effect of photographically projecting a three-dimensional scene onto a two-dimensional plane. As such they are not to be confused with lens errors, which can be corrected in darktable's lens correction module [3]. Perspective distortion causes objects close to the viewer to appear much larger than objects further away in the background: the closer you get to a subject, the stronger the effect. As lenses with a short focal length force you to get closer to your subject, photos taken with wide angle lenses are more prone to strong perspective distortion than those taken with telephoto lenses. Once again, this is not an effect of the lens but of the perspective, i.e. the distance of the viewer or camera to the scene.

Two lions and a castle. Taken at 14mm (APS-C), the two sculptures cover different areas of the image although they are the same size. Note the converging lines of the vertical architectural features.

Converging lines are a special case of perspective distortion frequently seen in architecture photographs. Parallel lines are an essential feature of most types of architecture; when photographed at an angle, parallel lines are transformed into converging lines that meet at some vanishing point within or outside the image frame.

Interestingly, viewers are mostly disturbed when they see a photo with converging lines that they know or assume to be vertical in reality. The reason seems to be that our brain is trained to unconsciously correct vertical lines in the picture delivered by our eyes: vertical lines still appear vertical to us, although the eye sees an image with converging lines, just as a camera does. When viewing the same scene in a photographic image the situation is different, and our brain does not apply its correction. Now we can clearly identify the lines as converging, and that conflicts with what we are used to seeing naturally.

There are a few ways to avoid this effect when taking a photo. One is keeping the camera's optical axis pointed at the horizon, so that vertical lines run parallel to the sensor plane. However, this brings a lot of potentially boring foreground into the lower part of the image, which typically needs to be cropped away afterward. Alternatively, one could use a shift lens [4], which gives more control over what part of the scene ends up on the camera sensor. But shift lenses tend to be heavy and expensive, so not everybody keeps one in his or her camera bag. That's where perspective correction in image processing comes into play.

To illustrate the workflow of this module we take this example image with converging lines and a deliberately skewed horizon.

Working principle of the perspective correction module

Converging lines can be corrected by warping the image in such a way that the lines in question become parallel. The perspective correction module in darktable simulates the effect of a shift lens and, in its simplest form, only needs a single lens shift parameter to correct converging lines along one direction. Corrections can be done in the vertical and horizontal directions, either separately or in combination.

Images quite often come with a tilted horizon as well. As we want lines not only to be parallel among themselves but also aligned with the image frame, the module additionally applies a rotation, controlled by a further parameter.

The three basic adjustment parameters are controlled with sliders in the upper part of the module's GUI.
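For intuition, such a correction can be modeled as a 3x3 projective transform. The following numpy toy is not darktable's actual code, and a simplification of its real lens model: it applies a rotation followed by a single vertical “lens shift” term to image points.

import numpy as np

def correct(points, angle_deg=0.0, shift_v=0.0):
    """Toy perspective correction: rotate, then keystone vertically."""
    a = np.radians(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    # The third-row term makes the map projective: vertical lines
    # converging at a rate matching shift_v are warped back to parallel.
    S = np.array([[1.0, 0.0,     0.0],
                  [0.0, 1.0,     0.0],
                  [0.0, shift_v, 1.0]])
    p = np.atleast_2d(points).astype(float)
    h = np.hstack([p, np.ones((len(p), 1))]) @ (S @ R).T
    return h[:, :2] / h[:, 2:3]   # back from homogeneous coordinates

In the real module the inverse of such a transform is applied to the whole pixel grid with proper interpolation; the sketch only shows the geometry.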

Automatic corrections

Although a manual adjustment of the parameters is possible users will typically want to rely on the auto-correction feature of this module.

The icons presented in the lower part of the module's GUI trigger structure detection and automatic perspective correction.

The principle of operation is as follows: darktable analyzes the image for structural features consisting of line segments. These line segments are evaluated and ranked to identify those that form a set of converging lines meeting in a common vanishing point. Please note that by far not all line segments in an image represent suitable vertical or horizontal lines of the scene; it is crucial that unsuitable lines are identified and eliminated from the set. Based on the remaining lines, an automatic fitting procedure then tries to find the best values of the module parameters (rotation angle and lens shift in one direction, or rotation angle and lens shifts in both directions) to get the image straight, i.e. to make the converging lines run parallel and aligned with the image frame.

 

ashift_2

Pressing the “get structure” icon causes darktable to analyze the image for structural elements. Line segments are detected, evaluated and ranked, and then displayed as overlays on the image. A color code tells you what type of line darktable has found:

  • green: lines selected as relevant vertical converging lines
  • red: lines that are vertical but not part of the set of converging lines
  • blue: lines selected as relevant horizontal converging lines
  • yellow: lines that are horizontal but not part of the set of converging lines
  • grey: other detected lines that are of no interest to this module

Structure detection with additional edge enhancement.

Lines marked in green and blue are used in the further processing steps.

darktable is quite aggressive when it comes to marking lines in red or yellow and thereby eliminating them from the set of selected lines. The reason is that we need to throw out unsuitable lines as thoroughly as possible, so that they do not negatively influence the subsequent parameter fitting. This outlier elimination involves a statistical process with random sampling. As a consequence, each time you press the “get structure” button the color pattern of the lines will look a bit different, and some lines that are obviously “good” will get deselected. This is of no concern as long as all really unsuitable lines are marked in red or yellow, and as long as enough suitable green and/or blue lines remain.
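A toy version of such a random-sampling outlier elimination, in the spirit of RANSAC (this is not darktable's actual implementation), could look like this, with each line given as a normalized homogeneous 3-vector (a, b, c) satisfying ax + by + c = 0:

import numpy as np

def ransac_vanishing_point(lines, iters=500, tol=0.02, seed=None):
    rng = np.random.default_rng(seed)
    lines = np.asarray(lines, dtype=float)
    best_vp, best_count = None, 0
    for _ in range(iters):
        i, j = rng.choice(len(lines), size=2, replace=False)
        vp = np.cross(lines[i], lines[j])   # intersection of the sample pair
        n = np.linalg.norm(vp)
        if n < 1e-12:
            continue                        # degenerate (parallel) sample
        vp /= n
        # a line supports the candidate point if the point (nearly) lies on it
        count = int(np.sum(np.abs(lines @ vp) < tol))
        if count > best_count:
            best_vp, best_count = vp, count
    return best_vp, best_count

Because the sampling is random, each run can keep or drop different borderline lines, which is exactly why the green/blue pattern changes between presses of “get structure”.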

If you disagree with darktable's decisions you can always change them manually. Left-clicking on a line selects it (turns it green or blue), while right-clicking deselects it (turns it red or yellow). Keeping the mouse button pressed allows a sweeping action to select or deselect multiple lines. Manual intervention is typically only needed if the line structure is distributed very inhomogeneously, e.g. all selected lines are on the right-hand side of the image and none on the left. In that case you might need to manually select at least some lines on the left to allow a successful automatic correction.

As this module needs to find straight lines, it is generally a good idea to have the image corrected for lens artifacts before line detection. You should therefore activate the lens correction module [3] before this one or click on the “get structure” icon again after having activated lens correction.

If your image exhibits rather low contrast with few defined features, the overall number of detected lines may be low. In this case you can try ctrl-clicking on the “get structure” icon, which activates an additional edge enhancement step before line detection. This leads to significantly more lines being detected.

ashift_3

Pressing the “clear structure” icon discards all collected structural information.

ashift_4

Pressing the “display structure” icon temporarily hides the overlay display of lines so you have a better view of your image. Press the icon again to re-enable the overlay.

ashift_5

Pressing one of the “automatic fit” icons starts an optimization process which tries to detect the best fitting parameters for perspective correction of the image.

The leftmost icon performs an optimization of rotation angle and vertical lens shift based on selected vertical lines (the green ones).

The middle icon performs an optimization of rotation angle and horizontal lens shift based on selected horizontal lines (the blue ones).

The rightmost icon performs an optimization of all three parameters based on selected vertical as well as horizontal lines (the green and the blue ones). This typically leads to a compromise between the two other options.

The example image after fitting for all three correction parameters by clicking on the rightmost icon.

In some cases you may want to only fit the rotation angle parameter while keeping the lens shift parameter(s) constant. You can do so by ctrl-clicking on one of the icons. The structural data used for fitting are the same as described above. Likewise you may shift-click on the icons to only fit the lens shift parameter(s) while keeping the rotation angle constant.

As a general rule all parameters that are not affected by the respective fitting step are kept constant.

One remark about correcting horizontal perspective distortion. Because we live in a “mostly planar world”, horizontal perspective distortion tends to be much stronger than vertical distortion in many images, without viewers taking much notice. We are used to seeing converging horizontal lines, e.g. when looking along a street, and in fact we take advantage of them to estimate relative distances. However, that also means that horizontal lines can converge so strongly in a photographic image that correcting them is beyond the scope of this module. The horizontal vanishing point may even lie within the image frame, giving a so-called central perspective, and for obvious reasons there is no way to correct for this image characteristic.

Further options

This module offers a number of further useful features which are controlled by a set of comboboxes in the middle part of the GUI.

ashift_1_3

Further options of the perspective correction module.

The “guides” combobox lets you activate a set of horizontal and vertical guidelines laid over the image. You may use them to judge the quality of the automatic correction or let them assist you when doing manual adjustments.

The “automatic cropping” feature does what the name implies. When working with this module you will immediately see that any correction leads to more or less pronounced black corners around the warped image, and you need to crop the image to get rid of them. This can be done either in the crop & rotate module [1] or by activating the “automatic cropping” combobox. “Largest area” finds the largest rectangular area that just fits into the warped image, while “original format” does the same with the additional constraint of keeping the aspect ratio of the original image.

Activate guide lines and automatic cropping for "largest area".

Please note that you always see the full image as long as the “perspective correction” module is in focus; cropping is not applied, in order to give you a complete overview of the image. Automatic cropping, if activated, displays a frame around the crop area it has selected.

The example image above illustrates that you may lose quite a significant part of your image border when doing perspective corrections. You should take this into account when taking the shot and leave sufficient room around your main subject.

Of course the automatic cropping feature has no knowledge of the contents of your image, so we recommend using it only in simple cases. Frequently you will want full artistic control over which parts of the image to crop; that power is only offered by the crop & rotate module [1]. We suggest using either the automatic or the manual cropping option: combining both may lead to surprising results as soon as you change any parameters.

The “lens model” option controls some details of the underlying warping algorithm, namely the level of image compression perpendicular to the lens shift direction.

ashift_6

The default setting is “generic”, which gives realistic results in most cases. For images taken with a long focal length you may want to switch to “lens specific”, which opens a number of sliders that let you define the focal length and the crop factor of your camera. You may also freely adjust the aspect ratio of the resulting image with the “aspect adjust” slider.

The final image after perspective correction.

Availability

The new module has recently been added to the development branch of darktable [5]. It can be found in the “correction group” after activating it in “more modules”. Users of the unstable branch are invited to test it and give feedback. The usual disclaimer applies: as this module is still in the development phase we might change its behavior without further notice, so don't use it for productive work unless you know what you are doing. Users of the stable branch will see the final “perspective correction” module as part of darktable's next feature release.

[ Update 18.03.16 ]

An updated version of this module has just been merged into the master branch. There is now a fourth adjustment parameter, “shear”, which warps the image along one of its diagonals. Together with the three other parameters this allows an image to be fully corrected in almost all reasonable cases. Only perspective distortions so extreme that correcting them would lead to very poor image quality are out of scope.

Clicking on the "fit both" icon (the rightmost in the automatic fit row) now finds optimal values for all four parameters. You get the former behavior of fitting only rotation and lens shift by ctrl-shift-clicking on that same icon.

A GUI enhancement suggested by Aldric has been added which allows lines to be mass-selected or deselected (changing their color from green/blue to red/yellow, or vice versa). Press the shift key and draw a rectangular area on the image; all lines within that area are selected or deselected on button release. The left mouse button selects, the right mouse button deselects.

[1] http://www.darktable.org/usermanual/ch03s04.html.php#basic_group

[2] http://www.shiftn.de/

[3] http://www.darktable.org/usermanual/ch03s04s04.html.php

[4] https://en.wikipedia.org/wiki/Perspective_control_lens

[5] https://github.com/darktable-org/darktable

March 09, 2016

An announcement

Since 2010, with the help of an irreplaceable set of contributors, we’ve published eight issues of Libre Graphics magazine, spread over two volumes. Sometimes we’ve been slow, but we’ve always gotten the issue out in the end. With the release of issue 2.4, we’re announcing the end of the project. Magazines take a lot of care and feeding, and we think it’s time to let this one go. We’re proud of the two volumes we’ve produced, and we’re stopping while we’re ahead.

Over the last five-and-a-bit years, we’ve published written and visual work that we believe shows off what’s most exciting about Free/Libre and Open Source graphics, design, and art. We’ve published the work of a range of people, with varied opinions on the present and future of a whole collection of issues and concerns in the libre graphics world. We’re pleased to have been a venue for thoughtful writing on subjects as broad as the licensing of fonts, gender representation in F/LOSS, and automatic typesetting. One of the things we’re proudest of is providing a venue where artists and designers who are new to F/LOSS can get their bearings, and can see that amazing work can and is being done with libre software and licenses.

So we’re ending things on a high note. But, being a project inspired by F/LOSS, we’re not disappearing entirely. Though we won’t be continuing active development of the magazine and won’t be publishing new issues, we’ll be continuing to make the work that’s already happened available. Our repositories are still up, and you can still branch them, copy them, and use their contents. We’re leaving our website up, too, so that you can download the PDFs and point other people to them. You can still get and read digital copies of every issue of Libre Graphics magazine, and you can still print them out should you so desire. And, until we run out of the stock we have on hand, you can still order copies of most back issues in print.

We want to thank you for an excellent few years, for the encouragement, the contributions, and for reading.

Ana Isabel Carvalho
ginger coons
Ricardo Lafuente

Juniper allergy season

It's spring, and that means it's the windy season in New Mexico -- and juniper allergy season.

When we were house-hunting here, talking to our realtor about things like local weather, she mentioned that spring tended to be windy and a lot of people got allergic. I shrugged it off -- oh, sure, people get allergic in spring in California too. Little did I know.

A month or two after we moved, I experienced the worst allergies of my life. (Just to be clear, by allergies I mean hay fever, sneezing, itchy eyes ... not anaphylaxis or anything life threatening, just misery and a morbid fear of ever opening a window no matter how nice the temperature outside might be.)

[Female (left) and male junipers in spring]
I was out checking the mail one morning, sneezing nonstop, when a couple of locals passed by on their morning walk. I introduced myself and we chatted a bit. They noticed my sneezing. "It's the junipers," they explained. "See how a lot of them are orange now? Those are the males, and that's the pollen."

I had read that juniper plants were either male or female, unlike most plants which have both male and female parts on every plant. I had never thought of junipers as something that could cause allergies -- they're a common ornamental plant in California, and also commonly encountered on trails throughout the southwest -- nor had I noticed the recent color change of half the junipers in our neighborhood.

But once it's pointed out, the color difference is striking. These two trees, growing right next to each other, are the same color most of the year, and it's hard to tell which is male and which is female. But in spring, suddenly one turns orange while the other remains its usual bright green. (The other season when it's easy to tell the difference is late fall, when the female will be covered with berries.)

Close up, the difference is even more striking. The male is dense with tiny orange pollen-laden cones.

[Female juniper closeup] [male juniper closeup showing pollen cones]

A few weeks after learning the source of my allergies, I happened to be looking out the window on a typically windy spring day when I saw an alarming sight -- it looked like the yard was on fire! There were dense clouds of smoke billowing up out of the trees. I grabbed binoculars and discovered that what looked like fire smoke was actually clouds of pollen blowing from a few junipers. Since then I've gotten used to seeing juniper "smoke" blowing through the canyons on windy spring days. Touching a juniper that's ready to go will produce similar clouds.

The good news is that there are treatments for juniper allergies. Flonase helps a lot, and a lot of people have told me that allergy shots are effective. My first spring here was a bit miserable, but I'm doing much better now, and can appreciate the fascinating biology of junipers and the amazing spectacle of the smoking junipers (not to mention the nice spring temperatures) without having to hide inside with the windows shut.

March 08, 2016

Design Hackathon Report

For a week in January I was in Brazil, in Rio de Janeiro, for a design hackathon with the designers from Endless and from the GNOME project.

What is Endless's product?

Endless's main product is an operating system for the mini computers they make, the Endless Mini and the Endless (Maxi?). The operating system uses Linux and a version of GNOME with some changes. The main purpose of these computers is to offer lots of information without Internet access. For example, there are many applications about travel, animals and so on that live directly on the computer, using Wikipedia as their source, and another application with recipes, from a third source.

The hackathon itself

The first two days were for traveling and visiting the "beta" users of the Endless computers: one day in Rocinha, a favela in Rio, and another day in Magé, a rural town in the state of Rio.
The last three days were for discussions at the Endless office.

Observations

It is one thing to do usability testing in the USA and Europe, and quite another to do it in a country where people are not used to "personal computers" running Windows or MacOS X, but are much more used to phones.

For example:
- If there is a mouse, people double-click. This is not a problem with touch input.
- Splitting the screen to have one application next to another is also difficult.
- Without Internet access, people won't try to install or open the other applications that are already on the computer.
- People are not used to closing applications they no longer use. A phone operating system closes old applications transparently.

Conclusions

There are many things that Endless or GNOME can change or improve.

- GNOME has some videos explaining the overview. A game or a tutorial might be better for explaining it and making sure users understand.
- GNOME needs to improve its integration with cellular modems. ModemManager has functionality that GNOME does not use.
- Web needs integrated malware detection, which it doesn't have now, but that was a Summer of Code idea in previous years.
- GNOME could improve the first screen of every application, and of the system as well, especially when the user has no Internet access to download content.

Many thanks to the GNOME Foundation for my flights. Thanks to Endless and Allan Day for the organization. Thanks to my employer Red Hat for the opportunity. And, finally, thanks to Caro for the proofreading!



March 07, 2016

darktable 2.0.2 released

we're proud to announce the second bugfix release for the 2.0 series of darktable, 2.0.2!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.0.2.

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.0.2.tar.xz
75ea6354eb08aab8f25315a2de14c68dc6aad6ee5992061beea624afc7912400 darktable-2.0.2.tar.xz
$ sha256sum darktable-2.0.2.dmg 
33789b5a791770f9308cc653aaf50b0e9c054a0fbdd5e4a1d2e48e2bf6553f95  darktable-2.0.2.dmg

and the changelog as compared to 2.0.1 can be found below.

General

  • Require glib of at least version 2.40

New features

  • Add support for DNGs from x3f_extract
  • Support XMP files from Ramperpro timelapse controllers from ElysiaVisuals

Bugfixes

  • Fix some problems with sluggish GUI when Lua is compiled in
  • Some High DPI fixes
  • Small theming fixes
  • Fix some strings being too long in the GUI, especially when using localized versions
  • Fix a potential crash with malformed GPX files
  • Fix wrong zoom level of the map when searching for a location
  • Put XMP metadata into the right Exif fields
  • Fix a crash in masks
  • Fix a crash in demosaicing
  • Fix Markesteijn demosaicing
  • Fix a crash when moving the mouse while going to darkroom when crop&rotate is active
  • Fix discrepancy between CPU and OpenCL codepath in invert
  • Fix some crashes with certain TIFF files
  • Fix build with GCC6
  • Fix build with osmgpsmap older than 1.1.0
  • Fix compilation when there are spaces in the path names

Camera support

  • Fujifilm X-Pro2

White balance presets

  • Pentax K-S2

Noise profiles

  • Fujifilm X-T10
  • Pentax K-S2

Translations

  • new
    • Hebrew
  • updated
    • German
    • Slovak
    • Swedish

Age Ratings in GNOME Software: Introducing OARS?

In GNOME Software we show lots of applications ranging from games aimed at pre-schoolers to applications explicitly designed to download, well, porn. A concept that is fairly well understood by parents is age ratings, and there are well known and trusted ratings bodies such as the ESRB and PEGI, as well as other country-specific schemes. Parents can use the ratings to control what kind of content is available to install, and vendors can use the ratings as a legal (or common-sense) control over who gets to purchase what.

The rating systems vary between countries, from descriptions such as “M”, which will be familiar to US users, and “R” for Australian users, to the more obvious “18+” rating for European users. The differing ratings authorities define what is allowed in each category in slightly different ways, some allowing mild profanity for a “7+” rating, and others none at all. Some countries consider drug taking in a video game to be no more dangerous than mild cursing; other countries consider it on the same level as sexual violence.

OARS

So, we’re sunk, right? Nearly. There exists a group called the “International Age Rating Coalition” (IARC) which allows developers to register (sometimes for free), answer a simple questionnaire, and out pop the ratings they should use for various countries. The IARC is made up of regulatory bodies from all over the planet, and so you can use the actual trademarked age rating images for your product. ish.

If you want to build a software center, say GNOME Software for example, you have to pay a license fee: a $100,000 annual fee, plus extra per application shown in the software center. This is prohibitive for us, and would mean we couldn’t have the same functionality in other software center interfaces.

We could easily provide details about the application/game in the AppData files. These could be combined with a rule engine specific to the country of viewing, which would pop out a rating. I think the ESRB would be hard pushed to trademark “M” as an age rating, although I completely agree they have correctly and sensibly trademarked the stylized logo for the PG rating, along with the “ESRB” name itself. I don’t think this should stop us using a “PG” or “M” rating in the software center as long as we avoid these trademarks and copyrights.

I’m happy to work on a new system to both generate the upstream AppData information from a questionnaire, and the rule engine that processes that information and pops out a rating. The question then becomes: is this useful? Is this something that people would actually want? Comments welcome.
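
To make the idea concrete, here is a minimal sketch (mine, not an actual implementation; all the attribute names and age thresholds are invented for illustration) of how such a per-country rule engine could map neutral upstream content attributes to a local age rating:

    # Hypothetical sketch of a per-country rule engine. Upstream AppData
    # would ship neutral content attributes; each country maps them to a
    # minimum age. All names and thresholds below are invented examples.

    content = {
        "violence-cartoon": "mild",
        "language-profanity": "none",
    }

    # One rule set per country: the minimum age implied by each
    # (attribute, level) pair. Unlisted pairs imply no restriction.
    US_RULES = {
        ("violence-cartoon", "mild"): 7,
        ("violence-cartoon", "intense"): 13,
        ("language-profanity", "mild"): 13,
    }

    def age_for(content, rules):
        # The rating is the strictest age any single attribute triggers
        return max((rules.get((attr, level), 0)
                    for attr, level in content.items()), default=0)

    print(age_for(content, US_RULES))  # -> 7, i.e. roughly a "7+" rating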

p.s. OARS: “Open Age Rating System”; the name is a work in progress.

March 05, 2016

March 2016 drawing challenge!

A new drawing challenge on the forum: https://forum.kde.org/viewtopic.php?f=277&t=131341

This month’s subject is Fan-Art-Remix.

Please only post fan art when the creator gave you the explicit permission to make fan art or the creator explicitly allows fan art in general! (And don’t forget to credit your sources!)


March 04, 2016

The Dummies’ Guide to US Presidential Elections

The US presidential primaries

For those following the US presidential primaries from a distance, and wondering what is happening, here’s a brief dummies’ guide to the US presidential primaries and general election. It’s too early to say that Trump has won the Republican primary yet, even though (given his results and the media narrative) he is a strong favourite. To learn more than you will ever need to know about US presidential primaries, read on.

Primaries elect delegates

The presidential candidates are chosen by the major parties at their party conventions, held during the summer before the election. The primary elections are one way that the parties decide who gets to vote at the convention, and who they vote for.

Both parties have the concept of pledged and unpledged delegates – if you are pledged, then your vote in the 1st ballot of the nomination election has been decided in the primary. If you are unpledged, then you are free to change your vote at any time. The Democrats have about 15% of their delegates unpledged; these are called superdelegates. The Republican party has about 170 unpledged delegates, representing about 7% of the total delegate count. Each state decides how to award pledged delegates, with a variety of processes which I will describe later.

If no candidate has a majority of delegates on the 1st ballot, then the fun starts – delegates are free to change their affiliation for the 2nd and further ballots. This scenario, which used to happen often but now happens rarely, is called a contested or brokered convention. The last brokered convention was in 1952 for the Democrats, and 1948 for the Republicans. We have come close on a number of occasions, most recently 2008 for the Democrats, and 1976 for the Republicans.

Primary states vs caucus states

A primary is a straight vote – you turn up any time during the day, cast your ballot, and leave. Caucuses are different: voting is open, and caucus sites are typically only open for a couple of hours. In Iowa, for example, caucus-goers turn up and have a two hour meeting/debate about the candidates, when people can change who they support before the final count. The way caucuses work varies from state to state.

Turn-out in caucuses is much lower than in primaries, making them much more difficult to predict than primary states. Voters tend to be more committed supporters of the candidates, and caucus states favour candidates who have a strong core of supporters, and a strong “get out the vote” ground game.

Open primaries vs closed primaries

In the United States, when people register to vote, they also declare which party they support. Typically, people declare either Democrat, Republican, or Independent/Unaffiliated.

Some states allow independents to decide whether to vote in the Democratic or Republican primaries on the day of the vote. These primaries are called open primaries. They tend to favour candidates who are seen to be more moderate, and are also watched closely to get an indication of which party has an advantage among independents in the general election. On the other hand, closed primaries tend to favour candidates who appeal to the party base. In general, early primaries this year are open, while more of the primaries in March and later are closed.

Proportional delegate allocation vs winner take all

In the Democratic primaries, all of the contests split delegates proportionally to the vote received by each candidate, as long as the candidate reaches a minimum threshold of 15%. In the Republican primaries, however, delegates are allocated differently from state to state. Some states have a strict proportional allocation of delegates, some allocate delegates based on the winner of the popular vote per congressional district, and some specify a minimum vote threshold of 20% to get any delegates at all, and award all delegates to a candidate who gets over 50% of the vote. Most states have a mix of delegates per congressional district and a number of “at-large” delegates, allocated according to the statewide vote.
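
As a worked example (my own illustration, with invented vote totals and a largest-remainder rounding rule), here is how a proportional split with a 15% threshold, like the Democratic rule above, plays out:

    # Illustrative sketch of proportional delegate allocation with a
    # threshold (the Democratic rule described above). Votes are invented.

    def allocate(delegates, votes, threshold=0.15):
        total = sum(votes.values())
        # Candidates below the threshold get nothing
        qualified = {c: v for c, v in votes.items() if v / total >= threshold}
        qtotal = sum(qualified.values())
        quotas = {c: delegates * v / qtotal for c, v in qualified.items()}
        result = {c: int(q) for c, q in quotas.items()}
        # Hand any leftover seats to the largest fractional remainders
        leftover = delegates - sum(result.values())
        for c in sorted(quotas, key=lambda c: quotas[c] - int(quotas[c]),
                        reverse=True)[:leftover]:
            result[c] += 1
        return result

    votes = {"A": 44000, "B": 38000, "C": 12000, "D": 6000}
    print(allocate(100, votes))  # {'A': 54, 'B': 46}

Real state rules differ in the details (rounding methods, district-level splits), but the threshold effect is the point: C's 12% of the vote earns zero delegates.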

For the Republican primaries, all states voting before a specific deadline (March 15th this year) are required to have some kind of proportional delegate distribution. Once that deadline has passed, states may choose a “winner take all” scheme – whoever wins a plurality of the vote receives all of the delegates – with the intention of giving a strong mandate to the most popular candidate left at that point, and of enabling the party to unite behind a single candidate. Ohio, Florida and Illinois are three such states, and between them they award a combined 220 delegates on March 15th.

Pulling the strings together

What does this all mean? Well, for one thing, it means that in spite of having won a plurality of the vote in 10 of the 15 states contested so far, Donald Trump does not have a lock on the nomination. It is possible that a more conservative candidate like Ted Cruz can rack up delegates in closed primaries and caucuses in states like Louisiana, Kansas, and Kentucky, or that a more mainstream candidate like Marco Rubio or John Kasich makes up ground in winner take all states like Florida, Illinois, and Ohio.

The difficulty for the Republicans is that they still have three viable “not Trump” candidates. As long as there are multiple candidates splitting the non-Trump vote, Trump can keep winning delegates without winning majority support anywhere. But with only about 30% of the total delegates awarded so far, the race is far from over. Given the acrimony in the race to date, there is a good chance that we might see no candidate with a majority of delegates by the convention.

If we get to the convention with two or three candidates holding big blocks of delegates, but with no one having a majority, then we would have a brokered convention for the first time in over 60 years. At that point, all bets are off. It is possible that some of the delegates who were pledged to one candidate switch allegiance and vote for one of the others. But it is also possible for a prominent Republican who did not compete in the primaries to present themselves to the convention as a compromise candidate – a senior congressional Republican with a nationally recognised face like Paul Ryan or Mitch McConnell, or a former candidate like Mitt Romney or Scott Walker. If that were to happen, it would be anyone’s guess who would represent the Republican party in November.

The big difference on the Democratic side is that Democratic states all vote proportionally, and there are more superdelegates. What that means for Bernie Sanders is that, while he is trailing by only about 200 pledged delegates after 16 contests, he is trailing by over 600 delegates when superdelegates are counted. To be nominated at the convention, he would need to win the upcoming states heavily, and convince a significant number of unpledged superdelegates to change their support from Hillary Clinton. In addition, since the demographics of the Super Tuesday states, where he underperformed relative to expectations, were unfavourable to him, the narrative in the public media is now creating a self-fulfilling prophecy. “Clinton is beating Sanders” will remain the story until election results cast doubt on that conclusion, and people tend to vote for the frontrunner. With Clinton expected to win the primaries in Louisiana, Michigan, and Mississippi, and with low expectations in this weekend’s caucus states, that narrative is likely to continue unless Sanders can pull off big wins in Florida, Illinois, or Ohio on March 15th. And even after that, he would need to rack up some big wins in states like New York and California to close the gap on Hillary. Given the state of the race, it is fair to say that Hillary Clinton has a near-lock on the Democratic nomination.

Next time: The Electoral College

After the parties nominate candidates, and the candidates name their running mates, it is on to the general election, where we meet another of America’s controversial institutions: the Electoral College, which will be elected on Tuesday, November 8th, and which in turn elects the President. How the Electoral College works, how it affects the presidential campaign, and what happens if, for some strange reason, it does not give a majority of votes to any candidate, will be the subject of another post.


Recipe: Easy beef (or whatever) jerky

You don't need a special smoker or dehydrator to make great beef jerky.

Winter is the time to make beef jerky -- hopefully enough to last all summer, because in summer we try to avoid using the oven, cooking everything outside so as not to heat up the house. In winter, having the oven on for five hours is a good thing.

It took some tuning to get the flavor and the amount of saltiness right, but I'm happy with my recipe now.

Beef jerky

Ingredients

  • thinly sliced beef or pork: about a pound or two
  • 1-1/2 cups water
  • 1/4 cup soy sauce
  • 3/4 tbsp salt
  • Any additional seasonings you desire: pepper, chile powder, sage, ginger, sugar, etc.

Directions

Heat water slightly (30-40 sec in microwave) to help dissolve salt. Mix all ingredients except beef.

Cut meat into small pieces, trimming fat as much as possible.

Marinate in warm salt solution for 15 min, stirring occasionally. (For pork, you might want a shorter marinating time. I haven't tried other meats.)

Set the oven on its lowest temperature (170F here).

Lay out beef on a rack, with pieces not touching or overlapping.
Nobody seems to sell actual cooking racks, but you can buy "cooling racks" for cooling cookies, which seem to work fine for jerky. They're small so you probably need two racks for a pound of beef.

Ideally, put the rack on one oven shelf with a layer of foil on the rack below to catch the drips.
You want as much air space as possible under the meat. You can put the rack on a cookie sheet, but it'll take longer to cook and you'll have to turn the meat halfway through. Don't lay the beef directly on cookie sheet or foil unless you absolutely can't find a rack.

Cook until sufficiently dry and getting hard, about 4 to 4-1/2 hours at 170F depending on how dry you like your jerky. Drier jerky will keep longer unrefrigerated, but it's not as tasty. I cook mine a little less and store it in the fridge when I'm not actually carrying it hiking or traveling.

If you're using a cookie sheet, turn the pieces once at around 2-3 hours when the tops start to look dry and dark.

Tip: if you're using a rack without a cookie sheet, a fork wedged between the bars of the rack makes it easy to remove a rack from the oven.

February 29, 2016

Interview with Mozart Couto

[dragon-and-warrior]

Could you tell us something about yourself?

I was born and live in Brazil. I have worked as an illustrator and comic book artist since 1979, and a few years ago I started to use only open source software. I published my comics in Brazil, in Europe (https://www.lambiek.net/artists/c/couto_mozart.htm) and in the American comics market (Marvel, DC, Image, Dark Horse and Valiant) in the late 1990s.

Do you paint professionally, as a hobby artist, or both?

Both.

What genre(s) do you work in?

I work in various genres, including illustration for children’s books and also for teenagers, fantasy art and, most recently, abstract digital painting.

[29-01-16-abstract-krita-800]

Whose work inspires you most — who are your role models as an artist?

When I was younger, my inspiration came from the great artists of American comics, like John Buscema, Hal Foster, Frank Frazetta and others. Later, the European stars, like Moebius, Serpieri, Herman, etcetera. And also the Japanese manga.

How and when did you get to try digital painting for the first time?

I think it was around 1998 or 2000. I bought a tablet and started. Then the PC became a much more interesting thing to me.

What makes you choose digital over traditional painting?

I think it’s fascinating. You can experience many things and get the craziest results. The possibility of immediate correction is a major factor in my choice of digital painting for professional production. But also I like to try something new in art all the time. I think that artists should always keep experimenting with new tools.

How did you find out about Krita?

It was a long time ago. I used a Brazilian Linux distribution called “Kurumin”, which is very famous here. Since that distribution used KDE, that is how I found Krita. Then I started using GIMP and also MyPaint. I liked Krita but thought it needed improvement. Currently I use it again, because I think the program is much more robust now and can better serve the digital illustrator.

What was your first impression?

As I said before, I really liked the program, but thought it needed improvement.

What do you love about Krita?

Many things. I’m still learning how to use the resources, so I would say that the various possibilities of use of the brushes are very useful for my work.

[krita-study-12-2014-800]

What do you think needs improvement in Krita? Is there anything that really annoys you?

I wish there was the possibility of working with brushes with impasto effects (as in alla prima painting) and real watercolor, and of using a dual brush effect (two brushes painting together). What annoys me? I do not like having to open a file each time the program starts in order to access the interface, or that the program crashes if I touch somewhere on the screen! I also miss a free selection tool that could be used as a brush.

What sets Krita apart from the other tools that you use?

Krita has many useful functions that I like to use but cannot find in other tools. One is working with textures in brushes, something simple but very useful for me. Another is the set of color-mixing functions that can be applied to brushes. Distorting images directly on the canvas is great, too.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I think my best work is always the last because it always applies new things I learned.

[krita-the-tree-of-death-07-02-16-800]

What techniques and brushes do you use most?

Often I use brushes that I have built myself and use in GIMP, as well as the excellent brushes from https://krita.org/learn/resources/. I usually work exploring layer styles, mix brushes, and various brushes that simulate bristles and acrylic or oil painting. I also always use textures with some default brushes in certain phases of the work.

Where can people see more of your work?

http://mozartcoutoimagens.blogspot.com.br/

Anything else you’d like to share?

As I said before, I am still in a learning phase, but I think Krita perfectly serves illustrators and comic book artists in general through its additional tools and resources for creating perspective drawings, for example. I hope that the team of developers and users will keep working so that the program becomes ever more widely used and known. Thank you for the invitation to the interview.

February 27, 2016

Learning to Weld

I'm learning to weld metal junk into art!

I've wanted to learn to weld since I was a teen-ager at an LAAS star party, lusting after somebody's beautiful homebuilt 10" telescope on a compact metal fork mount. But building something like that was utterly out of reach for a high school kid. (This was before John Dobson showed the world how to build excellent alt-azimuth mounts out of wood and cheap materials ... or at least before Dobsonians made it to my corner of LA.)

Later the welding bug cropped up again as I worked on modified suspension designs for my X1/9 autocross car, or fiddled with bicycles, or built telescopes. But it still seemed out of reach, too expensive and I had no idea how to get started, so I always found some other way of doing what I needed.

But recently I had the good fortune to hook up with Los Alamos's two excellent metal sculptors, David Trujillo and Richard Swenson. Mr. Trujillo was kind enough to offer to mentor me and let me use his equipment to learn to make sculptures like his. (Richard has also given me some pointers.)

[My first metal art piece] MIG welding is both easier and harder than I expected. David Trujillo showed me the basics and got me going welding a little face out of a gear and chain on my very first day. What a fun start!

In a lot of ways, MIG welding is actually easier than soldering. For one thing, you don't need three or four hands to hold everything together while also holding the iron and the solder. On the other hand, the craft of getting a good weld is something that's going to require a lot more practice.

Setting up a home workshop

I knew I wanted my own welder, so I could work at home on my own schedule without needing to pester my long-suffering mentors. I bought a MIG welder and a bottle of gas (and, of course, safety equipment like a helmet, leather apron and gloves), plus a small welding table. But then I found that was only the beginning.

[Metal art: Spoon cobra] Before you can weld a piece of steel you have to clean it. Rust, dirt, paint, oil and anti-rust coatings all get in the way of making a good weld. David and Richard use a sandblasting cabinet, but that requires a big air compressor, making it as big an investment as the welder itself.

At first I thought I could make do with a wire brush wheel on a drill. But it turned out to be remarkably difficult to hold the drill firmly enough while brushing a piece of steel -- that works for small areas but not for cleaning a large piece or for removing a thick coating of rust or paint.

A bench grinder worked much better, with a wire brush wheel on one side for easy cleaning jobs and a regular grinding stone on the other side for grinding off thick coats of paint or rust. The first bench grinder I bought at Harbor Freight had a crazy amount of vibration that made it unusable, and their wire brush wheel didn't center properly and added to the wobble problem. I returned both, and bought a Ryobi from Home Depot and a better wire brush wheel from the local Metzger's Hardware. The Ryobi has a lot of vibration too, but not so much that I can't use it, and it does a great job of getting rust and paint off.

[Metal art: grease-gun goony bird] Then I had to find a place to put the equipment. I tried a couple of different spots before finally settling on the garage. Pro tip: welding on a south-facing patio doesn't work: sunlight glints off the metal and makes the auto-darkening helmet flash frenetically, and any breeze from the south disrupts everything. And it's hard to get motivated to go outside and weld when it's snowing. The garage is working well, though it's a little cramped and I have to move the Miata out whenever I want to weld if I don't want to risk my baby's nice paint job to welding fumes. I can live with that for now.

All told, it was over a month after I bought the welder before I could make any progress on welding. But I'm having fun now. Finding good junk to use as raw materials is turning out to be challenging, but with the junk I've collected so far I've made some pieces I'm pretty happy with, I'm learning, and my welds are getting better all the time.

Earlier this week I made a goony bird out of a grease gun. Yesterday I picked up some chairs, a lawnmower and an old exercise bike from a friend, and just came in from disassembling them. I think I see some roadrunner, cow, and triceratops parts in there.

Photos of everything I've made so far: Metal art.

February 25, 2016

Migrating from xchat: a couple of hexchat fixes

I decided recently to clean up my Debian "Sid" system, using apt-get autoclean, apt-get purge `deborphan`, aptitude purge ~c, and aptitude purge ~o. It gained me almost two gigabytes of space. On the other hand, it deleted several packages I had long depended on. One of them was xchat.

I installed hexchat, the fully open replacement for xchat. Mostly, it's the same program ... but a few things didn't work right.

Script fixes

The two xchat scripts I use weren't loading. Turns out hexchat wants to find its scripts in .config/hexchat/addons, so I moved them there. But xchat-inputcount.pl still didn't work; it was looking for a widget called "xchat-inputbox". That was fairly easy to patch: I added a line to print the name of each widget it saw, determined the name had changed in the obvious way, and changed

    if( $child->get( "name" ) eq 'xchat-inputbox' ) {
to
    if( $child->get( "name" ) eq 'xchat-inputbox' ||
        $child->get( "name" ) eq 'hexchat-inputbox' ) {
That solved the problem.

Notifying me if someone calls me

The next problem: when someone mentioned my nick in a channel, the channel tab highlighted; but when I switched to the channel, there was no highlight on the actual line of conversation so I could find out who was talking to me. (It was turning the nick of the person addressing me to a specific color, but since every nick is a different color anyway, that doesn't make the line stand out when you're scanning for it.)

The highlighting for message lines is set in a dialog you can configure: Settings→Text events...
Scroll down to Channel Msg Hilight and click on that elaborate code on the right: %C2<%C8%B$1%B%C2>%O$t$2%O
That's the code that controls how the line will be displayed.

Some of these codes are described in Hexchat: Appearance/Theming, and most of the rest are described in the dialog itself. $t is an exception: I'm not sure what it means (maybe I just missed it in the list).

I wanted hexchat to show the nick of whoever called my name in inverse video. (Xchat always made it bold, but sometimes that's subtle; inverse video would be a lot easier to find when scrolling through a busy channel.) %R is reverse video, %B is bold, and %O removes any decorations and sets the text back to normal, so I set the code to: %R%B<$1>%O $t$2. That seemed to work, though after I exited hexchat and started it up the next morning it had magically changed to %R%B<$1>%O$t$2%O.

Hacking hexchat source to remove hardwired keybindings

But the big problem was the hardwired keybindings. In particular, Ctrl-F -- the longstanding key sequence that moves forward one character -- brings up a search window in hexchat. (Xchat had this problem for a little while, many years ago, but they fixed it, or at least made it sensitive to whether the GTK key theme is "Emacs".)

Ctrl-F doesn't appear in the list under Settings→Keyboard shortcuts, so I couldn't fix it that way. I guess they should rename that dialog to Some keyboard shortcuts. Turns out Ctrl-F is compiled in. So the only solution is to rebuild from source.

I decided to use the Debian package source:

apt-get source hexchat

The search for the Ctrl-F binding turned out to be harder than it had been back in the xchat days. I was confident the binding would be in one of the files in src/fe-gtk, but grepping for key, find and search all gave way too many hits. Combining them was the key:

egrep -i 'find|search' *.c | grep -i key

That gave a bunch of spurious hits in fkeys.c -- I had already examined that file and determined that it had to do with the Settings→Keyboard shortcuts dialog, not the compiled-in key bindings. But it also gave some lines from menu.c including the one I needed:

    {N_("Search Text..."), menu_search, GTK_STOCK_FIND, M_MENUSTOCK, 0, 0, 1, GDK_KEY_f},

Inspection of nearby lines showed that the last GDK_KEY_ argument is optional -- there were quite a few lines that didn't have a key binding specified. So all I needed to do was remove that GDK_KEY_f. Here's my patch:

--- src/fe-gtk/menu.c.orig      2016-02-23 12:13:55.910549105 -0700
+++ src/fe-gtk/menu.c   2016-02-23 12:07:21.670540110 -0700
@@ -1829,7 +1829,7 @@
        {N_("Save Text..."), menu_savebuffer, GTK_STOCK_SAVE, M_MENUSTOCK, 0, 0, 1},
 #define SEARCH_OFFSET (70)
        {N_("Search"), 0, GTK_STOCK_JUSTIFY_LEFT, M_MENUSUB, 0, 0, 1},
-               {N_("Search Text..."), menu_search, GTK_STOCK_FIND, M_MENUSTOCK, 0, 0, 1, GDK_KEY_f},
+               {N_("Search Text..."), menu_search, GTK_STOCK_FIND, M_MENUSTOCK, 0, 0, 1},
                {N_("Search Next"   ), menu_search_next, GTK_STOCK_FIND, M_MENUSTOCK, 0, 0, 1, GDK_KEY_g},
                {N_("Search Previous"   ), menu_search_prev, GTK_STOCK_FIND, M_MENUSTOCK, 0, 0, 1, GDK_KEY_G},
                {0, 0, 0, M_END, 0, 0, 0},

After making that change, I rebuilt the hexchat package and installed it:

sudo apt-get build-dep hexchat
sudo apt-get install devscripts
cd hexchat-2.10.2/
debuild -b -uc -us
sudo dpkg -i ../hexchat_2.10.2-1_i386.deb

Update: I later wrote about how to automate this here: Debian: Holding packages you build from source, and rebuilding them easily.

And the hardwired Ctrl-F key binding was gone, and the normal forward-character binding from my GTK key theme took over.

I still have a couple of minor things I'd like to fix, like the too-large font hexchat uses for its channel tabs, but those are minor. At least I'm back to where I was before foolishly deciding to clean up my system.

FreeCAD Arch Workbench presentation

A video presentation of the Arch workbench of FreeCAD (http://www.freecadweb.org) that I did last week at ODC2016PN (https://twitter.com/hashtag/ODC2016PN).

February 23, 2016

Targeted selection for job interviews

A post by Amanda McPherson about her best interviewing tip over on LinkedIn got me thinking about an interview technique I was taught while on the GNOME board many years ago:

Focus on behavior. In jobs related to product management, business development, sales, marketing or communications, you have people who are verbally skilled. Ask them anything and you will likely get a good verbal response, but that doesn’t mean it’s true. Focusing on behavior — how they follow up, how and when they respond to your emails and questions, how they treat you vs others on the team for instance — yields more accurate data of how they will be on a daily basis.

She quotes the story of a Charles Schwab executive who would take candidates to breakfast interviews, and ask the restaurant to mix up the order deliberately – just to see how they would react to the stressful event.

The technique, which was taught to the GNOME board by Jonathan Blandford, goes one step further. The principle of targeted selection is that the best predictor of future behaviour is past behaviour. So if you are hiring someone to manage a team, ask about a time they were a manager in the past. If you need someone who can learn quickly in a new and fast moving domain, ask them about a time they were in a similar situation. Then dig deep for details – what did they do, how did they interact with others, how effective was the outcome of the situation?

As an example: if you want to know how someone reacts under pressure, ask about a time that they were working on a project that ran late. Ask them to describe the moment when they realised that they were not going to make the release date on time, on quality, as planned. Then ask how they reacted – did they reduce scope, fight for a schedule extension, add people, get everyone working weekends? Was there a post mortem after the project shipped? Who took the lead on that? How were the lessons applied in the next project? You can use a line of questioning like this to identify the people who will power through obstacles, regardless of the cost; people who are more consensual, but may lack decisiveness; people who seek help versus taking on too much burden. This type of insight is gold-dust when you are evaluating a candidate.

Some other ideas for questions:

  • If you want someone who can ramp up quickly in a new area, ask about the last technology they discovered and became expert on. Then ask about the early days – was their instinct to read blogs, books, tutorials? To follow practical labs? To pay for training? Did they seek out people to ask questions and share knowledge? How did they evaluate where they were in the learning process? Have they stayed active and learning, or did they stop once they had enough knowledge to do the job? There is no right answer, but the approach they took will give you an idea of how they would attack a similar challenge in the future.
  • If inter-personal relationships are key to success in the job, dig into a time they had a significant disagreement (with a boss, with a subordinate, with a colleague, with someone in a community project) – something meaningful and important to them. How did they go about arguing their case? Was winning more important than getting a good solution? How important was the relationship to them?
  • If organisational skills are key: ask for an example of a time when they had to clean up after someone else. How did they go about draining the swamp? What do they say about the former organiser? How did they balance organising the existing system with allowing people to interact with the system and continue doing their jobs?

It isn’t just prospective employers who can use this technique to have better interviews. For candidates, this method can be a great way to prepare for and take ownership of an interview. Look at the job requirements and required experience. When were you in a situation where you got to show the skills required? What were your actions, and what were the results? You can tell a story about your experience that hits all of the job requirements, even if your interviewer is not asking questions about it.

Go one step further: interview your interviewer! Think about the situations in the past where you have been successful and unsuccessful, and come up with your requirements – take that knowledge into the interview, and ask questions to check whether the position is a good match for you. Interviews are a two-way street, and you are interviewing the company as much as they are interviewing you. Ask interviewers when they were confronted with certain situations, and dig into their experiences as employees of the company. Is this a company that expects you to work weekends to meet unrealistic deadlines? Are you thrown in at the deep end and expected to sink or swim? Is there a strict hierarchical structure, or are everyone’s perspectives heard and respected? Is there mobility within the company, or do people hit a developmental ceiling?

The great thing about this line of questioning is that it is not accessing the hypothetical side of the brain – you are not getting the idealised “I would…” answer where infinite time and resources, and everyone’s buy-in can be assumed. You are accessing memory banks, and the more details you get, the closer you get to the truth of how the person reacts. Especially great for providing insights are trade-offs, where there is no right answer – when two people want different things and you are there to adjudicate or be the intermediary, when you have to choose between two top priorities, when you only have enough time to do one of the three things that are important. In situations like that, you can really get insight into the approach and mentality of candidates, and also help candidates judge the culture and priorities of a company.


February 21, 2016

Announce: Entangle “Top” release 0.7.1 – an app for tethered camera control & capture

I am pleased to announce a new release 0.7.1 of Entangle is available for download from the usual location:

  http://entangle-photo.org/download/

This is mostly a bug fix release, but there was a little feature work on the film strip viewer widget: it has been rewritten to dynamically scale thumbnails according to the available space, and it caches thumbnails at 256px instead of 128px.

  • Fix linking problem with strict linkers
  • Misc spelling fixes to online help docs
  • Replace use of GSimpleAsyncResult with GTask
  • Specify versions when importing from python plugins
  • Remove use of deprecated GTK APIs
  • Render image stats overlay partially transparent
  • Fix error reporting when saving settings
  • Flush events after capture to avoid accidentally restarting preview
  • Make Nikon fine focus stepping finer
  • Ensure images are sorted by last modified date
  • Switch from 128 px to 256 px thumbnail sizes to benefit larger high dpi screens
  • Rewrite film strip browser to dynamically resize icons to fit available space
  • Draw symbolic icons in film strip if image is not yet loaded
  • Refresh translations from Zanata

February 20, 2016

jpeg2RAW Guest Spot

An interview! LGM update! And Github?

Mike Howard, the host and creator of the jpeg2RAW podcast, reached out to me last week to see if I might be able to come on the show to talk about Free Software Photography and what we’ve been up to here. One of the primary reasons for creating this site was to be able to raise awareness of the Free Software community to a wider audience.

So this is a great opportunity for us to expose ourselves!

Exposing Ourselves

The podcast airs live this Tuesday, February 23rd at 8PM Eastern (-0500). You can join us at the jpeg2RAW live podcast page! Mike has the live feed available to watch on that page and also has a chat server set up so viewers can interact with us live during the broadcast.

If you are free on Tuesday night then come on by and join us! I’ll be happy to field any questions you want answered (and that Mike asks) and will do my best to not embarrass myself (or our community). If you would like to make sure I address something in particular (or just don’t forget something), I also have a thread on discuss where you can make sure I know it.

I’m also looking for community members to submit some photos to help highlight our work and what’s possible with Free Software. Feel free to link them in the same thread as above. I’ve already convinced andabata to point us to some of his great macro shots (like that awesome lede image) and I’ll be submitting a few of my own images as well. If you have some works that you’d like to share please let me know!

In Case You Miss It

Mike has all of his prior podcasts archived on his Podcasts page. So if you miss the live show it looks like you’ll be able to catch up later at your convenience.

LGM Update

As mentioned previously we are heading to London for Libre Graphics Meeting 2016! We’ve got a flat rented for a great crew to be able to stay together and we’re on track for a PIXLS meet up before LGM!

Speaking of people, I’m looking forward to being able to spend some time with some great folks again this year! We’ve got Tobias, Johannes, and Pascal making it out (I’m not sure that Simon, top below, will be making it out) from darktable, DrSlony and qogniw from RawTherapee, Andrea Ferrero creator of PhotoFlow, even Ofnuts (how cool is that?) may make it out!

[Darktable II] Pascal, Johannes, and Tobias (left to right, bottom row) will be there!

We’ve also already had a great response so far on our Pledgie campaign. The campaign is still running if you want to help out!


If anyone is thinking they’d like to make it out to join us, please let me know as soon as possible so we can plan for space!

[Mairi (Further)] Looks like Mairi will be joining us!

My friend and model Mairi will also be making it out for the meeting. She’ll be on hand to help us practice lighting setups, model interactions, and will likely be shooting right along with the rest of us as well!

I’ll also be assembling slides for my presentation during LGM. I’ve got a 20 minute time slot to talk about the community we’ve been building here and the neat things our members have been up to (Filmulator, PhotoFlow, and more).

Speaking of slides and sharing information…

Github Organization

I’ve setup a Github Pixls organization so that we can begin to share various things. This came about after talking with @paperdigits on the post about the upcoming podcast at jpeg2RAW. We were talking about ways to share information and assets for creating/delivering presentations about Free Software photography.

At the moment there is only the single repository Presentations as we are figuring out structure. I’ve uploaded my slides and notes from the LGM2015 State of the Libre Graphics presentation announcing PIXLS. If you’re on Github and want to join us just let me know!

February 19, 2016

GIMP ditty: change font size and face on every text layer

A silly little GIMP ditty:
I had a Google map page showing locations of lots of metal recycling places in Albuquerque. The Google map shows stars for each location, but to find out the name and location of each address, you have to mouse over each star. I wanted a printable version to carry in the car with me.

I made a screenshot in GIMP, then added text for the stars over the places that looked most promising. But I was doing this quickly, and as I added text for more locations, I realized that it was getting crowded and I wished I'd used a smaller font. How do you change the font size for ALL font layers in an image, all at once?

Of course GIMP has no built-in method for this -- it's not something that comes up very often, and there's no reason it would have a filter like that. But the GIMP PDB (Procedural DataBase, part of the GIMP API) lets you change font size and face, so it's an easy script to write.

In the past I would have written something like this in script-fu, but now that Python is available on all GIMP platforms, there's no reason not to use it for everything.

Changing font face is just as easy as changing size, so I added that as well.

I won't bother to break it down line by line, since it's so simple. Here's the script: changefont.py: Mass change font face and size in all GIMP text layers.
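
For the curious, here is a minimal sketch of the same idea (mine, not the linked changefont.py; it uses the standard GIMP Python-Fu PDB calls, with the font name and size hardcoded for brevity, and only walks top-level layers):

    # Minimal sketch: change the font and size on every text layer of an
    # image, from GIMP's Python-Fu console. See the linked changefont.py
    # for the full, interactive version.
    from gimpfu import *   # provides gimp and pdb when run inside GIMP

    def change_all_text_layers(image, font="Sans", size=10):
        # Walk the image's top-level layers, changing only text layers
        for layer in image.layers:
            if pdb.gimp_item_is_text_layer(layer):
                pdb.gimp_text_layer_set_font(layer, font)
                # The third argument is the size unit; 0 means pixels
                pdb.gimp_text_layer_set_font_size(layer, size, 0)
        gimp.displays_flush()

    # e.g. in the Python-Fu console:
    # change_all_text_layers(gimp.image_list()[0], "Sans", 10)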

February 18, 2016

Things I’ve Done and Things to Come

When I look back on my life so far, the things that are the most satisfying and fulfilling are those I have built myself or with others.

I’ve never built a house, a piece of furniture, or much of anything physical at all. I have, however, helped to create some less tangible things that have been enormously rewarding.

As a mostly self-serving exercise, I’m going to list out these things that I’m proud to have created. I would encourage you to do the same.

My family is the most significant and important thing that I’ve helped to build. Though a family is a lot of work, it is a gift, not a product. It is so much more important than these other things that it really belongs in a separate category.

So, here are some things I have made (or helped to make):

I made a blog (Acts of Volition)

This weblog was started in August of 2000 with two friends (Rob and Matt). It went on to be the venue for me to write 160,000 words across over 1,200 posts. There have been over 10,000 real comments by real people on the site.

One post I made on this site in October of 2003 about the visual design of Mozilla products sparked the beginning of a personal and professional relationship with Firefox and Mozilla that has lasted for 13 years. It brought me to meet extraordinary people, visit Whistler, Toronto, Portland, San Francisco, Mountain View, and gather countless free t-shirts.

In the early years of the blog, when I was writing regularly, I felt it was a window into a real community. Many of the posts are links to things that were silly and many are now irrelevant. Some were more meaningful. All of them were fun or interesting (to me, at least).

I made a podcast (Acts of Volition Radio)

In 2003, I started a podcast that I described as “assembling a bit of music, talking about who it is and why I like it”. Over six years I produced 34 episodes, including 261 songs. That’s over 24 straight hours of music and me talking about music.

While it’s embarrassing to listen to myself ramble, especially in the earlier episodes, I’m not embarrassed of a single one of the songs I picked. People would tell me they discovered artists and songs from the podcast, and went on to buy their music or see them live. I love hearing this.

I have always claimed that Acts of Volition Radio had no publishing schedule. I still consider it alive - there’s just a still-growing seven-year gap between the last episode and the next. You never know.

Though the weblog and podcast were about connecting with people, they were primarily solitary creations. The following few are things I created in direct collaboration with others.

I made a conference, twice (Zap Your PRAM)

With my friends Peter Rukavina and Dan James (and help from others at silverorange), I created a small conference we called Zap Your PRAM. We hosted it first in 2003, in Cavendish, Prince Edward Island. Twenty-five or so extraordinary people came and spoke about film-making, the web, technology, radio, and whatever else interested them.

Five years later, in 2008, twice as many people came and spoke about music, radio, the web, and again, whatever else interested them. This second instance of the conference was hosted in the extraordinary Dalvay By-The-Sea hotel. I will never forget eating dinner with new friends in the Dalvay dining room, talking around the enormous lobby fireplace, or playing touch-football by the Dalvay lake.

These two conferences created friendships that continue over a decade later. There may yet be a third Zap (code-name Zap Your 3RAM). The five year gap between the first two isn’t enough to determine their regularity. Another could occur at any time.

I made an album (with Horton’s Choice)

I would highly recommend having been in a high-school rock band. For me, this band was Horton’s Choice. In addition to having a terrible band name, we were overly earnest and usually too loud. Our old website described us as follows:

Horton’s Choice was a rock band from Charlottetown, Prince Edward Island. We had delusions of grandeur, a lot of fun, spent a week in a recording studio in 1999, and then we broke up. A good time was had by all.

That recording studio was attached to a gift-shop in Borden-Carleton, at the Prince Edward Island side of the Confederation Bridge. It was a small and affordable studio with a studio tech who knew how to operate the recording equipment. We did everything else ourselves.

Each evening for a week, we worked from around 5pm until 11pm or so. We used the money we had made from our lousy part-time jobs to pay for the studio time. We recorded nine songs for a record that was never really named and never really released. I called it The Borden-Carleton Sessions. We had spent all of our money on recording and didn’t have any left over to produce actual CDs.

We weren’t a great band - but we loved playing together, and the recording we made is something I will always be proud (and a little embarrassed) of.

I made a company (silverorange)

Finally, something I helped to create that has probably had the most direct impact on my life and the lives of others is the company where I still work today.

In high school, a friend and I started a small web design business that was relatively successful given our modest goals. In 1999, we met a similar small company and joined with them to create silverorange.

There were seven (equal) founders. Three of the original seven moved on to bigger things, but the remaining four still work at and run silverorange today. In the sixteen years of silverorange, I’ve had the opportunity to work for amazing clients, like Mozilla, the original Digg, and Duolingo. We’ve helped companies sell seeds, crystalware, furniture hardware, medical education, and a ton of other things.

I’ve met and worked with people in many different companies and locations. I’ve had years early-on where I didn’t get paid. I’ve since helped to build something that now supports 11 great people. While it is a corporation - an intangible legal entity - silverorange has an office, clients, products, and most importantly people. It may be the closest I get to building a home.

While nostalgia can be fun, the reward of making, building, and creating isn’t confined to the past. I don’t know what else I’ll build, but experience tells me that if I really do build something, it will be worthwhile.

February 17, 2016

SVG Working Group Meeting Report — Sydney — 2016

The SVG Working Group had a four day face-to-face meeting in Sydney this month. Like last year, the first day was a joint meeting with the CSS Working Group. I would like to thank all the people that donated to Inkscape’s SVG Standards Work fund as well as to the Inkscape general fund that made my attendance possible.

Joint CSS and SVG Meeting

Minutes

CSS Stroke and Fill Specification

The CSS working group would like to allow the use of the SVG ‘stroke’ and ‘fill’ properties on CSS text as well as in other places in CSS (e.g. box borders). They’ve created a prototype document that basically copies the SVG stroke and fill text, adding the necessary parts to get it to work with CSS text. This document has been temporarily called Text Decoration 4 (the title will certainly be changed). They’ve proposed converting the ‘stroke’ and ‘fill’ properties to short-hands. (A short-hand property allows setting multiple CSS properties at the same time.) They also would like to see the ‘stroke-alignment’ property implemented (this property allows one to stroke only the inside or only the outside of a shape). I pointed out the difficulty in actually defining how ‘stroke-alignment’ would work. The SVG WG moved some of the advanced stroking properties out of the SVG 2 specification into an SVG Stroke module to avoid holding up the SVG 2 specification. (See my blog entry on this as well as the issues in the SVG Stroke module.) Other issues discussed were how glyphs are painted (‘paint-order’, ‘text-shadow’), multiple strokes/fills, dashing, and ‘text-decoration-fill/stroke’.

Text Issues

Next we covered a whole slew of text issues I raised dealing with flowed text in SVG.

Strategy for using CSS in SVG for wrapping

The first issue was to agree on how SVG and CSS are related. I presented my strategy: HTML/CSS and SVG each have their own methods to define an area to fill, called the wrapping area. Once this area is defined, one uses CSS rules to fill it. Here is how one defines the wrapping area in both HTML/CSS and SVG:

The CSS/HTML code:
    <style>
      .wrapper { shape-inside: xxx; ... }
      .float-left { shape-outside: yyy; ... }
      .float-right { shape-outside: zzz; ... }
    </style>
    <div class="wrapper">
      <div class="float-left"></div>
      <div class="float-right"></div>
      <p>
	Some text.
      </p>
    </div>
The result:
Defining a fill area using <div> and floats.

Wrapped text in HTML. One starts with a wrapper <div>. The ‘shape-inside’ property on this <div> reduces the wrapping area to the circle. Two float <div>s are defined, one on the left (green rectangle) and one on the right (red rectangle). The area that the floats exclude is reduced to the half-ellipses defined by their ‘shape-outside’ properties. The final wrapping area is the light blue shape.

The CSS/SVG code:
    <style>
      .my_text { shape-inside: xxx; shape-outside: yyy, zzz; ... }
    </style>
    <text class="my_text">Some text.</text>
The result:
Defining a fill area in SVG.

Wrapped text in SVG 2. One starts with a <text> element. The ‘shape-inside’ property on this element defines the wrapping area. The ‘shape-outside’ property reduces the wrapping area. The final wrapping area is the light blue shape.

It was pointed out in a discussion on Day 2 that the use of ‘shape-outside’ in SVG was not consistent with the CSS model. The ‘shape-outside’ property defines the effective shape of an element as seen by other elements. We agreed to change ‘shape-outside’ to an SVG-only property, ‘shape-subtract’.

How is the first line placed in a wrapped shape?

When the top of a wrapping area is not horizontal, how do you decide where the first line of text is placed?
Different strategies for locating the position of the first line in a pyramid shape.

Alternative solutions for where to start layout of the first line, from top to bottom: the first place a chunk of text fits with no restrictions; restricted to multiples of ‘line-height’; restricted to multiples of half the ‘line-height’.

We were informed that with CSS floats, the line is moved down until the first text chunk fits. To be consistent with CSS, we should follow the same rule. A future CSS Line Grid specification may allow one to control line position.

Overflow text

What should happen when the shape isn’t big enough for all the text? This is mostly an issue for browsers, where the user can specify a larger font (i.e. for accessibility reasons). CSS has an ‘overflow’ property that selects either clipping or scrolling. Neither of these is a great solution for SVG. The tentative solution in the CSS Shapes 2 specification of extending the <div> below the wrapping area doesn’t work for SVG. I proposed that there should be a means to expose the overflowed text, such as displaying it on hovering over the ellipsis at the end of the displayed text. There was some interest in this. For the moment, the best solution is probably to explicitly not define what should happen, leaving it to a future time to specify the behavior. Reflecting on this after the meeting, I think one strategy is to suggest that authors provide an overflow region by adding an additional shape to the value of the ‘shape-inside’ property.

How does text wrap in a shape with a doughnut hole or other feature that breaks a line into parts?

Since SVG can have arbitrary shapes, it is possible to create shapes that break a single text line into parts. How should these shapes be filled?
A shape a bit like an 'H', showing text laid out on both sides of the central break.

An example of a shape that breaks lines into parts.

The ‘wrap-flow’ property does not apply here as that dictates how text is flowed around floats. A new ‘wrap-inside’ property has been proposed. For the moment, however, it was agreed that text should flow as shown in the above figure. This would be the default value of any new property.

Flowing into multiple shapes

The aborted SVG 1.2 specification allowed text to be flowed sequentially into multiple shapes. This is something that Inkscape implemented and I would like to see this ability preserved. The proposed CSS methods for doing this don’t work for SVG. I proposed giving the ‘shape-inside’ property a list of shapes. It was agreed that this would be an acceptable solution for SVG. (And it can provide a place for over-flowed text.)

How is the first glyph positioned on a line?

When dealing with rectangles, it is straightforward to find the position of the first glyph, but with arbitrary shapes it is more difficult. I asked what the correct CSS method was. I was told that one considers the glyph box extended upwards and downwards to the height of the entire line box. (For example, if one has a ‘line-height’ value of 2 with a 16px font, the line box has a height of 32px.) It’s not clear where this is specified in CSS.
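
In CSS terms (assuming the model as it was described to us), the values involved would simply be:

    <style>
      /* a 'line-height' of 2 on a 16px font gives a 32px line box;
         each glyph box is extended to that height when testing
         where a line fits in the shape */
      text { font-size: 16px; line-height: 2; }
    </style>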

Baseline issues

We next switched from text wrapping to baseline issues. Text is normally laid out along a baseline, which can differ depending on the script. For example, Western scripts normally use an alphabetic baseline while Indic scripts use a hanging baseline.
Three different scripts showing their baselines.

Example baselines (red lines) in three different scripts. From left to right: alphabetic, hanging, ideographic. The EM box is shown in blue for the ideographic script.
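
For reference, SVG 1.1 already lets one select the baseline per element via the ‘dominant-baseline’ property; a minimal example (the label text is mine):

    <svg xmlns="http://www.w3.org/2000/svg" width="300" height="120">
      <!-- Western scripts: glyphs sit on the alphabetic baseline -->
      <text x="10" y="40" dominant-baseline="alphabetic">Alphabetic</text>
      <!-- Indic scripts: glyphs hang from the hanging baseline -->
      <text x="10" y="80" dominant-baseline="hanging">Hanging</text>
    </svg>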

The proposed CSS definition of the ‘dominant-baseline’ property differs from the SVG 1.1 definition in that several of the SVG 1.1 values are missing. We discussed the missing values; some of them will be added to the CSS definition. There is one fundamental difference between CSS and SVG 1.1: with SVG 1.1 (and XSL 1.1), the baseline table does not change automatically when the font size is changed. One must explicitly reset the table by setting the ‘dominant-baseline’ property to the ‘reset-size’ value. The proposed CSS definition will automatically reset the table on a change in font size. I’m not sure this is necessarily a good change (it’s definitely not backwards compatible), but this is probably such a small corner case that it doesn’t really matter.

Figure from the XSL 1.1 specification showing the default behavior upon ‘font-size’ change. The baseline table does not change.

The CSS ‘auto’ value has one small problem for SVG. With vertical text, if the value of ‘text-orientation’ is ‘sideways’, the alphabetic baseline is used. SVG 1.1 always uses the central baseline for vertical text. The CSS specification will be fixed to be compatible with SVG 1.1.
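
For the record, the combination in question, using the properties from CSS Writing Modes:

    <style>
      /* vertical text with rotated (sideways) glyphs; per the agreed
         fix, SVG keeps using the central baseline here, as in SVG 1.1 */
      text { writing-mode: vertical-rl; text-orientation: sideways; }
    </style>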

We also discussed default values for the various baselines. In principle, one gets the values of the baselines from the font but most fonts don’t have baseline tables; for interoperability we need defaults. It was decided that the CSS group would investigate this more and come up with a recommended set of default values.

Filters

I brought up a couple of issues dealing with the SVG/CSS Filter Effects module. The first was the status of the document: progress towards finishing it seems to have stopped, and two of the three editors are no longer very active. The third editor was present and said he would push this through.

I was also interested in the next level as I have a filter primitive I would like to see added. It turns out that Level 2 has already been started. Not much is in there now. Apple, however, has a bunch of filter primitives they would like to add.

Next we covered the issue of artifacts created by using eight bits per channel (8 bpc) for colors in the filter chain. The specification as written doesn’t directly require that 8 bpc color be used, but a couple of examples do assume this. I proposed that they be converted to use the range 0 to 1 so that one can use floats to describe color channels; this would solve the problem. Dean Jackson from Apple will investigate the possibility of requiring that floats be used rather than ints in the filter chain.

A blob showing artifacts due to the use of only an 8 bit bump map as input to a lighting filter primitive.

An example of the artifacts one gets from using only an 8 bit alpha channel bump map as the input into a lighting filter primitive.
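
For illustration, a minimal filter of the kind that shows this (my own reconstruction, not the specification’s example): the blurred alpha channel acts as the bump map, and with 8 bpc intermediates its 256 quantization steps show up as contour-like bands in the lit result.

    <filter id="bumpy">
      <!-- the blurred alpha channel becomes the height (bump) map -->
      <feGaussianBlur in="SourceAlpha" stdDeviation="8" result="bump"/>
      <!-- lighting amplifies the 8-bit steps into visible bands -->
      <feDiffuseLighting in="bump" surfaceScale="12" lighting-color="white">
        <feDistantLight azimuth="235" elevation="40"/>
      </feDiffuseLighting>
    </filter>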

Gradient Banding

An unplanned topic: how to get rid of banding in gradients. Dithering is a well-known technique; can it be added to CSS gradients? There seems to be support for the idea. The syntax and technique need to be specified.

SVG Meeting — Day 1

Minutes

Path Stroking: Conformance

Some time ago I brought up the inconsistency in how paths are stroked when half the stroke width is greater than the radius of curvature (see my blog post). I suggested that maybe we should define the proper way to stroke a path. Another SVG Working Group member took an action to research this topic further; he consulted members of the PDF standards group as well as hardware vendors. After he presented his findings, we agreed that being more specific at this time isn’t really a viable option, as the way paths are stroked is such a fundamental property of the underlying graphics libraries used to render SVG.

Fallback for the ‘arcs’ Line Join

It has been noted that the currently specified fallback for the ‘arcs’ line join, a ‘miter’ join, is less than ideal. I presented a number of alternative options to the working group. They agreed to change the fallback to one of two candidates:

Fallback options: Blue: increasing the radius of the inner arc to meet the outer arc. Red: increasing the radius of the inner arc while decreasing the radius of the outer arc until the two meet.

I added the different possible fallbacks to the ‘Join type’ LPE in Inkscape trunk for people to test. A full discussion can be found here. If you have an opinion, let me know.

Path morphing

With the possible demise of SMIL animations, I asked about the status of turning the path data attribute into a property so that it can be animated using CSS/Web Animations. The response was that it hasn’t been forgotten and that Chrome will soon have an implementation. (SMIL usage shot up dramatically after YouTube started using SMIL to animate the Play/Pause button.) CSS path animation will be based on SVG 1.1’s path animation. A future version might include more flexible rules for path interpolation (at the moment, animated paths must have the same number and type of path commands).

SVG Meeting — Day 2

Minutes (some minutes are missing due to operator error; the meeting crossed midnight GMT, which confuses the minute-taking bot).

Presentation Attributes

SVG has the idea of presentation attributes. These are attributes that can also be set using CSS. Recently, we’ve promoted quite a few geometric attributes to be presentation attributes to allow setting them with CSS (it does seem a bit strange to “style” the position of a rectangle… but that is what we’ve enabled). As we add new properties, should these also be turned into presentation attributes? It is a bit of a maintenance burden to ensure all new properties are also presentation attributes, especially as we adopt a plethora of new text properties. We have already decided to require CSS so there is not necessarily a need for new presentation attributes. HTML has already deprecated/removed presentation attributes in favor of CSS. After a bit of discussion, we have decided to follow HTML’s lead.
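
For example, with the geometry properties we have already promoted, one can now write something like this (SVG 2; renderer support varies):

    <style>
      /* the rectangle's position and size come from CSS,
         not from attributes */
      rect { x: 20px; y: 20px; width: 120px; height: 60px; }
    </style>
    <rect fill="lightblue"/>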

Other Topics

Topics covered included how SVG is integrated into HTML, coordinate precision in generated content, and aligning dashes to corners. As mentioned earlier, we decided to create a ‘shape-subtract’ property rather than misuse the ‘shape-outside’ property. Most of the afternoon was spent with specification editing.

SVG Meeting — Day 3

This was a specification editing day. (It is extremely useful to have the working group present when editing as any issues that arise can be immediately dealt with.)

February 16, 2016

NASA’s Experience Curiosity

It is amazing to see how NASA is using Blender 3D for their innovative projects. From the controllable rover web app Experience Curiosity, to simulated space exploration of exoplanets, to mobile-based Augmented Reality, NASA is at the forefront of demonstrating the benefits of having Blender as an interactive 3D tool.

Brian Kumanchik, Project Lead & Art Director of NASA Jet Propulsion Laboratory, has this to say about Blender…

I started using Blender personally about 6 years ago as a free alternative to Maya and Max, when I started my own business modeling and selling aircraft for Microsoft Flight Simulator (http://simflight3d.com/) after being laid off from my job in the video game industry. I decided to try an all open source route and found the tools very capable. I actually prefer Blender over both Maya and 3DS Max. It was my Blender/GIMP-created aircraft that landed me the job with NASA. And the fact that I’m using open source tools at NASA means that the public can download my models and open them up and play with them without spending money on 3D software. I do have about 25 years of experience in the video game industry, mostly using 3DS Max.

The Blend4Web decision was made because it was already Blender-friendly, had a physics engine and was the most mature WebGL engine out at the time. They were also willing to work with us.

The Blender/Blend4Web pipeline was pretty smooth. The only problem with working in WebGL is that browsers are forever changing and features get turned on and off daily. But on the plus side, our app runs on mobile devices without any changes, except to accommodate the smaller screens.

Watch for other apps using Blender, GIMP and Blend4Web in the future.


NASA’s SPACECRAFT3D Augmented Reality App

More NASA projects using Blender:

Brian is also using Blender for his upcoming board game called Project Mars.

February 15, 2016

punters vs. society, an illustrated guide

Recently I wrote about how all users act like selfish punters when you talk to them. And that only user research delivers insight into their diversity and needs, looking at them as a society. I will illustrate that in this blog post.

brownie snaps

Below, we see the users of a product—e.g. a piece of software, an (internet) service, a device:

32 people distributed in a loose formation.

All users are different. Above we see the diversity of this user society depicted in two dimensions; no two are in the same spot. In reality this spread happens in dozens of dimensions. Even for very specialised user groups like ‘font designers for Indochinese languages,’ there is plenty of diversity in many dimensions.

We all act like selfish punters; you, me, everyone. And when we are individually engaged in talk about a product, we will offer self‐centred opinions on what matters most and self-serving feature requests, i.e. our wants:

every single person pushes a want that is very different from everybody     else’s.

The dynamic we see illustrated above is very common in the software industry. You will find it in 99% of—

  • custom software projects for companies, NGOs or the government, during requirements gathering;
  • small and medium‐sized software companies where the boss, sales, support and/or consultants talk to customers and users;
  • any kind of user forum, online or otherwise;
  • take (a variant of) the second point above and combine it with the third, that’s F/LOSS, all of it;
  • anywhere where market research is deployed.

write write, scribble scribble

It is also very common in the software industry to earnestly administrate user wants:

the wants solidify at the edge of the picture.

Common forms of this are—

  • lists of use cases;
  • use cases in sheep’s clothing: user stories;
  • bug trackers; not necessarily as an enhancement or a feature request, it may be dressed up as a bug, or a usability problem;
  • feature roadmaps;
  • a lingering pressure, guilt, or fear in the minds of those in charge—boss, product manager, or maintainer;
  • a consensus among (part of) the crew: ‘people keep asking for XYZ, we could hack something basic in a couple of days’ (yeah sure).

Now we can compare this administration of wants to the user society:

the group of people is framed by the wishes.
We see that at best, wants are good at describing the fringe of the society, but 80% of them are downright far‑out and esoteric.

Since wants are so widely used in the industry to drive products and projects, there is overwhelming evidence that this leads to awful results. It is completely normal that a majority of time and talent is spent on adding far‑out and esoteric features that are instant bloat.

And thus everybody loses: makers, users and financial backers.

doing it right

Now we are going to stop flirting with disaster.

User research avoids all that has been described up to now in this post. The focus is on—

  • finding out what users need;
  • quality and depth of these findings;
  • understanding the mechanisms at play.

Instead of listening to everybody and their dog, researchers accompany a selection of representative participants on a ‘journey’ through the activity in question, while constantly picking their brains:

six paths run through the group of people, mostly concentrated in the     middle, some parts fly further out.

Above, we see depicted that when users still insist on pushing their wants (i.e. go far out), researchers reel them in by questioning, peeling away layers of rationalisation in order to understand the underlying needs.

These needs are not exotic, they are basic human needs (e.g. control, overview, feedback, organisation, a stable environment, a simple rule book) that form the center of the picture—and are central to making it work for users.

Researchers and designers collaborate on constructing a model—an understanding—of the user society. The centre of the picture is where one finds the commonalities in the (analysed) research material:

a cloud coincides with the density of the six paths.

This is the heart of the matter. It is surrounded by the diversity in needs as found by the research. Note that there is a hard edge to this model: what is outside is certainly out of scope.

The qualitative aspect of research (e.g. taking into account the tone of voice or facial expression when something is stated) makes a world of difference. It is this richness of information that makes user research so effective. We see above that some outliers have been ignored in the user model. This is done with full confidence, based on the research.

core, non‑core

Now we can compare the needs‐based user model to the user society:

the cloud of needs hides most of the people.

The coverage of the model works out so well that it is difficult to see the actual users. This is especially true for the heart of the matter. Note that some users at the very fringe of society may feel left out; they are covered a little bit or not at all.

In a design‐driven approach, the needs‐based user model is used to decide which features to add (cover the diversity, but nothing beyond the edge) and where to focus design and development effort (on the heart of the matter).

A/B testing

Zooming out, we now can compare the two approaches:

the frame of wants around the cloud of needs

When compared to the results of user research, the bog standard, wildly popular method of listening to users and their wants

  • fails to record the heart of the matter;
  • fails to record the diversity of user needs;
  • is a list of disparate wishes, instead of a coherent, nuanced insight;
  • highlights the fringe, the far‑out, the esoteric;
  • gets you completely the wrong picture.

postscript

This past semester I discussed all this with my students at the BTK design school. To get the point across about the dangers of talking to users, I asked them ‘do you remember these creatures in mythology that always give you the wrong answer, so that you get lost?’

‘Yeah’, they said, ‘they are called trolls.’

Interview with Wes Nunes


Could you tell us something about yourself?

My name is Weslley but everyone since my childhood calls me Wes, that’s why I sign my works as Wes Nunes. I’m 24 years old and I live in São Paulo, Brazil. I’m militant for the LGBT cause in my country and I work as a writer and comic book illustrator.

Do you paint professionally, as a hobby artist, or both?

I think every hobby has something of the professional in it, and every profession something of the hobby. It doesn’t matter whether I illustrate to make money or not, because it doesn’t change the fact that this is my source of satisfaction and passion. I think the freedom and joy of doing illustrations as a hobby, combined with the responsibility of doing illustrations for a living, is the perfect way to do my work.

What genre(s) do you work in?

I do comics, from webcomics to graphic novels. I did some political cartoons published in newspapers here in South America. Well, I’m a gay author and a militant for the LGBT cause, so all my work revolves around the social problems involving this population here. Many people from other countries think that Brazil is the country of sexual freedom. This is not true. There is much more violence than freedom, both for LGBT people and for blacks and women. My comics and illustrations are public evidence of all this violence and social conflict in my country.

Whose work inspires you most — who are your role models as an artist?

I’m openly a fan of the works of Alan Moore, Frank Miller and Neil Gaiman. All the work they made exercised some influence on me, but I have to say that my strongest influence was the work of Brazilians like Laerte Coutinho (kind of a goddess, whom I’ve been reading since I was a kid) and Fábio Moon & Gabriel Bá, who were awarded Eisner Awards some years ago. All of these form the package that determines my illustration style and comic book creation.

How and when did you get to try digital painting for the first time?

It was four years ago. I started with digital illustration because I had the intention of publishing political cartoons and webcomics. I ended up really enjoying the thing, you know? Very much. Now I do almost everything digitally and I have nothing to complain about it.

What makes you choose digital over traditional painting?

It is as if you were in an art supply store and could take everything for one dollar – or for free. It is like magic for artists like me who live in a small rented house whose owner will not let you mess up the walls with ink.

How did you find out about Krita?

A few years ago I was looking for illustration software that would satisfy me on the Linux operating system, which I was using at that moment. I had been looking for a long time when I found Krita and I have to say I fell in love with it. It was everything I was looking for and perfect to do my work. Even when I returned to the Windows system I continued to use Krita. All other software has become insignificant next to it.

What was your first impression?

“Oh my gosh this thing has everything! Everything!” I think that it was my first impression. The second must have been, “can a person marry a program?” Seriously, I adapted myself very quickly to Krita and it was so ideal for me, I loved it.


What do you love about Krita?

What I love about Krita? Just everything. Tools, brushes, it does not weigh on my computer, it was extremely easy to learn how to work on and it is a well-organized program. Not to mention that it has a beautiful interface. What else could I want in software?

What do you think needs improvement in Krita? Is there anything that really annoys you?

I think Krita needs more users. The more users the software has the better it will be and also with it the development team will get the recognition they deserve both financially and intellectually. I have done numerous independent works so I know how it is. Public recognition is very important for those who develop a work because it motivates you to do more and more.

What sets Krita apart from the other tools that you use?

Its ease of use, its simple interface, and the fact that it is completely free. There is strong community support for the development of Krita, and no wonder: it’s completely different from what is on the market. Krita is clean software. It doesn’t flood you all the time with the responsibility to make a donation. After using it, you spontaneously feel that it is fair to pay for it, to encourage the work of the developers.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I made a stripcomic for my blog Manifesto dos Quadrinhos (something like The Comics Manifest in English) called LAVAT, published months ago. It is a symbolic work because it toured the world and gave me visibility. It was an extremely emotional work because I portrayed a terrifying death scene of a young gay couple killed by strict anti-LGBT laws in fundamentalist countries in the Middle East. This was my favorite work made using Krita.


What techniques and brushes did you use in it?

I often use David Revoy’s paintbrushes, nothing more. It is enough for me. My work is very traditional comic book art, I think.

Where can people see more of your work?

You can see more of my work on tumblr (http://manifestodosquadrinhos.tumblr.com) and facebook, on my page (http://www.facebook.com/manifestodosquadrinhos) and my personal profile (http://www.facebook.com/wescomics).

Anything else you’d like to share?

I leave here my thanks to my family, friends and my boyfriend for putting up with me even in the most difficult moments. In addition, I send special thanks to the Krita team for developing such excellent software and for making this space where I can show my work.

February 13, 2016

Film developing setup that fits your backpack

It’s lots of fun developing your own black & white film. Here’s the setup I’ve been using. My goals were to keep costs down and to have a simple, compact setup that’s easy to use.

Developing tank and reel ~ £ 22

This is the main cost, and you want to make it a good one. You can shop around for a second-hand one for much less.

Thermometer ~ £ 4

To make sure the solutions are at the right temperature. A glass spirit thermometer also provides a means of stirring.

Developer ~ £ 5

A 120 mL bottle of Rodinal develops about 20 rolls of film at a 1+25 dilution. You can double the dilution to 1+50 for about 40 rolls; that’s just 12 pence per roll! This stuff lasts forever if you store it in the dark and airtight. Rodinal is a "one shot" developer, so you toss out your dilution after use.

Fixer ~ £ 3

Fixer dilution can be reused many times, so store it after use. One liter of a 1+5 dilution fixes 17 rolls of film.

To check if your fixer dilution is still good: take a piece of cut-off film leader and put it in a small cup filled with fixer. If the film becomes transparent after a few minutes, the fixer is still good to use.

Measuring jug ~ £ 3

To mix chemicals in. Get one with a spout for easy pouring.

Spout bags ~ £ 2

These keep air out compared to using bottles, so your chemicals will last longer. They save space too. Label them well, you don’t want to mess up!

Funnel ~ £ 1

One with a small mouth, so it fits the spout bags easily when you need to pour chemicals back.

Syringe ~ £ 1

To measure the amount of developer. Around 10 to 20 mL volume will do. Make sure to get one with 1 mL marks for more accurate measuring, and a blunt needle to easily extract from the spout bag.

Common household items

You probably already have these: a clothes peg, for hanging your developed film to dry, and a pair of scissors, to remove the film from the canister and to cut the film into strips after drying.

Developed Ilford HP5+ film

Total ~ £ 41

As you can see, it’s only a small investment. After developing a few rolls the equipment has paid for itself, compared to sending your rolls off for processing. There’s something special about seeing your images appear on a film for the first time that’s well worth it. Like magic. :)

Lockee to the rescue

Using public computers can be a huge privacy and security risk. There’s no way you can tell who may be spying on you using key loggers or other evil software.

Some friends and family don’t see the problem at all, and use any computer to log in to personal accounts. I actually found myself not being able to recommend an easy solution here. So I decided to build a service that I hope will help remove the need to sign in to sensitive services in some cases at least.

Example

You want to use the printer at your local library to print an e-ticket. As you’re on a public computer, you really don’t want to log in to your personal email account to fetch the document, for security reasons. You’re not too bothered about the personal information on the ticket, but typing in your login details on a public computer is a cause for concern.

This is a use case I have every now and then, and I’m sure there are many other similar situations where you have to log in to a service to get some kind of file, but you don’t really want to.

Existing storage services

There are temporary file storage solutions on the internet, but most of them give out links that are long and hard to remember, ask for an email address to send the links to, are public, or have some combination of these problems. Also, you have no idea what will happen to your data.

USB drives can help sometimes, but you may not always have one handy, it might get infected, and it’s easy to forget once plugged in.

Lockee to the rescue

Lockee is a small service that temporarily hosts files for you. Seen those luggage lockers at the railway station? It’s like that, but for files.

A Lockee locker

It allows you to create temporary file lockers, with easy to remember URLs (you can name your locker anything you want). Lockers are protected using passphrases, so your file isn’t out in the open.

Files are encrypted and decrypted in the browser, there’s no record of their real content on the server side. There’s no tracking of anything either, and lockers are automatically emptied after 24 hours.

Give it a go

I’m hosting an instance of Lockee on lockee.me. The source is also available if you’d like to run your own instance or contribute.

Ways to improve download page flow

App stores on every platform are getting more popular, and they take care of downloads in a consistent and predictable way. Sometimes stores aren’t an option, or you prefer not to use them, especially if you’re a Free and Open Source project and/or a Linux distribution.

Here are some tips to improve your project’s download page flow. They’re based on confusing things I frequently run into when trying to download FOSS projects, things that I think can be done a lot better.

This is in no way an exhaustive list, but is meant to help as a quick checklist to make sure people can try out your software without being confused or annoyed by the process. I hope it will be helpful.

Project name and purpose

The first thing people will (or should) see. Take advantage of this fact and pick a descriptive name. Avoid technical terms, jargon, and implementation details in the name. Common examples are: “-gui”, “-qt”, “gtk-”, “py-”, they just clutter up names with details that don’t matter.

Describe what your software does, what problem it solves, and why you should care. This sounds like stating the obvious, but this information is often buried under other, less important information, like which programming language and/or free software license is used. Make this section prominent on the website, and go easy on the buzzwords.

The fact that the project is Free and Open Source, whilst important, is secondary. Oh, and recursive acronyms are not funny.

Platforms

Try to autodetect as much as possible. Is the visitor running Linux, Windows, or Mac? Which architecture? Make suggestions more prominent, but keep other options open in case someone wants to download a version for a platform other than the one they’re currently using.

Architecture names can be confusing as well: “amd64” and “x86” are labels often used to distinguish between 32-bit and 64-bit systems, but they do a bad job at this. AMD is not the only company making 64-bit processors anymore, and “x86” doesn’t even mention “32-bit”.

Timestamps

Timestamps are a good way to find out whether a project is actively maintained; you can’t (usually) tell from a version number when the software was released. Use human-friendly date formatting that is unambiguous. For example, use “February 1, 2003” as opposed to “01-02-03”. If you keep a list of older versions, sort them by time and clearly mark which is the latest version.

File sizes

Again, keep it human readable. I’ve seen instances where file sizes are reported in bytes (e.g. 209715200 bytes, instead of 200 MB). Sometimes you need to round numbers or use thousands separators when numbers are large, to improve readability.

File sizes are mostly there to make rough guesses, and depending on context you don’t need to list them at all. Don’t spend too much time debating whether you should be using MB or MiB.

Integrity verification

Download pages are often littered with checksums and GPG signatures. Not everybody is going to be familiar with these concepts. I do think checking (source) integrity is important, but also think source and file integrity verification should be automated by the browser. There’s no reason for it to be done manually, but there doesn’t seem to be a common way to do this yet.

If you do offer ways to check file and source integrity, add explanations or links to documentation on how to perform these checks. Don’t just dump strange random character strings on pages. Educate, or get out of the way.

Keep in mind that search engines may link to the insecure version of your page. Not serving pages over HTTPS at all makes providing signature checks rather pointless, and could even give a false sense of security.

Compression formats

Again, something that should be handled by the browser. Compressing downloads can save a lot of time and bandwidth. Often though, especially on Linux, we’re presented with a choice of compression formats that hardly differ in size (.tar.gz, .tar.bz2, .7z, .xz, .zip).

I’d say pick one. Every operating system supports the .zip format nowadays. The most important lesson here, though, is not to burden people with irrelevant choices and clutter up the page.

Mirrors

Detect the closest mirror if possible, instead of letting people pick from a long list. Don’t bother for small downloads, as the time required to pick one will probably outweigh the benefit of the increased download speed.

Starting the download

Finally, don’t hide the link in paragraphs of text. Make it a big and obvious button.

San Francisco impressions

Had the opportunity to visit San Francisco for two weeks in March, it was great. Hope to be back there soon.

London Zoo photos

Visited the London Zoo for the first time and took a few photos.

A bit about taking pictures

Though I like going out and taking pictures at the places I visit, I haven’t actually blogged about taking pictures before. I thought I should share some tips and experiences.

This is not a “What’s in my bag” kind of post. I won’t, and can’t, tell you what the best cameras or lenses are. I simply don’t know. These are some things I’ve learnt and that have worked for me and my style of taking pictures, and wish I knew earlier on.

Pack

Keep gear light and compact, and focus on what you have. You will often bring more than you need. If you get the basics sorted out, you don’t need much to take a good picture. Identify a couple of lenses you like using and get to know their qualities and limits.

Your big lenses aren’t going to do you any good if you’re reluctant to take them with you. Accept that your stuff is going to take a beating. I used to obsess over scratches on my gear, I don’t anymore.

I don’t keep a special bag. I wrap my camera in a hat or hoody and lenses in thick socks and toss them into my rucksack. (Actually, this is one tip you might want to ignore.)

Watch out for gear creep. It’s tempting to wait until that new lens comes out and get it. Ask yourself: will this make me go out and shoot more? The answer usually is probably not, and the money is often better spent on that trip to take those nice shots with the stuff you already have.

Learn

Try some old manual lenses to learn with. Not only are these cheap and able to produce excellent image quality, it’s a great way to learn how aperture, shutter speed, and sensitivity affect exposure. Essential for getting the results you want.

I only started understanding this after having inherited some old lenses and started playing around with them. The fact that they’re all manual makes you realise more quickly how things physically change inside the camera when you modify a setting, compared to looking at abstract numbers on the screen at the back. I find them much more engaging and fun to use compared to fully automatic lenses.

You can get M42 lens adapters for almost any camera type, but they work specially well with mirrorless cameras. Here’s a list of the Asahi Takumar (old Pentax) series of lenses, which has some gems. You can pick them up off eBay for just a few tenners.

My favourites are the SMC 55mm f/1.8 and SMC 50mm f/1.4. They produce lovely creamy bokeh and, at the same time, great sharpness in the areas that are in focus.

See

A nice side effect of having a camera on you is that you look at the world differently. Crouch. Climb on things. Lean against walls. Get unique points of view (but be careful!). Annoy your friends because you need to take a bit more time photographing that beetle.

Some shots you take might be considered dumb luck. However, it’s up to you to increase your chances of “being lucky”. You might get lucky wandering around through that park, but you know you certainly won’t be when you just sit at home reading the web about camera performance.

Don’t worry about the execution too much. The important bit is that your picture conveys a feeling. Some things can be fixed in post-production. You can’t fix things like focus or motion blur afterwards, but even these are details and not getting them exactly right won’t mean your picture will be bad.

Don’t compare

Even professional photographers take bad pictures. You never see the shots that didn’t make it. Being a good photographer is as much about being a good editor as it is about shooting. The very best still take crappy shots sometimes, and alright shots most of the time. You just don’t see the bad ones.

Ask people you think are great photographers to point out something they’re unhappy about in that amazing picture they took. Chances are they will point out several flaws that you weren’t even aware of.

Share

Don’t forget to have a place to actually post your images. Flickr or Instagram are fine for this. We want to see your work! Even if it’s not perfect in your eyes. Do your own thing. You have your own style.

Go

I hope that was helpful. Now stop reading and don’t worry too much. Get out there and have fun. Shoot!

February 11, 2016

Moderate reviews in GNOME Software

I’m pondering adding something like this for GNOME Software:

[Screenshot of the proposed review moderation panel]

The idea is that it would be launched using a special geeky-user-only gnome-software --mode=moderate command line, so that someone can up-vote or down-vote any reviews that are compatible with their country code. This would be used by super-users of community distros like Fedora, and perhaps by people in a QA team for distros like RHEL. The reason I wanted to try using gnome-software rather than just doing it in a web app is that you have access to the user machine hash, so you can hide reviews the user has already voted on without requiring a username and password that has to be matched to the local machine. We also have all the widgets and code in place, so it is really just a couple of hundred lines of code for this sekret panel. The server keeps track of who voted on what, so reviewers can just close the app, open it a few weeks later, and continue moderating only the reviews that have come in since then.

I can’t imagine the tool would be used by many people, but it does make reviewing comments easy. Comments welcome.

GCompris: Patreon and New Logo

Hello everyone,

A few days ago, I created a page on Patreon to support my work on making new graphics for GCompris. As you may know, I started this project last year and could make a good start thanks to a little crowd-funding campaign. However, there’s a lot of work remaining to finish the task: a lot of activities need to be updated, and new activities will always need new design and graphics.

So if you want to support GCompris, you can become my patron for this project.
Before resuming my work on the activities, I took on the hard and delicate task of updating the logo and the main icon of the application.

Now is a good time to have a new icon, for several reasons.
-The old icon had no real meaning, only legacy value (which, for a kid who sees GCompris for the first time, doesn’t mean anything).
-Tux is already the mascot of a completely different kind of software. Having him along with other FLOSS mascots inside some activities is cool, but he doesn’t represent GCompris enough to be in the icon.
-The Qt port is still in progress, and it makes sense to have a new icon for it.
-With the new graphics in the application, GCompris needs good branding that looks good and makes sense.

Also, as some people said they would like to keep the legacy biplane+Tux, I tried. I spent countless hours trying to make something that looked good, looking at it from every angle. I really couldn’t find a way, and at some point I felt like I was wasting my time.

Full of energy from these failures, I started a new icon from scratch. We recently had a brainstorming topic on the mailing list for a new icon, so I had some indications to begin with: it should convey things like education and gaming, and be colorful and cute.

I’ll spare you all the iterations, but after pages of sketches, several proposals and a lot of constructive discussion on IRC, here is the final result, along with some explanations:

This is the new icon.
The globe is a symbol for the educational part of GCompris. Luckily, it is still linked in a way to the idea of the plane from the previous icon. It is also the same G and orange circle used as the About button in the main menu.
The dice is a symbol for the gaming part of GCompris, and it also represents counting and maths.
I chose the orange color for the globe for several reasons, probably the most important being that it still contains some of the yellow from the previous icon, but warmer. The blue of the dice adds some contrast.

I tweaked it to follow the main guidelines of the Breeze icon design; I like the look this gives it.

This is the new logo with the full name.
It started as a clean-up of the previous one, changing the style and colors of the letters to something soft and colored. Then after making the icon, I added the globe to it, thanks to a suggestion on IRC.

This is a “light” version of the logo, without the globe so it fits better inside a line.

I hope everyone will be happy with this new logo and icon. I know a lot of old-timers had some affection for the plane-and-Tux logo, but if you read what I said above, you can see that this was a well-considered and well-discussed change, with a lot of good reasons behind it.

Again, if you like my work on GCompris, check this link to see how you can support it. Expect a new activity update next month.

February 10, 2016

Launching docs.krita.org: the Krita Learning Place!

For months, we have been working on something awesome: docs.krita.org! And now it’s time to share our work with you. Over the past year, we created a comprehensive manual for Krita on KDE’s Userbase website, and now we’ve got a beautiful new home for our work. All the content has been ported to the new learning area, and we want to extend the content further as well as add more Krita tutorials!

The new and updated docs.krita.org is the place for everyone who wants to use Krita: artists who need a good introduction, painters who want to browse brush packs, or curious sketchers looking for information on what all of the features in Krita do. It is the perfect place to start when learning anything about Krita, and about digital painting in general.

Here are some of the things we’re sure you’ll appreciate:

Better Search Capabilities

[Screenshot: live search]

The docs site has its own search functionality now! The search will pick up not just page titles, but also content. This makes it much easier to find what you are looking for! And the live search bar will also give suggestions as you type.

Improved Navigation

[Screenshot: page tree navigation]

All the content is now organized in a page tree display. You can drill down into the specific areas that you are interested in. The navigation turns into a fixed layout to make it easy to see where you are, and pages include previous- and next-page links to help you move around. Breadcrumbs exist above the title as well; click on them to go up a level, as usual!

[Screenshot: page navigation]

Combined Educational Resources

No more bouncing between different websites for learning. We have moved the User Manual and the FAQ to the learning area. Combined with the live search, this means finding answers to your questions has never been easier! And there’s so much content here already that most common questions are answered, and quite a few esoteric ones as well!

Updated & New Content

We are always trying to update the content, but we spent a little more time on it while working on these updates.

We have a new Unstable section for new features that are being worked on right now, like Animation. When a feature is released in a stable version, we will move its documentation out of the Unstable section.

If you are starting out with Krita, you might have questions like “how do I save an image for the web”, or you might like to see examples of how to use Krita. There are lots of tutorials spread all over the world, created by Krita developers and users. So many that it’s getting difficult to find them, and even more difficult to find them again! For this we created the tutorials section.

And if you’ve used Krita for a while, you’ll have seen that Krita has plenty of features that are unusual, or even unique! Photoshop tutorials won’t help you here! So we created a dedicated area where we can tell you how to use Krita’s advanced features, and where they go beyond what you might have been expecting.

Of course, updating the documentation and education materials for Krita, and keeping them up to date, is a work in progress. It’ll always be a work in progress! But we are really proud of all these improvements. Learning Krita, or getting the most out of Krita, just got a whole lot easier!

Comments are live

With a huge amount of help from Robert Ancell for a lot of the foundations for the new code, I’ve pushed a plugin today to allow anonymous rating of applications.

[Screenshot of anonymous application ratings in GNOME Software]

If people abuse or spam this I’ll take the feature away until we can have OpenID logins in GNOME Online Accounts, but I’m kinda hoping people won’t be evil. The server is live and accepting reviews and votes, but the API isn’t set in stone.

February 09, 2016

sql-migrate slides

I recently gave a small lightning talk about sql-migrate (a SQL Schema migration tool for Go), at the Go developer room at FOSDEM.

Annotated slides can be found here.


February 08, 2016

Anonymous reviews in GNOME Software

Choosing an application to install is hard when there are lots of possible projects matching a specific search term. We already rank applications based on their integration level and on useful metrics like “is it translated in my language”, and this makes sure that high-quality applications are listed near the top of the results. For more information about an application, however, we often want a more balanced view than the PR speak or unfounded claims of the upstream project. This is where user-contributed reviews come in.

[Screenshot: review submission form]

To get a user to contribute a review (which takes time) we need to make the process as easy as possible. Making the user create an account on yet another web service would make this much harder and increase the barrier to participation to the point that very few people would contribute reviews. If anonymous reviewing does not work, the plan is to use some kind of attestation service so that you can use a GMail or Facebook account to confirm your identity. At this point I’m hoping people will just be nice to each other and not abuse the service, although this reviewing facility will go away if it starts being misused.

Designing an anonymous service is hard when you have to be resilient against a socially awkward programmer with specific political ideologies. If you don’t know any people that match this description you have obviously never been subscribed to fedora-devel or memo-list.

Obviously when contacting a web service you share your IP address. This isn’t enough to uniquely identify a machine and user, which we want for the following reasons:

  • Allowing users to retract only their own reviews
  • Stopping users up or down-voting the same review multiple times

A compromise would be to send a hash of two things that identify the user and machine. In GNOME Software we’re using a SHA1 hash of the machine-id and the UNIX username, along with a salt, although this “user_id” is only specified as a string and its format is not checked.

For projects like RHEL, where we care very much what comments are shown to paying customers, we definitely want reviews to be pre-approved and checked before they are shown. For distros like Fedora we don’t have this luxury, so we’re going to rely on the community to self-regulate reviews. Reviews are either up-voted or down-voted according to how useful they are, along with the nuclear option of marking a review as abusive.

[Screenshot: application page with reviews]

By specifying the user’s current locale we can sort the potential application reviews according to a heuristic that we’re still working on. Generally we want to prefer useful reviews in the user’s locale and hide ones that have been marked as abusive, and we also want to indicate the user’s own review so they can remove it later if required. We also want to prioritize reviews for the current application version over those for really old versions of the application.

Comments welcome!

Attack of the Killer Titmouse!

[Juniper titmouse attacking my window] For the last several days, when I go upstairs in mid-morning I often hear a strange sound coming from the bedroom. It's a juniper titmouse energetically attacking the east-facing window.

He calls, most often in threes, as he flutters around the windowsill, sometimes scratching or pecking the window. He'll attack the bottom for a while, moving from one side to the other, then fly up to the top of the window to attack the top corners, then back to the bottom.

For several days I've run down to grab the camera as soon as I saw him, but by the time I get back and get focused, he becomes camera-shy and flies away, and I hear EEE EEE EEE from a nearby tree instead. Later in the day I'll sometimes see him down at the office windows, though never as persistently as upstairs in the morning.

I've suspected he's attacking his reflection (and also assumed he's a "he"), partly because I see him at the east-facing bedroom window in the morning and at the south-facing office window in the early afternoon. But I'm not sure about it, and certainly I hear his call from trees scattered around the yard.

Something I was never sure of, but am now: titmice definitely can raise and lower their crests. I'd never seen one with its crest lowered, but this one flattens his crest while he's in attack mode.

His EEE EEE EEE call isn't very similar to any of the calls listed for juniper titmouse in the Stokes CD set or the Audubon Android app. So when he briefly attacked the window next to my computer yesterday afternoon while I was sitting there, I grabbed a camera and shot a video, hoping to capture the sound. The titmouse didn't exactly cooperate: he chirped a few times, not always in the groups of three he uses so persistently in the morning, and the sound in the video came out terribly noisy; but after some processing in Audacity I managed to edit out some of the noise. And then this morning as I was brushing my teeth, I heard him again, and he was more obliging, giving me a long video of him attacking and yelling at the bedroom window. Here's the Juniper titmouse call as he attacked my window this morning, and the Juniper titmouse call at the office window from yesterday. Today's video is on YouTube: Titmouse attacking the window, but that's without the sound edits, so it's tough to hear him.

(Incidentally, since Audacity has a super confusing user interface and I'm sure I'll need this again, what seemed to work best was to highlight sections that weren't titmouse and use Edit→Delete; then use Effects→Amplify, checking the box for Allow clipping and using Preview to amplify it to the point where the bird is audible. Then find a section that's just noise, no titmouse, select it, run Effects→Noise Reduction and click Get Noise Profile. The window goes away, so click somewhere to un-select, call up Effects→Noise Reduction again and this time click OK.)

I feel a bit sorry for the little titmouse, attacking windows so frenetically. Titmice are cute, excellent birds to have around, and I hope he's saving some energy for attracting a mate who will build a nest here this spring. Meanwhile, he's certainly providing entertainment for me.

February 05, 2016

Updating Debian under a chroot

Debian's Unstable ("Sid") distribution has been terrible lately. They're switching to a version of X that doesn't require root, and apparently the X transition has broken all sorts of things in ways that are hard to fix and there's no ETA for when things might get any better.

And, being Debian, there's no real bug system so you can't just CC yourself on the bug to see when new fixes might be available to try. You just have to wait, try every few days and see if the system has been fixed.

That's hard when the system doesn't work at all. Last week, I was booting into a shell but X wouldn't run, so at least I could pull updates. This week, X starts but the keyboard and mouse don't work at all, making it hard to run an upgrade.

Fortunately, I have an install of Debian stable ("Jessie") on this system as well. When I partition a large disk I always reserve several root partitions so I can try out other Linux distros, and when running the more experimental versions, like Sid, sometimes that's a life saver. So I've been running Jessie while I wait for Sid to get fixed. The only trick is: how can I upgrade my Sid partition while running Jessie, since Sid isn't usable at all?

I have an entry in /etc/fstab that lets me mount my Sid partition easily:

/dev/sda6 /sid ext4 defaults,user,noauto,exec 0 0
So I can type mount /sid as myself, without even needing to be root.

But Debian's apt upgrade tools assume everything will be on /, not on /sid. So I'll need to use chroot /sid (as root) to change the root of the filesystem to /sid. That only affects the shell where I type that command; the rest of my system will still be happily running Jessie.

Mount the special filesystems

That mostly works, but not quite, because I get a lot of errors like permission denied: /dev/null.

/dev/null is a device: you can write to it and the bytes disappear, as if into a black hole except without Hawking radiation. Since /dev is implemented by the kernel and udev, in the chroot it's just an empty directory. And if a program opens /dev/null in the chroot, it might create a regular file there and actually write to it. You wouldn't want that: it eats up disk space and can slow things down a lot.

The way to fix that is before you chroot: mount --bind /dev /sid/dev which will make /sid/dev a mirror of the real /dev. It has to be done before the chroot because inside the chroot, you no longer have access to the running system's /dev.

But there is a different syntax you can use after chrooting:

mount -t proc proc proc/
mount --rbind /sys sys/
mount --rbind /dev dev/

It's a good idea to do this for /proc and /sys as well, and Debian recommends adding /dev/pts (which must be done after you've mounted /dev), even though most of these probably won't come into play during your upgrade.

Mount /boot

Finally, on my multi-boot system, I have one shared /boot partition with kernels for Jessie, Sid and any other distros I have installed on this system. (That's somewhat hard to do using grub2 but easy on Debian, though you may need to turn off auto-update, and Debian is making it harder to use extlinux now.) Anyway, if you have a separate /boot partition, you'll want it mounted in the chroot, in case the update needs to add a new kernel. Since you presumably already have the same /boot mounted on the running system, use mount --bind for that as well.

So here's the final set of commands to run, as root:

mount /sid
mount --bind /proc /sid/proc
mount --bind /sys /sid/sys
mount --bind /dev /sid/dev
mount --bind /dev/pts /sid/dev/pts
mount --bind /boot /sid/boot
chroot /sid

And then you can proceed with your apt-get update, apt-get dist-upgrade etc. When you're finished, you can unmount everything with one command:

umount --recursive /sid

Some helpful background reading:

February 04, 2016

Krita 2.9.11 and the second 3.0 alpha build!

Today, we’re releasing the eleventh bugfix release for Krita 2.9 and the second development preview release of Krita 3.0! We are not planning more bugfix releases for 2.9, though it is possible that we’ll collect enough fixes to warrant one more release, because there are some problems with Windows 10 that we might be able to work around. So, please check closely if you use Krita on Windows 10:

  • You get a black screen: please go to Settings/Configure Krita/Display and disable OpenGL. It turns out that recent Windows updates install new Intel GPU drivers that do not implement all the functionality Krita needs.
  • Pressure sensitivity stops working: a recent update of Windows 10 breaks pressure sensitivity for some people. Please check whether reinstalling the tablet drivers fixes the issue. If not, please close Krita, navigate to your user’s AppData\Roaming folder and rename the krita folder to krita_old. If Krita now shows pressure sensitivity again, please zip up your krita_old folder and send to foundation@krita.org.

And now for the fixes in 2.9.11!

2.9.11 Changelog

  • Fix a memory leak when images are copied to the clipboard
  • Ask the user to wait or break off a long-running operation before saving a document
  • Update to G’Mic 1.6.9-pre
  • Fix rendering of layer styles
  • Fix a possible issue when loading files with clone layers
  • Do not crash if there are monitors with negative numbers
  • Make sure the crop tool always uses the correct image size
  • Fix a crash on closing images while a long-running operation is still working
  • Link to the right JPEG library
  • Fix the application icon
  • Fix switching colors with X after using V to temporarily enable the line tool
  • Fix the unreadable close button on the splash screen when using a light theme
  • Fix the Pencil 2B preset
  • Fix the 16f grayscale colorspace to use the right channel positions
  • Add shortcuts to lower/raise the current layer

Go to the download page to get your updated Krita!

3.0 pre-alpha Changelog

For 3.0, we’ve got a bunch of new features and bug fixes.

There is still one really big issue that we’re working hard on: OSX and the latest Intel GPU drivers break Krita’s OpenGL support badly. On OSX, you will still NOT see the brush outline, symmetry axis, assistants and so on. On Windows, if you have an Intel GPU, the Krita window might turn totally black. There’s no need to report those issues.

  • Shift+R+click on the canvas can now select multiple layers! Use this in combination with the multiple properties editor to rename a bunch of layers quickly, or use Ctrl+G to group them!
  • Improved pop-up palette: the preset icons are now more readable (their size depends on the maximum number of presets set in the general settings).
  • Tons of improvements to the color space browser: the tone curve is now visible, making it easier to find linear spaces; there’s feedback for color look-up table profiles like CMYK; there’s copyright in the info box, as well as possible conversion intents; and overall more of the extra info has moved into the tooltips for a cleaner look. The PNG 16-bit import is also alphabetised.
  • Hotkeys for Hue, Saturate/Desaturate, making a color redder, yellower, bluer or greener, as well as making lighter/darker use luminance where possible. The new hotkeys have no default key and need to be set in the shortcuts editor.:
  • HSI, HSY and YCrCb modes for the HSV/HSL adjustment filter. HSY and YCrCb can use the correct coefficients for most RGB spaces, but the filter isn’t linearisable yet, so it doesn’t give true luminance yet.
  • The color smudge brush can now do subpixel precision in dulling mode.
  • Add progress reporting when Krita saves a .KRA file
  • Fix wheel events in Krita 3.0
  • Sanitize the order of resource and tag loading. This makes startup a bit slower, so ideally we’d like to replace the whole system with something more sophisticated but that won’t happen for 3.0
  • Show more digits in the Memory Reporting popup in the status bar
  • Add a workaround for an assert while loading some weird PSD files
  • BUG:346430: Make sure the crop tool always uses the current image size
  • BUG:357173: Fix the copy constructor of KisSelectionMask
  • BUG:357987: Don’t crash on loading the given file
  • Fix starting Krita without XDG_DATA_PATH set

Source

We recommend building Krita from git, not from the source zip file. Krita for OSX is built from a separate branch.

Windows

Download the zip file. Unzip the zip file where you want to put Krita.

Run the vcredist_x64.exe installer to install Microsoft’s Visual Studio runtime.

Then double-click the krita link.

Known issues on Windows:

  • If the entire window goes black, disable OpenGL for now. It’s a bug in the Intel driver, but we’ve figured out the cause and know how to work around it; we just need to write the fix.

OSX

Download the DMG file and open it. Then drag the krita app bundle to the Applications folder, or any other location you might prefer. Double-click to start Krita.

Known issues on OSX:

  • We built Krita on El Capitan. The bundle is tested to work on a mid-2011 Mac Mini running Mavericks. It looks like you will need hardware capable of running El Capitan, but you do not need El Capitan itself installed: you can try running on an earlier version of OSX.
  • You will not see a brush outline cursor or any other tool that draws on the canvas, for instance the gradient tool. This is known, we’re working on it, it needs the same fix as the black screen you can get with some Intel drivers.

Linux

For the Linux builds we now have AppImages! These are completely distribution-independent. To use the AppImage, download it and make it executable in your terminal or using the file properties dialog of your file manager. Another change is that configuration and custom resources are now stored in the .config/krita.org/kritarc and .local/share/krita.org/ folders of the user home folder, instead of .kde or .kde4.
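
For example, a minimal sketch (the actual filename of the AppImage you downloaded will differ):

$ chmod +x krita-3.0-prealpha-x86_64.appimage
$ ./krita-3.0-prealpha-x86_64.appimage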

Known issues on Linux:

  • Your distribution needs to have Fuse enabled
  • On some distributions or installations, you can only run an AppImage as root, because the Fuse system is locked down. Since an AppImage is a simple ISO, you can still mount it as a loopback device and execute Krita directly using the AppRun executable in the top folder, as sketched below.
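
A minimal sketch of the loopback approach (the mount point and filename here are illustrative):

$ sudo mkdir -p /mnt/krita
$ sudo mount -o loop krita-3.0-prealpha-x86_64.appimage /mnt/krita
$ /mnt/krita/AppRun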

February 03, 2016

darktable 2.0.1 released

we're proud to announce the first bugfix release for the 2.0 series of darktable, 2.0.1!

the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.0.1

as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksums are:

$ sha256sum darktable-2.0.1.tar.xz
4d0e76eb42b95418ab59c17bff8aac660f5348b082aabfb3113607c67e87830b  darktable-2.0.1.tar.xz
$ sha256sum darktable-2.0.1.dmg 
580d1feb356e05d206eb74d7c134f0ffca4202943388147385c5b8466fc1eada  darktable-2.0.1.dmg
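
to verify your download, you can feed the published checksum straight back to sha256sum (it prints "darktable-2.0.1.tar.xz: OK" on success):

$ echo "4d0e76eb42b95418ab59c17bff8aac660f5348b082aabfb3113607c67e87830b  darktable-2.0.1.tar.xz" | sha256sum -c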

and the changelog as compared to 2.0.0 can be found below.

New features:

  • add export variables for Creator, Publisher and Rights from metadata
  • add support for key accels for spot removal iop
  • add some more info to --version
  • add collection sorting by group_id to keep grouped images together
  • add $(IMAGE.BASENAME) to watermark
  • OSX packaging: add darktable-cltest
  • OSX packaging: add darktable-generate-cache

Bugfixes:

  • make sure GTK prefers our CSS over system's
  • make selected label's background color visible
  • make ctrl-t completion popup nicer
  • fixed folder list scrolling to the top on select
  • scale waveform histogram to hidpi screens
  • really hide all panels in slideshow
  • add filename to missing white balance message
  • fix wrong tooltip in print scale
  • changing mask no longer invalidates the filmstrip thumb, making it faster
  • fix calculated image count in a collection
  • don't allow too small sidepanels
  • fixes white balance sliders for some cameras
  • fix some memleaks
  • code hardening in color reconstruction
  • validate noiseprofiles.json on startup
  • no longer lose old export presets
  • fix some crash with wrong history_end
  • don't load images from cameras with CYGM/RGBE CFA for now
  • some fixes in demosaicing
  • fix red/blue interpolation for XTrans
  • fix profiled denoise on OpenCL
  • use sRGB when output/softproof profile is missing
  • fix loading of .hdr files
  • default to libsecret instead of gnome keyring which is no longer supported
  • fix a bug in mask border self intersections
  • don't allow empty strings as mask shape names
  • fix a crash in masks
  • fix an OpenCL crash
  • eliminate deprecated OpenCL compiler options
  • update appdata file to version 0.6
  • allow finding Saxon on Fedora 23

Camera support:

  • Fujifilm XQ2 RAW support
  • support all Panasonic FZ150 crop modes
  • basic support for Nikon 1 V3
  • add defs for Canon CHDK DNG cameras to make noise profiles work

White balance presets:

  • add Nikon D5500
  • add Nikon 1 V3
  • add missing Nikon D810 presets
  • add Fuji X100T

Basecurves:

  • copy X100S to X100T

Noise profiles:

  • fix typo in D5200 profiles to make them work again
  • add Panasonic FZ1000
  • add Nikon D5500
  • add Ricoh GR
  • add Nikon 1 V3
  • add Canon PowerShot S100
  • copy Fuji X100S to X100T

Translations:

  • add Hungarian
  • update German
  • update Swedish
  • update Slovak
  • update Spanish
  • update Dutch
  • update French

February 01, 2016

Interview with Jóhann Örn Geirdal

kiki800

Could you tell us something about yourself?

My name is Jóhann Örn Geirdal and I am a professional artist and a fine art gallery supervisor. I’m from Iceland and currently living in the Reykjavik city area.

Do you paint professionally, as a hobby artist, or both?

I paint digital fine art professionally and it’s definitely my hobby as well.

What genre(s) do you work in?

Everything that gets the job done.

Whose work inspires you most — who are your role models as an artist?

The most important artists to me are Erro, Android Jones, Francis Bacon and Miro.

How and when did you get to try digital painting for the first time?

Back in 2000 I went to a multimedia school to learn to make digital art. Since then I have switched completely to digital media from traditional media.

What makes you choose digital over traditional painting?

Definitely the high level of experimentation and it’s a lot cleaner.

How did you find out about Krita?

It was through the Blender community. The artist David Revoy introduced it.

What was your first impression?

I did not fall in love with it but it was interesting enough to explore more. Now I can’t go back.

What do you love about Krita?

It is simply the best digital art software on the market.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I think it’s on the right track. Just keep going.

What sets Krita apart from the other tools that you use?

It’s the fast development and that the developers are definitely listening to the artists who use it. That is not always the case with other software.

What techniques and brushes do you prefer to use?

I use a lot of custom brushes but I also use default Krita brushes and brushes from other artists.

Where can people see more of your work?

My website is http://www.geirdal.is. There you can see my current work.

Anything else you’d like to share?

I’d like to thank everyone who has made Krita possible and made it this amazing!

January 31, 2016

Setting mouse speed in X

My mouse died recently: the middle button started bouncing, so a middle button click would show up as two clicks instead of one. What a piece of junk -- I only bought that Logitech some ten years ago! (Seriously, I'm pretty amazed how long it lasted, considering it wasn't anything fancy.)

I replaced it with another Logitech, which turned out to be quite difficult to find. Turns out most stores only sell cordless mice these days. Why would I want something that depends on batteries to use every day at my desktop?

But I finally found another basic corded Logitech mouse (at Office Depot). Brought it home and it worked fine, except that the speed was way too fast, much faster than my old mouse. So I needed to find out how to change mouse speed.

X11 has traditionally made it easy to change mouse acceleration, but that wasn't what I wanted. I like my mouse to be fairly linear, not slow to start and then suddenly zippy. There's no X11 property for mouse speed; it turns out that to set mouse speed, you adjust a property called Deceleration.

But first, you need to get the ID for your mouse.

$ xinput list| grep -i mouse
⎜   ↳ Logitech USB Optical Mouse                id=11   [slave  pointer  (2)]

Armed with the ID of 11, we can find the current speed (deceleration) and its ID:

$ xinput list-props 11 | grep Deceleration
        Device Accel Constant Deceleration (259):       3.500000
        Device Accel Adaptive Deceleration (260):       1.000000

Constant deceleration is what I want to set, so I'll use that ID of 259 and set the new deceleration to 2:

$ xinput set-prop 11 259 2

That's fine for doing it once. But what if you want it to happen automatically when you start X? Those constants might all stay the same, but what if they don't?

So let's build a shell pipeline that should work even if the constants change.

First, let's get the mouse ID out of xinput list. We want to pull out the digits immediately following "id=", and nothing else.

$ xinput list | grep Mouse | sed 's/.*id=\([0-9]*\).*/\1/'
11

Save that in a variable (because we'll need to use it more than once) and feed it in to list-props to get the deceleration ID. Then use sed again, in the same way, to pull out just the thing in parentheses following "Deceleration":

$ mouseid=$(xinput list | grep Mouse | sed 's/.*id=\([0-9]*\).*/\1/')
$ xinput list-props $mouseid | grep 'Constant Deceleration'
        Device Accel Constant Deceleration (262):       2.000000
$ xinput list-props $mouseid | grep 'Constant Deceleration' | sed 's/.* Deceleration (\([0-9]*\)).*/\1/'
262

Whew! Now we have a way of getting both the mouse ID and the ID for the "Constant Deceleration" parameter, and we can pass them in to set-prop with our desired value (I'm using 2) tacked onto the end:

$ xinput set-prop $mouseid $(xinput list-props $mouseid | grep 'Constant Deceleration' | sed 's/.* Deceleration (\([0-9]*\)).*/\1/') 2

Add those two lines (setting the mouseid, then the final xinput line) wherever your window manager will run them when you start X. For me, using Openbox, they go in .config/openbox/autostart. And now my mouse will automatically be the speed I want it to be.
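
Putting it all together, the autostart snippet looks something like this (I've broken the property ID out into its own decelid variable for readability; 2 is my preferred speed, so adjust to taste):

mouseid=$(xinput list | grep Mouse | sed 's/.*id=\([0-9]*\).*/\1/')
decelid=$(xinput list-props $mouseid | grep 'Constant Deceleration' | sed 's/.* Deceleration (\([0-9]*\)).*/\1/')
xinput set-prop $mouseid $decelid 2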

Show me the way

If you need further proof that OpenStreetMap is a great project, here’s a very nice near real-time animation of the most recent edits: https://osmlab.github.io/show-me-the-way/


Seen today at FOSDEM, at the stand of the Humanitarian OpenStreetMap team which also deserves attention: https://hotosm.org



January 29, 2016

Rio

Rio UX Design Hackfest from jimmac on Vimeo.

I was really pleased to see Endless, the little company with big plans, initiate a GNOME Design hackfest in Rio.

The ground team in Rio arranged a visit to two locations where we met with the users that Endless is targeting. While not strictly a user testing session, it helped us better understand the context of their product and get a glimpse of life in Rocinha, one of Rio’s famous favelas, and in the more remote, rural Magé. I probably wouldn’t have had a chance to see that side of Brazil otherwise.

Points of diversion

During the workshop at the Endless offices we went through many areas we had identified as problematic in both stock GNOME and Endless OS, and tried to see whether we could converge and cooperate on a common solution. Currently Endless isn’t using the stock GNOME 3 shell for their devices. We aren’t focusing as much on the shell now, as there is a ton of work to be done in the app space, but there are a few areas in the shell we could revisit.

GNOME could do a little better in terms of discoverability. We investigated the role of the app picker versus the window switcher in the overview, and the idea of entering the overview on boot. Some design choices were explained, and our solution was reconsidered as a good way forward for Endless. The unified system menu, window controls, notifications, and lock screen/screen shield were analyzed as well.

Endless demoed how the GNOME app-provided system search has been used to great effect on their mostly offline devices. Think “offline google”.


Another noteworthy detail was the use of CRT screens. The new mini devices sport a cinch connection to old PAL/NTSC CRT TVs. Such small resolutions and poor image quality bring extra constraints on the design to keep things legible. This has also had a nice side effect: Endless has investigated some responsive layout solutions for gtk+, which they demoed.

I also presented the GNOME design team’s workflow and the free software toolchain we use, and did a little demo of Inkscape for icon design and wireframing, and of Blender for motion design.

Last but not least, I’d like to thank the GNOME Foundation for making it possible for me to fly to Rio.

Rio Hackfest Photos

Krita AppImages

Years and years ago, before Krita had even had one single official or unofficial release, we created something called "klik" packages: basically, an ISO that contained Krita and all its dependencies and could be used to run Krita on any Linux distribution. The klik packages were quite hard to maintain and hard-ish to use. Still, it was easier than trying to build rpm's for SuSE, Redhat, Mandrake, debs for Debian, PKG for Slackware and whatever else was out there.

Fast-forward a decade. Despite advances like Launchpad and the OpenSuse OBS, it's still hard to create Krita packages for every distribution. There are more distributions, more versions, more architectures... Just maintaining the Krita Lime PPA for Ubuntu and derivatives takes a serious amount of time. Basically, distributing one's application to Linux users is still a problem.

And if you're working on even a moderately popular application with a moderate development velocity, an application that users rely on to do their job, you really want to provide your work in binary form.

Distributions do a good job combining all the software we free software developers write into distribution releases; distributions really make it easy and convenient to install a wide range of applications. But there is a big mismatch between what users need and what they get:

Most users want a stable, unchanging operating system that they can install and use without upgrading for a couple of years. On top of that, some users don't want to be bothered by desktop upgrades, others cannot live without the latest desktop. That's often a personal preference, or a matter of not caring about the desktop as long as it can launch their work applications. And those work applications, the production tools they use to earn their money with, those need to be the very latest version.

So, Krita users often still use Ubuntu 12.04, the oldest LTS release that's still supported. But Ubuntu doesn't support it by providing the latest productivity applications on top of the stable base, not even through backport PPAs, and if you use the Ubuntu-provided Krita, you're stuck in what now feels like the dark ages.

Enter the spiritual successor of klik: AppImage. AppImages sprang into the limelight when they got Linus Torvalds' seal of approval. That distributing software on Linux is problematic has been a thorn in his side for a long time, particularly once he started working on an end-user application: Subsurface. When the person behind AppImage created a Subsurface package, it resulted in a lot of publicity.

So I contacted Simon to ask for help creating a Krita AppImage. After all, we are in the middle of working up to a 3.0 release, and I'd like to be able to produce regular development builds, not just for Windows and OSX, but also for Linux.

Krita's AppImage is built on CentOS 6.5 using a long bash script. It updates CentOS from the EPEL repository so we get a reasonably recent Qt5, installs an updated compiler, gets Krita, builds the dependencies, builds Krita, checks all the output for its dependencies, copies everything into a tree, edits everything to look for dependencies locally instead of on the system, and packages it up with a small executable that runs the Krita executable. The one thing that was really hard was figuring out how to integrate with the GPU drivers.
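
The dependency-gathering step is conceptually simple; here is a minimal sketch of the idea (the appdir layout and binary path are illustrative assumptions, not the recipe's exact code):

# list the shared libraries the krita binary links against, and copy each into the local tree
APPDIR=./krita.appdir
mkdir -p "$APPDIR/usr/lib"
ldd "$APPDIR/usr/bin/krita" | awk '/=> \// {print $3}' | while read -r lib; do
    cp -v "$lib" "$APPDIR/usr/lib/"
done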

You can get the recipe here: https://github.com/boudewijnrempt/AppImages/blob/master/recipes/krita/Recipe.

There are some refinements possible: AppImage offers a way to update AppImages by downloading and applying only a delta, which we don't support yet. It's possible to set up a system where we can generate nightly builds, but I haven't figured out the combination of Docker, Travis and GitHub that supports that yet, either. And Simon is working on an improved first-run script that would ask the user whether they would like some desktop integration, for instance for file handling or menu integration. All of that is in the future. There are also a handful of distributions that disable fuse by default, or close it to non-root users. Unfortunately, CentOS is one of them...

For now, though, it's really easy to generate binaries that seem to run quite well on a wide variety of Linux distributions, that perform just like native packages (well, the packages are native), and that are easy to download and run. So I'm ready to declare: problem solved!

January 26, 2016

HDR Photography with Free Software (LuminanceHDR)



A first approach to creating and mapping HDR images

I have a mostly love/hate relationship with HDR images (well, with tonemapped HDRs more than the HDR images themselves). I think the problem is that it’s very easy to create really bad HDR images that the photographer thinks look really good. I know because I’ve been there:

Hayleys - Mobile, AL Don’t judge me, it was a weird time in my life…

The best term I’ve heard used to describe over-processed images created from an HDR is “clown vomit” (which would also be a great name for a band, by the way). They are easily spotted with some tell-tale signs such as the halos at high-contrast edges, the unrealistically hyper-saturated colors that make your eyes bleed, and a general affront to good taste. In fact, while I’m putting up embarrassing images that I’ve done in the past, here’s one that scores on all the points for a crappy image from an HDR:

Tractor “My Eyes! The goggles do nothing!”

Crap-tastic! Of course, the allure here is that it provides first-timers a glimpse into something new, and they feel the desire to crank every setting up to 11 with no regard for good taste or aesthetics.

If you take anything away from this post, let it be this: “Turn it DOWN. If it looks good to you, then it’s too much.” ;)

HDR lightprobes are used in movie FX compositing to ensure that the lighting on CG models matches the lighting of the live-action scene exactly.

I originally learned about, and used, HDR images when I would use them to illuminate a scene in Blender. In fact, I will still often use Paul Debevec’s Uffizi gallery lightprobe to light scene renders in Blender today.

For example, you may be able to record 10-12 stops of light information using a modern camera. Some old films could record 12-13 stops, while your eyes can see approximately 14 stops.

HDR images are intended to capture more than this number of stops. (Depending on your patience, significantly more in some cases).

I can go on a bit about the technical aspects of HDR imaging, but I won’t. It’s boring. Plus, I’m sure you can use Wikipedia, or Google yourselves. :) In the end, just realize that an HDR image is simply one where there is a greater amount of light information being stored than is able to be captured by your camera sensor in one shot.

Taking an HDR image(s)

More light information than my camera can record in one shot?
Then how do I take an HDR photo?

You don’t.

You take multiple photos of a scene, and combine them to create the final HDR image. Before I get into the process of capturing these photos to create an HDR with, consider something:

When/Why to use HDR

An HDR image is most useful to you when the scene you want to capture has bright and dark areas that fall outside the range of a single exposure, and you feel that there is something important enough outside that range to include in your final image.

That last part is important, because sometimes it’s OK to have some of your photo be too dark for details (or too light). This is an aesthetic decision of course, but keep it in mind…

Here’s what happens. Say you have a pretty scene you would like to photograph. Maybe it’s the Lower Chapel of Sainte Chapelle:

Sainte Chapelle Lower Chapel Sainte Chapelle Lower Chapel by iwillbehomesoon on Flickr (cbsna)

You may set up to take the shot, but when you are setting your exposure you may run into a problem: exposing for the brighter parts of the image means that the shadows fall to black too quickly, crushing out the details there.

If you expose for the shadows, then the brighter parts of the image quickly clip beyond white.

The use case for an HDR is when you can’t find a happy medium between those two exposures.

A similar situation comes up when you want to shoot any ground details against a bright sky, but you want to keep the details in both. Have a look at this example:

HDR Layers by dontmindme, on Flickr HDR Layers by dontmindme, on Flickr (cbna)

In the first column, if you expose for the ground, the sky blows out.

In the second, you can drop the exposure to bring the sky in a bit, but the ground is getting too dark.

In the third, the sky is exposed nicely, but the ground has gone to mostly black.

If you wanted to keep the details in the sky and ground at the same time, you might use an HDR (you could technically also use exposure blending with just a couple of exposures and blend them by hand, but I digress) to arrive at the last column.

Shooting Images for an HDR

Many cameras have an auto-bracketing feature that will let you quickly shoot a number of photos while changing the exposure value (EV) of each. You can also do this by hand simply by changing one parameter of your exposure each time.

You can technically change any of ISO, shutter speed, or aperture to modify the exposure, but I’d recommend you change only the shutter speed (or EV value when in Aperture Priority modes).

The reason is that changing the shutter speed will not alter the depth-of-field (DoF) of your view or introduce any extra noise the way changing the aperture or ISO would.

When considering your scene, you will also want to try to stick to static scenes if possible. The reason is that objects that move around (swaying trees, people, cars, fast moving clouds, etc.) could end up as ghosts or mis-alignments in your final image. So as you’re starting out, choose your scene to help you achieve success.

Set up your camera someplace very steady (like a tripod), dial in your exposure and take a shot. If you let your camera meter your scene for you then this is a good middle starting point.

For example, if you set up your camera and meter your scene, it might report a 1/160 second exposure. This is our starting point (0EV).

The base exposure, 1/160 s, 0EV

To capture the lower values, just cut your shutter speed in half (1/80 second, +1EV), and take a photo. Repeat if you’d like (1/40 second, +2EV).

1/80 second, +1EV (left), 1/40 second, +2EV (right)

To capture the upper values, just double your starting point shutter speed (1/320, -1EV) and take a photo. Repeat if you’d like again (1/640, -2EV).

1/320, -1EV (left), 1/640, -2EV (right)

This will give you 5 images covering a range of -2EV to +2EV:

Shutter Speed   Exposure Value
1/640           -2EV
1/320           -1EV
1/160            0EV
1/80            +1EV
1/40            +2EV

Your values don’t have to be exactly 1EV apart each time; LuminanceHDR is usually smart enough to figure out what’s going on from the EXIF data in your images. I chose full EV stops here to simplify the example.
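
If you want to sanity-check the bracketing arithmetic, a quick shell loop will do it (1/160 s is the metered base from the example above):

$ base=160
$ for ev in -2 -1 0 +1 +2; do awk -v b=$base -v e=$ev 'BEGIN { printf "%+dEV: 1/%g s\n", e, b / 2^e }'; done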

So armed with your images, it’s time to turn them into an HDR image!

Creating an HDR Image

You kids have it too easy these days. We used to have to bring all the images into Hugin and align them before we could save an hdr/exr file. Nowadays you’ve got a phenomenal piece of Free/Open Source Software to handle this for you:

LuminanceHDR
(Previously qtpfsgui. Seriously.)

After installing it, open it up and hit “New HDR Image“:

LuminanceHDR startup screen

This will open up the “HDR Creation Wizard” that will walk you through the steps of creating the HDR. The splash screen notes a couple of constraints.

LuminanceHDR wizard splash screen

On the next screen, you’ll be able to load up all of the images in your stack. Just hit the big green “+“ button in the middle, and choose all of your images:

LuminanceHDR load wizard

LuminanceHDR will load up each of your files and investigate them to try to determine the EV values for each one. It usually does a good job of this on its own, but if there’s a problem you can always manually specify the actual EV value for each image.

Also notice that because I only adjusted my shutter speed by half or double each time, the relative EV values are neatly spaced 1EV apart. They don’t have to be, though. I could have just as easily done ½ EV or ⅓ EV steps as well.

LuminanceHDR creation wizard

If there is even the remotest question about how well your images will line up, I’d recommend that you check the box for “Autoalign images” and let Hugin’s align_image_stack do its magic. You really need all of your images to line up perfectly for the best results.

Hit “Next”, and if you are aligning the images, be patient: Hugin’s align_image_stack will find control points between the images and remap them so they are all aligned. When it’s done you’ll be presented with some editing tools to tweak the final result before the HDR is created.
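
Incidentally, if you have Hugin installed you can also run align_image_stack by hand. A minimal invocation might look like this (the filenames are illustrative, and these are common flags rather than necessarily the exact ones LuminanceHDR passes):

$ align_image_stack -v -a aligned_ img_0EV.tif img_plus1EV.tif img_minus1EV.tif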

LuminanceHDR Creation Wizard

You are basically looking at a difference view between images in your stack at the moment. You can choose which two images to difference compare by choosing them in the list on the left. You can now shift an image horizontally/vertically if it’s needed, or even generate a ghosting mask (a mask to handle portions of an image where objects may have shifted between frames).

If you are careful, and there’s not much movement in your image stacks, then you can safely click through this screen. Hit the “Next“ button.

LuminanceHDR Creation Wizard

This is the final screen of the HDR Creation Wizard. There are a few different ways to calculate the pixel values that make up an HDR image, and this is where you can choose which ones to use. For the most part, people far smarter than I am have looked at a bunch of creation methods and created the predefined profiles. Unless you know what you’re doing, I would stick with those.

Hit “Finish“, and you’re all done!

You’ll now be presented with your HDR image in LuminanceHDR, ready to be tonemapped so us mere mortals can actually make sense of the HDR values present in the image. At this point, I would hit the “Save As…” button, and save your work.

LuminanceHDR Main

Tonemapping the HDR

So now you’ve got an HDR image. Congratulations!

The problem is, you can’t really view it with your puny little monitor.

The reason is that the HDRi now contains more information than can be represented within the limited range of your monitor (and eyeballs, likely). So we need to find a way to represent all of that extra light-goodness so that we can actually view it on our monitors. This is where tonemapping comes in.

We basically have to take our HDRi and use a method for compressing all of that radiance data down into something we can view on our monitors/prints/eyeballs. We need to create a Low Dynamic Range (LDR) image from our HDR.

Yes - we just went through all the trouble of stacking together a bunch of LDR images to create the HDRi, and now we’re going back to LDR? We are - but this time we are armed with way more radiance data than we had to begin with!

The question is, how do we represent all that extra data in an LDR? Well, there are quite a few different ways. LuminanceHDR provides nine different tonemapping operators (TMOs) to represent your HDRi as an LDR image:

Just a small reminder, there’s a ton of math involved in how to map these values to an LDR image. I’m going to skip the math. The references are out there if you want them.

I’ll try to give examples of each of the operators below, and a little comment here and there. If you want more information, you can always check out the list on the Open Source Photography wikidot page.

Before we get started, let’s have a look at the window we’ll be working in:

LuminanceHDR Main Window

Tonemap is the section where you can choose which TMO you want to use, and will expose the various parameters you can change for each TMO. This is the section you will likely be spending most of your time, tweaking the settings for whichever TMO you decide to play with.

Process gives you two things you’ll want to adjust. The first is the size of the output that you want to create (Result Size). While you are trying things out and dialing in settings you’ll probably want to use a smaller size here (some operators will take a while to run against the full resolution image). The second is any pre-gamma you want to apply to the image. I’ll talk about this setting a bit later on.

Oh, and this section also has the “Tonemap” button to apply your settings and generate a preview. I’ll also usually keep the “Update current LDR” checked while I rough in parameters. When I’m fine-tuning I may uncheck this (it will create a new image every time you hit the “Tonemap” button).

Results are shown in this big center section of the window. The result will be whatever Result Size you set in the previous section.

Previews are automatically generated and shown in this column for each of the TMOs. If you click on one, it will automatically apply that TMO to your image and display it (at a reduced resolution - I think the default is 400px, but you can change it if you want). It’s a nice way to quickly get an overview of what all the different TMOs are doing to your image.

Ok, with that out of the way, let’s dive into the TMOs and have a look at what we can do. I’m going to try to aim for a reasonably realistic output here that (hopefully) won’t make your eyeballs bleed. No promises, though.

Need an HDR to follow along? I figured it might be more fun (easier?) to follow along if you had the same file I do.
So here it is, don’t say I never gave you anything (This hdr is licensed cc-by-sa-nc by me):
Download from Google Drive (41MB .hdr)

Another note - all of the operators can have their results tweaked by modifying the pre-gamma value ahead of time. This is applied to the image before the TMO runs, and will make a difference in the final output. Usually pushing the pre-gamma value down will increase contrast/brightness in the image, while increasing it will do the opposite. I find it better to start with pre-gamma set to 1 as I experiment; just remember that it is another factor you can use to modify your final result.

Mantiuk ‘06

I’m starting with this one because it’s the first in the list of TMOs. Let’s see what the defaults from this operator look like against our base HDRi:

Mantiuk 06 default Default Mantiuk ‘06 applied

By default Mantiuk ‘06 produces a muted color result that seems pleasing to my eye. Overall the image feels like it’s almost “dirty” or “gritty” with these results. The default settings produce a bit of extra local contrast boosting as well.

Let’s see what the parameters do to our image.

Contrast Factor

The default factor is 0.10.

Pushing this value down to as low as 0.01 produces just a slight increase in contrast across the image from the default. Not that much overall.

Pushing this value up, though, will tone down the contrast overall. I think this helps to add some moderation to the image, as hard contrasts can be jarring to the eyes sometimes. Here is the image with only the Contrast Factor pushed up to 0.40:

Mantiuk 06 Contrast Factor 0.4 Mantiuk ‘06 - Contrast Factor increased to 0.40
(click to compare to defaults)

Saturation Factor

The default value is 0.80.

This factor just scales the saturation in the image, and behaves as expected. If you find the colors a bit muted using this TMO, you can bump this value a bit (don’t get crazy). For example, here is the Saturation Factor bumped to 1.10:

Mantiuk 06 Saturation 1.10 Mantiuk ‘06 - Saturation Factor increased to 1.10
(click to compare to defaults)

Of course, you can also go the other way if you want to mute the colors a bit more:

Mantiuk 06 Saturation 0.40 Mantiuk ‘06 - Saturation Factor decreased to 0.40
(click to compare to defaults)

Detail Factor

The default is 1.0.

The Detail Factor appears to control local contrast intensity. It gets overpowering very quickly, so make small movements here (if at all). Here is what pushing the Detail Factor up to 10.0 produces:

Mantiuk 06 Detail Factor Don’t do this. Mantiuk ‘06 - Detail Factor increased to 10.0
(click to compare to defaults)

Contrast Equalization

This is supposed to equalize the contrast if there are heavy swings of light/dark across the image on a global scale, but in my example did little to the image (other than a strange lightening in the upper left corner).

My Final Version

I played a bit starting from the defaults. First I wanted to push down the contrast a bit to make everything just a bit more realistic, so I pushed Contrast Factor up to 0.30. I slightly bumped the Saturation Factor to 0.95 as well.

I liked the textures of the tree and house, so I wanted to bring those back up a bit after decreasing the Contrast Factor, so I pushed the Detail Factor up to 5.0.

Here is what I ended up with in the end:

Mantiuk 06 Final Result My final output (Contrast 0.3, Saturation 0.95, Detail 5.0)
(click to compare to defaults)

Mantiuk ‘08

Mantiuk ‘08 is a global contrast TMO (for comparison, Mantiuk ‘06 uses local contrast heavily). Being a global operator, it’s very quick to apply.

Mantiuk 08 default Default Mantiuk ‘08 applied

As you can see, the effect of this TMO is to compress the dynamic range into an LDR output using a function that operates across the entire image globally. This will produce a more realistic result I think, overall.

The default output is not bad at all, where brights seem appropriately bright, and darks are dark while still retaining details. It does feel like the resulting output is a little over-sharp to my eye, however.

There are only a couple of parameters for this TMO (unless you specifically override the Luminance Level with the checkbox, Mantiuk ‘08 will automatically adjust it for you):

Predefined Display

There are options for LCD Office, LCD, LCD Bright, and CRT but they didn’t seem to make any difference in my final output at all.

Color Saturation

The default is 1.0.

Color Saturation operates exactly how you’d expect. Dropping this value decreases the saturation, and vice versa. Here’s a version with the Color Saturation bumped to 1.50:

Mantiuk ‘08 - Color Saturation increased to 1.50
(click to compare to defaults)

Contrast Enhancement

The default value is 1.0.

This will affect the global contrast across the image. The default seemed to have a bit too much contrast, so it’s worth it to dial this value in. For instance, here is the Contrast Enhancement dialed down to 0.51:

Mantiuk 08 Contrast Enhancement 0.51 Mantiuk ‘08 - Contrast Enhancement decreased to 0.51
(click to compare to defaults)

Compared to the default settings I feel like this operator can work better if the contrast is turned down just a bit to make it all a little less harsh.

Enable Luminance Level

This checkbox/slider allows you to manually specify the luminance level in the image. The problem I ran into was that with this enabled, I couldn’t adjust the luminance far enough to keep bright areas in the image from blowing out. If I left the default behavior of automatically adjusting luminance in place, it kept things more under control.

My Final Version

Starting from the defaults, I pushed down the Contrast Enhancement to 0.61 to even out the overall contrast. I bumped the Color Saturation to 1.10 to bring out the colors a bit more as well.

I also dropped the pre-gamma correction to 0.91 in order to bring back some of the contrast lost from the Contrast Enhancement.

Mantiuk 08 final result My final Mantiuk ‘08 output
(pre-gamma 0.91, Contrast Enhancement 0.61, Color Saturation 1.10)
(click to compare to defaults)

Fattal

Crap. Time for this TMO I guess…

THIS is the TMO responsible for some of the greatest sins of HDR images. Did you see the first two images in this post? Those were Fattal. The problem is that it’s really easy to get stupid with this TMO.

Fattal (like the other local contrast operators) is dependent on the final output size of the image. When testing this operator, do it at the full resolution you will want to export. The results will not match up if you change size. I’m also going to focus on using only the newer v.2.3.0 version, not the old one.

Here is what the default values look like on our image:

Fattal default Default Fattal applied

The defaults are pretty contrasty, and the color seems saturated quite a bit as well. Maybe we can get something useful out of this operator. Let’s have a look at the parameters.

Alpha

The default is 1.00.

This parameter is supposed to be a threshold against which to apply the effect. According to the wikidot, decreasing this value should increase the level of details in the output and vice versa. Here is an example with the Alpha turned down to 0.25:

Fattal - Alpha decreased to 0.25
(click to compare to defaults)

Increasing the Alpha value seems to darken the image a bit as well.

Beta

The default value is 0.90.

This parameter is supposed to control the amount of the algorithm applied to the image. A value of 1 has no effect on the image (a straight gamma=1 mapping), and lower values increase the amount of the effect. Recommended values are between 0.8 and 0.9. As the values get lower, the image gets more cartoonish looking.

Here is an example with Beta dropped down to 0.75:

Fattal Beta 0.75 Fattal - Beta decreased to 0.75
(click to compare to defaults)

Color Saturation

The default value is 1.0.

This parameter does exactly what’s described. Nothing interesting to see here.

Noise Reduction

The default value is 0.

This should suppress fine detail noise from being picked up by the algorithm for enhancement. I’ve noticed that it will slightly affect the image brightness as well. Fine details may be lost if this value is too high. Here the Noise Reduction has been turned up to 0.15:

Fattal NR 0.15 Fattal - Noise Reduction increased to 0.15
(click to compare to defaults)

My Final Version

This TMO is sensitive to changes in its parameters. Small changes can swing the results far, so proceed lightly.

I increased the Noise Reduction a little bit up front, which lightened up the image. Then I dropped the Beta value to let the algorithm work to brighten up the image even further. To offset the increase, I pushed Alpha up a bit to keep the local contrasts from getting too harsh. A few minutes of adjustments yielded this:

Fattal Final Result My Fattal output - Alpha 1.07, Beta 0.86, Saturation 0.7, Noise red. 0.02
(click to compare to defaults)

Overall, Fattal can be easily abused. Don’t abuse the Fattal TMO. If you find your values sliding too far outside of the norm, step away from your computer, get a coffee, take a walk, then come back and see if it still hurts your eyes.

Drago

Drago is another of the global TMOs. It also has just one control: bias.

Here is what the default values produce:

Default Drago applied

The default values produced a very washed out appearance to the image. The black points are heavily lifted, resulting in a muddy gray in dark areas.

Bias is the only parameter for this operator. The default value is 0.85. Decreasing this value will lighten the image significantly, while increasing it will darken it. For my image, even pushing the Bias value all the way up to 1.0 only produced marginal results:

Drago Bias 1.0 Drago - Bias 1.0
(click to compare to defaults)

Even at this level the image still appears very washed out. The only other parameter to change would be the pre-gamma before the TMO can operate. After adjusting values for a bit, I settled on a pre-gamma of 0.67 in addition to the Bias being set to 1:

My Final Version

Drago final result My result: Drago - Bias 1.0, pre-gamma 0.67
(click to compare to defaults)

Durand

Most of the older documentation/posts that I can find describe Durand as the most realistic of the TMOs, yielding good results that do not appear overly processed.

Indeed the default settings immediately look reasonably natural, though it does exhibit a bit of blowing out in very bright areas - which I imagine can be fixed by adjustment of the correct parameters. Here is the default Durand output:

Default Durand applied

There are three parameters that can be adjusted for this TMO, let’s have a look:

Base Contrast

The default is 5.00.

This value is considered a little high by most sources I’ve read, which usually recommend dropping it to the 3-4 range. Here is the image with the Base Contrast dropped to 3.5:

Durand Base Contrast 3.5 Durand - Base Contrast decreased to 3.5
(click to compare to defaults)

The Base Contrast does appear to drop the contrast in the image, but it also drops the blown-out high values on the house to more reasonable levels.

Spatial Kernel Sigma

The default value is 2.00.

This parameter seems to produce a change to contrast in the image. Large value swings are required to notice some changes, depending on the other parameter values. Pushing the value up to 65.00 looks like this:

Durand Spatial Kernel 65.00 Durand - Spatial Kernel Sigma increased to 65.00
(click to compare to defaults)

Range Kernel Sigma

The default value is 2.00.

My limited testing shows that this parameter doesn’t quite operate correctly. Changes will not modify the output image until you reach a certain threshold in the upper bounds, where it will overexpose the image. I am assuming there is a bug in the implementation, but I will have to test further before filing a bug report.

My Final Version

In experimenting I found that pre-gamma adjustments can affect the saturation in the output image. Pushing pre-gamma down a bit will increase the saturation.

Durand final result My Durand results - pre-gamma 0.88, Contrast 3.6, Spatial Sigma 5.00
(click to compare to defaults)

I pulled the Base Contrast back to keep the sides of the house from blowing out. Once I had done that, I also dropped the pre-gamma to 0.88 to bump the saturation slightly in the colors. A slight boost to Spatial Kernel Sigma let me increase local contrasts slightly as well.

Finally, I used the Adjust Levels dialog to modify the levels slightly by raising the black point a small amount (hey - I’m the one writing about all these #@$%ing operators, I deserve a chance to cheat a little).

Reinhard ‘02

This is supposed to be another very natural looking operator. The initial default result looks good with medium-low contrast and nothing blowing out immediately:

Default Reinhard ‘02 applied

Even though many parameters are listed, they don’t really appear to make a difference, at least with my test HDR. Even worse, attempting to use the “Use Scales” option usually just crashes my LuminanceHDR session.

Key Value

The default is 0.18.

This appears to be the only parameter that does anything to my image at the moment. Increasing it will increase the brightness of the image, and decreasing it will darken the image.

Here is the image with Key Value turned down to 0.05:

Reinhard 02 Key Value 0.05 Reinhard ‘02 - Key Value 0.05
(click to compare to defaults)

Phi

The default is 1.00.

This parameter does not appear to have any effect on my image.

Use Scales

Turning this option on currently crashes my session in LuminanceHDR.

My Final Version

I started by setting the Key Value very low (0.01), and adjusted it up slowly until I got the highlights about where I wanted them. Due to this being the only parameter that modified the image, I then started adjusting pre-gamma up until I got to roughly the exposure I thought looked best (1.09).

Reinhard 02 final result Final Reinhard ‘02 version - Key Value 0.09, pre-gamma 1.09
(click to compare to defaults)

Reinhard ‘05

Reinhard ‘05 is supposed to be another more ‘natural’ looking TMO, and also operates globally on the image. The default settings produce an image that looks under-exposed and very saturated:

Default Reinhard ‘05 applied

There are three parameters for this TMO that can be adjusted.

Brightness

The default value is -10.00.

Interestingly, pushing this parameter down (all the way to its lowest setting, -20) did not darken my image at all. Pulling it up, however, did increase the brightness overall. Here the brightness is increased to -2.00:

Reinhard 05 brightness -2.00 Reinhard ‘05 - Brightness increased to -2.00
(click to compare to defaults)

Chromatic Adaptation

The default is 0.00.

This parameter appears to affect the saturation in the image. Increasing it desaturates the results, which is fine given that the default value of 0.00 shows a fairly saturated image to begin with. Here is the Chromatic Adaptation turned up to 0.60:

Reinhard 05 chromatic adaptation 0.6 Reinhard ‘05 - Chromatic Adaptation increased to 0.6
(click to compare to defaults)

Light Adaptation

The default is 1.00.

This parameter modifies the global contrast in the final output. It starts at the maximum of 1.00, and decreasing this value will increase the contrast in the image. Pushing the value down to 0.5 does this to the test image:

Reinhard 05 light adaptation 0.50 Reinhard ‘05 - Light Adaptation decreased to 0.50
(click to compare to defaults)

My Final Version

Reinhard 05 final result My Reinhard ‘05 - Brightness -5.00, Chromatic Adapt. 0.60, Light Adapt. 0.75
(click to compare to defaults)

Starting from the defaults, I raised the Brightness to -5.00 to lift the darker areas of the image, while keeping an eye on the highlights to keep them from blowing out. I then decreased the Light Adaptation until the scene had a reasonable amount of contrast without becoming overpowering to 0.75. At that point I turned up the Chromatic Adaptation to reduce the saturation in the image to be more realistic, and finished at 0.60.

Ashikhmin

This TMO has little in the way of controls - just options for two different equations that can be used, and a slider. The default (Eqn. 2) image is very dark and heavily saturated:

Ashikhmin default Default Ashikhmin applied

There is a checkbox option for using a “Simple” method (that produces identical results regardless of which Eqn is checked - I’m thinking it doesn’t use that information).

Simple

Checking the Simple checkbox removes any control over the image parameters, and yields this image:

Ashikhmin simple Ashikhmin - Simple
(click to compare to defaults)

Fairly saturated, but exposed reasonably well. It lacks some contrast, but the tones are all there. This result could use some further massaging to knock down the saturation and to bump the contrast slightly (or adjust pre-gamma).

Equation 4

This is the result of choosing Equation 4 instead:

Ashikhmin equation 4 Ashikhmin - Equation 4
(click to compare to defaults)

There is a large loss of local contrast details in the scene, and some of the edges appear very soft. Overall the exposure remains very similar.

Local Contrast Threshold

The default value is 0.50.

This parameter modifies the local contrast being applied to the image. The result will be different depending on which Equation is being used.

Here is Equation 2 with the Local Contrast Threshold reduced to 0.20:

Ashikhmin eqn 2 local contrast 0.20 Ashikhmin - Eqn 2, Local Contrast Threshold 0.20
(click to compare to defaults)

Lower values will decrease the amount of local contrast in the final output.

Equation 4 with Local Contrast Threshold reduced to 0.20:

Ashikhmin eqn 4 local contrast 0.20 Ashikhmin - Eqn 4, Local Contrast Threshold 0.20
(click to compare to defaults)

My Final Version

After playing with the options, I feel the best overall version comes from just using the Simple option. Further tweaking may be necessary to get usable results beyond this.

Pattanaik

This TMO appears to attempt to mimic the behavior of human eyes with the inclusion of terminology like “Rod” and “Cone”. There are quite a few different parameters to adjust if wanted. The default TMO results in an image like this:

Default Pattanaik applied

The default results are very desaturated and tend to blow out in the highlights. The dark areas appear well exposed, with the problems (in my test HDR) mostly constrained to the highlights in this example. At first glance, the results look like something that could be worked with.

There are quite a few different parameters for this TMO. Let’s have a look at them:

Multiplier

The default value is 1.00.

This parameter appears to modify the overall contrast in the image. Decreasing the value will decrease contrast, and vice versa. It also appears to slightly modify the brightness of the image, pushing the highlights to a less blown-out value. Here is the Multiplier decreased to 0.03:

Pattanaik multiplier 0.03 Pattanaik - Multiplier 0.03
(click to compare to defaults)

Local Tone Mapping

This parameter is just a checkbox, with no controls. The result is a washed out image with heavy local contrast adjustments:

Pattanaik local tone mapping Pattanaik - Local Tone Mapping
(click to compare to defaults)

Cone/Rod Levels

The default is to have Auto Cone/Rod checked, greying out the options to change the parameters manually.

Turning off Auto Cone/Rod will get the default manual values of 0.50 for both applied:

Pattanaik manual cone/rod 0.5 each Pattanaik - Manual Cone/Rod (0.50 for each)
(click to compare to defaults)

The image gets very blown out everywhere, and modification of the Cone/Rod values does not significantly reduce brightness across the image.

My Final Version

Starting with the defaults, I reduced the Multiplier to bring the highlights under control. This reduced contrast and saturation in the image.

Pattanaik final result My final Pattanaik - Multiplier 0.03, pre-gamma 0.91
(click to compare to defaults)

To bring back contrast and some saturation, I decreased the pre-gamma to 0.91. The results are not too far off the default settings, but they could still use some further help with global contrast and saturation, and might benefit from layering or modifications in GIMP.

Closing Thoughts

Looking through all of the results shows just how differently each TMO operates on the same image. Here are all of the final results in a single image:

I personally like the results from Mantiuk ‘06. The problem is that it’s still a little more extreme than I would care for in a final result. For a really good, realistic result that I think can be massaged into a great image, I would go to Mantiuk ‘08 or Reinhard.

I could also do something with Fattal, but would have to tone a few things down a bit.

While you’re working, remember to occasionally open up the Levels Adjustment to keep an eye on the histogram. Look for highlights blowing out, and shadows becoming too murky. All the normal rules of image processing still apply here - so use them!

You’re trying to use HDR as a tool for you to capture more information, but remember to still keep it looking realistic. If you’re new to HDR processing, then I can’t recommend enough to stop occasionally, get away from the monitor, and come back to look at your progress.

If it hurts your eyes, dial it all back. Heck, if you think it looks good, still dial it back.

If I can head off even one clown-vomit image, then I’ll consider my mission accomplished with this post.

A Couple of Further Resources

Here’s a few things I’ve found scattered around the internet if you want to read more.

We also have a sub-category on the forums dedicated entirely to LuminanceHDR and HDR processing in general: https://discuss.pixls.us/c/software/luminancehdr.

This tutorial was originally published here.

January 25, 2016

AppData and the gettext domain

When users are searching for software in GNOME Software it is very important to answer the question “Is this localized in my language?” If you can only speak Swedish then an application talking just in American English is not much use at all. The way we calculate this in the AppStream builder is to look at the compiled .mo files, breaking them apart and then using statistics to work out what locales are included.

When we’re processing distro packages we usually extract them one at a time. We first try for a gettext domain (the .mo file name) matching the distro package name, and if that’s not found then we just try to find the first .mo file in any of the locale directories. This works about 70% of the time (which is good) but fails about 30% of the time (which is bad). For xdg-app we build the application in a special prefix, along with any dependent libraries. We don’t have a distro package name for the bundle (only the application ID), and so the “first .mo file we can find” heuristic fails more often than it works. We clearly need some more information about the gettext domain from the upstream project.
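
Roughly speaking, the current heuristic amounts to something like this shell sketch (the package name is illustrative):

$ # first try a .mo named after the distro package, then fall back to the first .mo found
$ find usr/share/locale -name 'mypackage.mo' | head -n 1
$ find usr/share/locale -name '*.mo' | head -n 1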

AppData to the rescue. Adding a <translation> tag to the AppData file informs the AppStream generation code in the xdg-app builder which gettext domain to use for an application. To use this you just need to add:

  <translation type="gettext">the_gettext_domain_here</translation>

under the <component> tag. The gettext domain is normally set in the configure.ac file with the GETTEXT_PACKAGE define. If you don’t have this extra data in your application then appstream-util validate is soon going to fail, and your application isn’t going to get the language metadata, and so will rank lower in the search results for users running GNOME Software in a non-C locale. If your GNOME application is available in jhbuild, the good news is that I semi-automatically added the <translation> tag to 104 projects today. For XFCE and KDE I’m going to be sending emails to the development mailing lists tomorrow. For all other applications I’m going to be using the <update_contact> email address set in the AppData file for another mass-emailing.
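
Once the tag is in place, you can check your work locally before pushing (the path is illustrative):

$ appstream-util validate data/myapp.appdata.xml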

Although it seems I’m asking people to do more things again and again, I can assure you that we’re slowly putting the foundations in place for an awesome software installer experience. Just today I merged the xdg-app branch into gnome-software, and so I’m hoping to have a per-user xdg-app preview available in Fedora 24. Exciting times. :)

Kicking off 2016 — the first Krita Sprint

This weekend, we had our place full of hackers again. The Calligra Text Layout Sprint coincided with the Krita 2016 Kick-Off Sprint. Over the course of the sprint, which started on Wednesday, we had two newcomers to KDE-related hacking sprints, and during the weekend, we had an unusual situation for free software: actual gender parity.

When the Calligra people were asked whether their sprint was a success, the answer Camilla gave was an unqualified “Ja!”. The main topic was sections and columns. There was also a lot of knowledge transfer, and fixing of the Okular plugin that’s part of Calligra. And Jos gave a sneak preview of his FOSDEM presentations, not at all hindered by the noise the Krita hackers were making.

As for Krita, we started on Wednesday already, and first discussed cleaning up our source tree. We recently had a patch by a new contributor, who commented that, compared to some other projects he’d hacked on, our codebase is an exemplar of clarity… But it can and should be improved: we still have lots of legacy from the Calligra days, so we’re moving things around to make the structure more logical.

Next was OpenGL. One of the promises of Qt4 was that QPainter would work on an OpenGL surface. Back in the Krita 1.x days, the Qt3 days, we already had a QPainter canvas and an OpenGL canvas. Both canvases needed separate code for brush outlines, tool helplines and so on. When we ported to Qt4, we unified that code. Painting the image on the canvas was still different for the OpenGL and QPainter canvases, but what we call the tool and canvas decorations, that was all unified.

However, the OpenGL QPainter engine has never been really maintained and got stuck in the OpenGL v2 days. That is maybe sufficient for some mobile applications, but not for a desktop application like Krita, which needs several OpenGL 3.2 features to function correctly. That wasn’t a problem until we decided to make a serious effort of porting to OSX. The OpenGL QPainter engine, being OpenGL v2, can only work in an OpenGL 3.2 context if the compatibility profile is set: it needs all those deprecated things.

Apple decided that nobody would need that, and offers only the Core Profile. That sucks. That means the OpenGL QPainter engine is not available on OSX for applications that need V3.2 Core Profile. Worse, Intel’s Windows drivers regularly go through a phase where using V3.2 Compatibility Profile causes black screens.

So… We teased out Qt’s OpenGL QPainter engine and started trying to port it to OpenGL v3.2. It’s supposed to be possible, but it’s likely to be extremely tricky. If we got it working, it would be nice if it could become a part of Qt… But that’s likely as challenging as writing the code in the first place.

When, the next day, we started discussing OpenGL in the context of the Qt5 and QtQuick 2 port of Krita Sketch, which Friedrich is working on, we sounded like this:

krita-sprint-janurary2016

(Image by Wolthera)

After that, on Saturday, we started planning the next Kickstarter, which will have as its main topics Text and Vector. And we even began to look forward to 2017, when we might want to focus on issues around the creation of comics. Here are the minutes!

A nice dinner at our favourite Greek restaurant, Kreta, a quiet evening, and on Sunday…

On Sunday we really went through all the registered Wish bugs in bugzilla. There were 316 of them. A whole bunch we closed for now: wonderful ideas, but not going to happen in the next two years. Another bunch turned out to be already implemented, and the rest we carefully categorized using the following formula:

  • WISHGROUP: Pie-in-the-sky: not going to happen, but it would be really cool
  • WISHGROUP: Big Projects: needs more definition, maybe two, three months of work
  • WISHGROUP: Stretchgoal: up to a couple of weeks or a month of work
  • WISHGROUP: Larger Usability Fixes: maybe a week or two weeks of work
  • WISHGROUP: Small Usability Fixes: half a day or a day of work
  • WISHGROUP: Out of scope: too far from our current core goals to implement
  • WISHGROUP: Needs proposal and design: needs discussion among artists to define scope first

And now the library reorganization is in progress. We also fixed a bunch of bugs, so expect new Windows, OSX and Linux Krita 3.0 development builds later this week! And at the end of this month, the last Krita 2.9 bugfix release!

The Art of Open Source

This article introduces Blender to a wider audience.

Writing for Linux Format magazine, Jim Thacker sketches Blender’s history and its successful content-driven development model.

Download or read the pdf here.

(Text and pdf are © Linux Format, copied to blender.org with permission)

Screen Shot 2016-01-25 at 12.05.21 Screen Shot 2016-01-25 at 12.23.57

January 23, 2016

Bit shifting, done with the << and >> operators, lets C-like languages express memory and storage access, which is quite important for reading and writing exchangeable data. But:

Question: where does the bit go when shifted left?

Answer: it depends

Long answer:

// On a LSB/intel machine a left shift makes the bit move
// the opposite way, to the right, in memory. Omg.
// The shift (<<, >>) operators follow the MSB scheme,
// with the highest value on the left:
// shift math expresses our written number order,
// where 10 is more than 01.
// x left shifted by n == x << n == x * pow(2,n)
#include <stdio.h> // printf
#include <stdint.h> // uint16_t

int main(int argc, char **argv)
{
  uint16_t u16, i, n;
  uint8_t * u8p = (uint8_t*) &u16; // uint16_t as 2 bytes
  // iterate over all bit positions
  for(n = 0; n < 16; ++n)
  {
    // left shift operation
    u16 = 0x01 << n;
    // show the mathematical result
    printf("0x01 << %u:\t%d\n", n, u16);
    // show the bit position
    for(i = 0; i < 16; ++i) printf( "%u", u16 >> i & 0x01);
    // show the bit location in the actual byte
    for(i = 0; i < 2; ++i)
      if(u8p[i]) 
        printf(" byte[%d]", i); 
    printf("\n");
  }
  return 0;
}

Result on a LSB/intel machine:

0x01 << 0:      1 
1000000000000000 byte[0] 
0x01 << 1:      2 
0100000000000000 byte[0] 
0x01 << 2:      4 
0010000000000000 byte[0] 
0x01 << 3:      8 
0001000000000000 byte[0]
...

In MSB order << moves bits to the left, while on a LSB machine that is a lie and the bit moves to the right in memory. For directional shifts I would like to use a separate operator, e.g. <<| and |>>.
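
Until then, here is a sketch of what such a directional shift could look like as a plain function (my own illustration; <<| and |>> do not exist in C): shift_toward_byte0() always moves a value's bits toward the first byte in memory, whatever the byte order:

#include <stdint.h>
#include <string.h> /* memcpy */

/* Sketch only: emulate a "shift toward byte[0]" operator. On LSB/intel
   machines byte[0] is the least significant byte, so moving toward it
   is a right shift; on MSB machines byte[0] is the most significant
   byte, so it is a left shift. */
static uint16_t shift_toward_byte0(uint16_t x, unsigned n)
{
  const uint16_t probe = 1;
  uint8_t first_byte;
  memcpy(&first_byte, &probe, 1); /* 1 lands in byte[0] only on LSB */
  return first_byte ? (uint16_t)(x >> n) : (uint16_t)(x << n);
}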

January 20, 2016

xdg-app and GNOME Software

With a huge amount of support from alex, installing applications using xdg-app in GNOME Software is working. There are still a few rough edges, but it’s getting there quickly now.

Screenshot from 2016-01-20 16-50-02

PROTOCULTURAL Taipei New Causes for the Old Event Announced

Taipei Contemporary Art Center hosts a Protocultural exhibition from January 23rd to 31st, 2016. Featured in the exhibition are 3D models from around the world, printed for the #NEWPALMYRA #ARCHOFTRIUMPH #3DPRINTSPRINT.

Join the workshop and opening on January 23rd, 2016 from 2pm to 7pm.

January 18, 2016

Interview with Cremuss

Could you tell us something about yourself?

My name is Ghislain, I’m 25 years old and I live in Saint-Etienne, France. I’ve worked as a freelancer in the video game industry since 2009. Through the years I tried a lot of different software: I began to use 3dsmax 5.1 when I entered secondary school (I had a computer from a very young age), then a bit of Maya, but I wasn’t really committed to learning 3D properly yet: it was more a toy than anything else at that point.

I grew a bit tired of it during high school so I stopped for a time, and it’s only after finishing high school that I wanted to start digging into CG software again. I was fully converted to open-source projects and GNU/Linux by then, so in my mind I obviously had to give Blender a try. I learned it, loved it and fell in love with video game art while helping with the development of an open source video game/engine, SpringRTS.

I love computers and I’m very much interested in the “science” behind them: I did two years of C++ in my free time, learned HTML/CSS and JavaScript, and toyed a lot with Gentoo Linux as well as a bit of computer-aided music, just for fun :)

Besides computers, I spend a lot of time on my BMX. I’ve been riding for seven and a half years and it’s a huge source of motivation for me.

Do you paint professionally, as a hobby artist, or both?

Both, I paint professionally but I also like to spend a lot of my free time on personal 2D/3D projects. Although to be fair, that’s kinda irrelevant to me. I’m here as an artist to do my best, no matter what. If you’re passionate enough about something, I’d say you should be serious about it and honour it by doing your best. The difference between painting as a hobby or professionally doesn’t make much sense then.

What genre(s) do you work in?

texture800

I mostly paint stylized textures for 3D models. I’m not much of a drawer or painter, so most of my 2D work involves 3D sooner or later.

Whose work inspires you most — who are your role models as an artist?

There are so many amazingly talented artists that it’s hard to give a specific name, but if I had to mention someone, it would probably be the team who worked on Allods Online. That game has the best hand-painted stylized textures I’ve ever seen. Artists at Blizzard are obviously a huge inspiration, as are Riot Games artists (such as Bogdanbl4, who is doing crazy work!)

How and when did you get to try digital painting for the first time?

I’m pretty sure my first ever digital painting session was when my dad bought me my first ever Wacom Intuos 3 M! I was in high school and I gave it a try. I drew some sort of ugly dragon, I think. I remember I signed the painting with a signature as big as the painting itself, haha, it was awful.

What makes you choose digital over traditional painting?

I didn’t really choose. I’m not really a good drawer or painter so I never actually started traditional painting although I should have. I had to learn digital painting because it’s a huge part of cartoony/stylized 3D work.

HandPaintedWeapons_Axe1

How did you find out about Krita?

I’m a long time open source software user, so I like to stay in touch with what’s new in the open source world. I heard about Krita quite early, when it started to grow. I had always used Gimp until then, but I grew really tired of its slow development, the lack of 16 bit support, layer groups and so on, and most of all the lack of communication from the developers. I migrated to Krita as soon as I had all the tools I needed in order to do my work.

What was your first impression?

I had to wait quite a bit between the time I first learned about Krita and the time I began to use it in my workflow, since it lacked several features I found quite important at that time, such as color balance, as well as stability. But first of all, I was really impressed by how fast it grew and how quickly it matured.

What do you love about Krita?

Its fast development and the feeling that the community is really involved in the development. There’s a lot of news about the software and development reports, so users know what to expect and when. The developers are really committed to doing something great.

Also, specific features such as wrap tool, mirrored painting, the brush engine and non-destructive filters.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Developers are going to hate me for this, and I’ve really discussed it before with them, but I feel the colour management in Krita could be less painful. I know it’s something they’re proud of and I understand why, but it’s annoying me as well as other artists. I feel the user has to worry too much about it, especially regarding something that complex that so few of us, artists, understand and/or are willing to understand.

I get the theory behind colour management, but how did we end up with a list of 10 different sRGB profiles, when it’s something that was standardized in 1999 and was supposed to be a “standard”? Linear and gamma-corrected color spaces are a thing, yes, but to a lot of artists that doesn’t justify the list of 76 different color profiles that we currently have in Krita. It’s scary.

For instance, I had to ask the devs themselves how to properly transfer a 16 bit image from Blender to Krita, because neither the software nor the internet could tell me. How are we supposed to know which profile Blender uses? What do you do if you realize too late that Blender doesn’t use the same profile as your file in Krita? I’m always so scared to convert the color space, because I don’t want my work to be ruined by something I don’t understand and really don’t care about. I feel lost, and I know for a fact that other artists are as lost as I am.

To sum up, I feel like colour management is necessary, but that it should be dealt with more by the software, behind the scenes, than by the user. How to achieve that, I don’t know 😉

Otherwise, I never liked the smudge behaviour. I think it’s a little weird and could be improved.

What sets Krita apart from the other tools that you use?

I’m not experienced enough with 2D applications to answer that honestly. I feel Krita is a great mix between a painting and a photo editing application and it blends quite nicely into my workflow pipeline so I just want to say it’s great :)

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

HandPaintedHouse1

I would pick these 3D stylized houses. I’m the kind of artist that never seems to be satisfied with his work but somehow I’m satisfied with those. Textures were 100% hand-painted in Krita.

What techniques and brushes did you use in it?

No particular techniques, and the default brush :) Concerning brushes, I usually like to keep it as simple as possible: the default brush is just fine by me 99% of the time.

Where can people see more of your work?

I have a portfolio here: http://www.cremuss.net. I always take ages to update it, but right now it is almost up to date, so take a look :p

Anything else you’d like to share?

Thanks a lot for the interview! Thanks to the Krita devs, keep it up and peace :)

January 17, 2016

A Week in the Life of a Krita Maintainer

Sunday, 10th of January

Gah... I shouldn't have walked that much yesterday... It was fun, to see A's newborn daughter, A, and then we went to Haarlem to buy tea and have dinner. But with a nasty foot infection, that wasn't wise. So, no chance of serving in church today. Which means... More Krita time! Around 9:30 I started the first OSX build, CentOS build and Windows build; time to try and figure out some bug fixes. Also, reply to forum posts and mails to foundation@krita.org... And prepare review requests for making Krita .kra and OpenRaster .ora images visible in all Qt applications, as images and thumbnails. Fix warnings in the OSX build. Fix deprecated function calls everywhere. Yay! Wolthera and Scott start cleaning up color stuff and the assistants gui. Dinner.

Monday, 11th of January.

Dammit, still cannot walk. But that means... More Krita time! I'm missing a whole day of income, being a small, independent entrepreneur, but I've got a better chance of fixing those Windows, OSX and Linux builds. Looks like OSX works fine now, Windows sometimes, but there's still something wrong with the Linux builds. I think we need more people in our project, people who like making builds and packages, so I can go back to bug fixing. Bug fixes... Let's fix the CentOS build issue by dropping the desktop file to json file conversion at build time. Fix a memory leak in the jpeg filter that's been there for ages. Make it possible to load and save GBR and GIH brushes! A Kickstarter feature lands! Not with the big rewrite of our import/export system I'd wanted to do, but it's better now than it was: import/export can specify a mapping from filename extension to mimetype, so we can load files that the shared desktop's mime database doesn't know about yet. Break selecting the right style and theme -- oops! Finally fix the unreadable Close button on the splash screen (when the user used a light-colored theme). User-support mail, forum posts, irc chat... Dmitry adds cut, copy and paste of layers, another Kickstarter feature! Yay!!! Tonight is roleplaying night, need to prepare the adventure for my readers, with maps. (Session report is here.)

Tuesday, 12th of January

Six-forty-effing-five. Alarm clock. I was dreaming of Qt builds going awry, so probably a good time to get up. Erm... Mail, more mail, and forum posts during breakfast. Orange juice, coffee, tea. Off to the railways station around 7:40. Do a couple of Italian lessons with Duolingo while waiting for the train to arrive, interspersed with Facebook Page Manager community management moments. On the train. Sleepy! Time to start working on our OSX port. Beelzy did an awesome job providing me with lots of patches, now they need to be integrated. Cool, Dmitry doing lots of cleanups! But where did Nimmy go? We really need his patch to make Krita work on OSX... Ah! And there's the bad boy, we accidentally had the wrong application icon. Let's remove that one, and use ours instead. And then 9:12, arrival in Duivendrecht. 9:25, arrival at the day job -- Krita cannot pay my bills yet, so I'm working on a QtQuick1 to QtQuick2 port for a Dutch home automation company. Work, work, work, without a break, until 17:30, when it's time to go back to the train. Dinner -- and yay! Smjert has got his setup fixed and is fixing bugs again. Users keep mailing foundation@krita.org with support questions, and I'm just too nice... Answered. Time to go to bed, around midnight.

Wednesday, January the 13th

Exciting! Windows builds and OSX builds were working last Sunday, and today the Linux appimage builds are working on most systems! We might be able to release the pre-pre-pre-pre-pre-alpha tomorrow! And we're creating the correct OSX application bundles, with icon! And Timothee has fixed the icon, and Jouni has started implementing importing image sequences as animations! And the alarm clock buzzed me at 6:45. Wait, that's not yay-worthy. Refactor the PNG export dialog a bit. Work, work, work. I realize that after three months I'm one of the people at this office who's been here longest. There are ten people who've been here for more than six months, twenty who've been here for six to three months and it seems there's a legion who've just started... Fix the top toolbar sliders. And I've got extra-double-plus long hacking time on the train because the track is blocked and I have to make a detour over Zwolle. No, tonight I'm not going to finish the release notes or the fixed Windows (OpenGL is broken. wth), OSX and Linux packages. Time for dinner, a bath and bed. And all kickstarter rewards except for some shirts have arrived!

Thursday, January the 14th.

Gah, six colon four five. Time to get up. And I was dreaming of a bunch of kittens playing in a hay-loft that was being converted into yuppie student housing. Must be significant or something. At least I wasn't trying to form keys out of my pillow cover so I could type "./configure" in the qt source directory, which is what my mind tried to make me do last night. Oooh! Ben Cooksley has enabled docs.krita.org, our new manual home! Exciting! People having trouble with preset files, photoshop files, krita files. Let's try to offer a helping hand, while guzzling orange juice, tea and coffee. Dmitry adds multi-node editing of layer properties, Wolthera fixes canvas rotation. A British VFX studio tries Krita and the artists are excited -- must not forget to follow up. Layer property shortcuts, drag&drop in tabbed mode and more get pushed by Dmitry. At work, there are meetings, and more meetings. The train home fortunately isn't delayed, because we've got our priest and his wife for dinner. After dinner, I go out for a beer with our priest. The barlady wonders what kind of a monk he is, is put right, and later on, after choir practice, our wives join us. No more coding tonight, I've had two beers.

Friday, January 15.

My last day on my current contract, but my agenda is full of meetings and things for next week. Next week is also the mini-sprint to prepare the next Kickstarter. I'm guessing they'll want to keep me; we'll see on Monday. Breakfast. Forum posts. This guy is a bit aggressive, though no doubt well-meaning. Mail. Time to get started on the spriter plugin! Jouni fixes the build... I'm fixing OSX stuff left and right, and trying to figure out how to make builds faster and get them tested. Maybe we can release on Sunday? It's only a pre-alpha, but still exciting! More forum posts. More work -- meetings, it's the end of our sprint, so sprint review, sprint retrospective, sprint planning...

Saturday, 16 January

I sleep until 9:30. Well, I wake up at seven, but then go back to snoozing and dreaming of the comic book scenario that's been whirling around my mind for a while now. It's going to be cool, if I can sit down and do something about it. Fried eggs and bacon. Coffee. Orange juice. Tea. Time to fire up some more builds. Things are falling together! Some preliminary tests by other OSX users show that my packages are okay, on recent hardware, with a range of OSX versions. Figuring out the Linux and Windows builds. Some more bug fixing. Jouni pushes an even more advanced image sequence importer. In the evening, guests, but I'm too tired to go down for the Vigil service, and my foot is aching again. But I did buy new, better shoes and some pullovers, because my old shoes and pullovers were completely worn and tattered. That should help...

Sunday, January 17th.

Getting up at 8:45. Time to check a bit of mail, forward an enquiry about a Secrets of Krita download to Irina. Forum posts. This guy sure posts a lot, but it's all bug reports. Liturgy, fortunately I can serve. Coffee afterwards, then upstairs to switch on the desktop, the Windows laptop and the OSX laptop. Ah! The problem with Intel drivers and OpenGL is the same problem we've got on OSX: insufficient support for the Compatibility Profile of OpenGL, which breaks Qt's OpenGL QPainter engine. Good... There's a way forward. But first... RELEASE!!!

First Krita 3.0 pre-alpha!

More than a year in the making… We proudly present the first pre-alpha version of Krita 3.0 you can actually try to run! So what is Krita 3.0 pre-alpha? It’s the Qt5 port, with animation, instant preview, a handful of new features and portable packages for everyone! When we feel everything is nice and stable we’ll release Krita 3.1, and we’ll keep on releasing new versions as and when we finish Kickstarter stretch goals. So keep in mind: Krita 3.0 is experimental.

This “release” includes the latest version of the animation and the instant-preview performance work, plus there are a number of stretch goals from the Kickstarter already available, too. And it is a major upgrade of the core technology that Krita runs on: from Qt4 to Qt5. The latter wasn’t something that was a lot of fun, but it’s needed to keep Krita code healthy for the future! Whatever may come, we’re ready for it!

Kiki_Krita_86

The port to Qt5 meant a complete rewrite of our tablet and display code, which, combined with animation and the instant preview means that Krita is really unstable right now! And that means that we need you to help us test!

Another little project was updating our build-systems for Windows, OSX, and Linux. We fully intend to make Krita 3.0 as supported on OSX as on Windows and Linux, and to that end, we got ourselves a faster Mac.

One of the cool things coming from this system is that for Krita 3.0 we can have portable packages for all three systems! We have AppImages for Linux, DMGs for OSX and a portable zip file for 64-bit Windows. Sorry, no 32-bit Windows builds yet…

krita3-prealpha

Download Instructions

Windows

Download the zip file. Unzip the zip file where you want to put Krita.

Run the vcredist_x64.exe installer to install Microsoft’s Visual Studio runtime.

Then double-click the krita link.

Known issues on Windows:

  • The location of the configuration files and custom resources has changed, and the new location isn’t correct yet. The settings are in %APPDATA%\Local\kritarc and the resources in %APPDATA%\Roaming\Krita\krita\krita
  • If the entire window goes black, disable OpenGL for now. It’s a bug in the Intel driver; we’ve figured out the cause and know how to work around it, we just need to write the fix.

OSX

Download the DMG file and open it. Then drag the krita app bundle to the Applications folder, or any other location you might prefer. Double-click to start Krita.

Known issues on OSX:

  • We built Krita on El Capitan. The bundle is tested to work on a mid-2011 Mac Mini running Mavericks. It looks like you will need hardware capable of running El Capitan to run this build, but you do not have to run El Capitan itself: you can try an earlier version of OSX.
  • You will not see a brush outline cursor, or anything else a tool draws on the canvas, for instance with the gradient tool. This is known; we’re working on it. It needs the same fix as the black screen you can get with some Intel drivers.

Linux

For the Linux builds we now have AppImages! These are completely distribution-independent. To use the AppImage, download it and make it executable in your terminal or using the file properties dialog of your file manager. Another change is that configuration and custom resources are now stored in the .config/krita.org/kritarc and .local/share/krita.org/ folders of the user home folder, instead of .kde or .kde4.

Known issues on Linux:

  • Your distribution needs to have Fuse enabled
  • On some distributions or installations, you can only run an AppImage as root because the Fuse system is locked down. Since an AppImage is a simple ISO, you can still mount it as a loopback device and execute Krita directly using the AppRun executable in the top folder.

What’s Next?

More alpha builds! We’ll keep fixing bugs and implementing features, and keep making releases! Right now, we’re aiming for an update every week. Remember that Krita 3.0 will not include all of the features from the last Kickstarter. We still have a ways to go with adding the rest of the stretch goals, but with this release you’ll get…

Change Log

All the animation features from the Animation Beta

And more animation goodness:

Animation Drop Frame Support

We implemented a “Drop Frames” mode for Krita and made it the default option. Now you can switch on the “Drop Frames” mode in the Animation Docker to ensure your animation plays at the requested frame rate, even when the GPU cannot handle the amount of data to be shown.

Show the current frames per second (fps) and whether the frames are dropped in the tooltip of the drop frames button.

The animation playback buttons turn red if frames are being dropped. The tooltip shows the following values:

  •   Effective FPS – the visible speed of the clip
  •   Real FPS – how many real frames per second are shown (always smaller)
  •   Frames dropped – the percentage of frames dropped (see the sketch below for how these relate)
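
As a back-of-the-envelope illustration of how the three values relate (made-up numbers, not Krita code):

#include <stdio.h>

int main(void)
{
  /* hypothetical example values */
  const double effective_fps = 24.0; /* the speed the clip appears to play at */
  const double real_fps      = 18.0; /* frames actually rendered per second */
  const double dropped = 100.0 * (effective_fps - real_fps) / effective_fps;
  printf("Effective FPS: %.0f\n", effective_fps);
  printf("Real FPS: %.0f\n", real_fps);
  printf("Frames dropped: %.0f%%\n", dropped); /* 25% in this example */
  return 0;
}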

Other Animation Features

  • Allow switching frames using arrow keys (canvas input setting)
  • Add “Show in Timeline” action to the Layers Docker
  • Fix Duplicate Layer feature for animated layers
  • Let the current frame spin box have a higher limit, and let the user choose a start frame higher than 99
  • Fix crashes with cropped animations, the move tool and changed backgrounds.
  • Fix loading of the animation playback properties
  • Fix initialization of the offset of the frame when it is duplicated
  • Fix crash when loading a file with Onion Skins activated
  • Frames import: under file->import animation. This requires that you have removed the krita.rc in the resource folder (settings->manage resources->open resource folder) if you had a previous version of Krita installed. For now there is only a file browser that allows you to select multiple files, but we’ll enhance the UI in the future.

Tablet handling

  • We rewrote our tablet handling. If tablets didn’t work for you with 2.9 or even crashed, check out the 3.0 branch.
  • On Windows, we should better support display scaling
  • On Windows, support tablet screen rotation

Tool Improvements

  • Move increment keys for the move tool! This is still under development, but we are sure its basic form is appreciated.

Layer Improvements

  • We removed the ‘move in/out of group layer’ buttons. Moving a layer up and down now also passes it into or out of the group.
  • Duplication of multiple layers
  • Shift+Delete shortcut to the Remove Layer action
  • Move Up/Down actions for multiple layer selections
  • Make Merge Down work for multiple layers, selecting the right merged layer afterwards
  • Ctrl+G when having multiple layers selected now groups them
  • Ctrl+Shift+G will now put the currently selected layer into a group with an alpha-inherited layer above it, not unlike Photoshop clipping masks.
  • Copy-paste layer actions. This is a little different from regular copy-paste, as the latter copies pixels onto the clipboard, while copy-paste layers copies full layers onto the clipboard
  • Implemented Select All/Visible/Locked layers actions. By default they have no shortcuts, but you can assign any to them
  • Mass editing of layers. Select multiple layers and press the layer properties to mass-edit or rename them
  • Properties and renaming of layers now have hotkeys: F2 and F3

Shortcuts

  • Our shortcut system is now ordered into groups.
  • You can now save and share custom versions of your shortcuts.
  • Krita now has Photoshop and Painttool Sai compatible shortcuts included by default.
  • You can now switch the selection modifiers to use ctrl instead of alt. Useful if you are on Linux, or prefer ctrl to alt.
  • Reset Canvas Rotation had gotten lost in 2.9, it’s now back and visible under view->canvas

Other features

  • Add import/export of GBR and GIH brush files; generating them from animated .kra files is still coming.
  • Show the editing time of a document in the document information dialog, useful for professional illustrators, speedpainters and other commission-takers. It detects when you haven’t performed actions for a while, and has a precision of ± 60 seconds. You can empty it in the document info dialog, and of course by unzipping your .kra file and editing the metadata there.

Minor changes

  • The popup palette now has anti-aliased edges (but it’s square on OSX…)
  • The simple color selector now has white on top and black on the bottom.
  • Updated ICC profiles.
  • Added a Smudge_water preset to make smudging easier.
  • Added printing of the current FPS on the canvas when debugging is activated

Because our release is so fresh and fragile, we are, for once, not going to ask you to report bugs. Instead, we have a

Survey

With that in mind, it shouldn’t be surprising that we don’t recommend using this version for production work! Right now, Krita is in the “may eat your cat” stage… But it sure is fun to play with!

DanceGirl_SEQ_small

(Animations created by Achille, thanks!)