August 31, 2015

Average Book Covers and a New (official) GIMP Website (maybe)


A little while back I had a big streak of averaging anything I could get my hands on. I am still working on a couple of larger averaging projects (here's a small sneak peek - guess the movie?):








I'm trying out visualizing a movie by mean averaging all of its cuts. It turns out movies have way more cuts than I thought - so it might be a while until I finish this one... :)
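
The mechanics are simple enough to sketch. Assuming one frame per cut has already been extracted into a directory (say, with ffmpeg - the directory name and file pattern here are placeholders), a mean average is just a running sum with numpy and PIL:

    import glob
    import numpy as np
    from PIL import Image

    # Accumulate a running sum of every frame, then divide by the count.
    # (All frames must share the same dimensions.)
    total = None
    count = 0
    for fname in sorted(glob.glob("frames/*.png")):
        frame = np.asarray(Image.open(fname).convert("RGB"), dtype=np.float64)
        total = frame if total is None else total + frame
        count += 1

    # The per-pixel mean of all frames, converted back to an 8-bit image.
    Image.fromarray((total / count).astype(np.uint8)).save("average.png")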

On the other hand, here's something neat that is recently finished...

JungleBook: Simple Kindle Ebook Cover Analysis


Jason van Gumster just posted this morning about a neat project he'd been toying with, along similar lines to the Netflix Top 50 Covers by Genre, but taking it a level deeper. He's written code to average the top 50 ebook covers on Amazon by genre:

Top 50 Kindle Covers by Jason van Gumster

By itself this is really pretty (to me - not sure if anyone else likes these things as much as I do), but Jason takes it further by providing some analysis and commentary on the resulting images in the context of ebook sales and how covers appeal visually to readers.

I highly recommend you visit Jason's post and read the whole thing (it's not too long). It's really neat!


The GIMP Website


I had this note on my to-do list for ages to tinker with the GIMP website. I finally got off my butt and started a couple of weeks ago. I did a quick mockup to get a feel for the overall direction I wanted to head:


I've been hacking at it for a couple of weeks now and I kind of like how it's turning out. I'm still in the process of migrating old site content and making sure that legacy URIs aren't going to change. It may end up being a new site for GIMP. It also may not, so please don't hold your breath... :)

Here's where I am at the moment for a front page:


static GIMP page


Yes, that image is a link. The link will lead you to the page as I build it: http://static.gimp.org. See? It's like a prize for people who bother to read to the end! Feel free to hit me up with ideas or if you want to donate any artwork for the new page while I build it. I can't promise that I'll use anything anyone sends me, but if I do I will be sure to properly attribute! (Please consider a permissive license if you decide to send me something).

Interview with Brian Delano

big small and me 4 and 5 sample

Could you tell us something about yourself?

My name is Brian Delano. I’m a musician, writer, futurist, entrepreneur and artist living in Austin, Texas. I don’t feel I’m necessarily phenomenal at any of these things, but I’m sort of taking an approach of throwing titles at my ego and seeing which ones stick and sprout.

Do you paint professionally, as a hobby artist, or both?

I’m more or less a hobby artist. I’ve made a few sales of watercolors here and there and have had my pieces in a few shows around town, but, so far, the vast majority of my art career exists as optimistic speculation between my ears.

What genre(s) do you work in?

I mostly create abstract art. I’ve been messing around with web comic ideas a bit, but that’s pretty young on my “stuff I wanna do” list. Recently, I’ve been working diligently on illustrating a children’s book series that I’ve been conceptualizing for a few years.

Whose work inspires you most — who are your role models as an artist?

Ann Druyan & Carl Sagan, Craig & Connie Minowa, Darren Waterston, Cy Twombly, Theodor Seuss Geisel, Pendleton Ward, Shel Silverstein and many others.

How and when did you get to try digital painting for the first time?

My first exposure to creating digital art was through the mid-nineties art program Kid Pix. It was in most every school’s computer lab and I thought it was mind-blowingly fun. I just recently got a printout from one of my first digital paintings from this era (I think I was around 8 or so when I made it) and I still like it. It was a UFO destroying a beach house by shooting lightning at it.

What makes you choose digital over traditional painting?

Don’t get me wrong, traditional (I call it analog :-P) art is first and foremost in my heart, but when investment in materials and time is compared between the two mediums, there’s no competition. If I’m trying to make something where I’m prototyping and moving elements around within an image while testing different color schemes and textures, digital is absolutely the way to go.

How did you find out about Krita?

I was looking for an open source alternative to some of the big name software that’s currently out for digital art. I had already been using GIMP and was fairly happy with what it offered in competition with Photoshop, but I needed something that was more friendly towards digital painting, with less emphasis on imaging. Every combination of words in searches and numerous scans through message boards all pointed me to Krita.

What was your first impression?

To be honest, I was a little overwhelmed with the vast set of options Krita has to offer in default brushes and customization. After a few experimental sessions, some video tutorials, and a healthy amount of reading through the manual, I felt much more confident in my approach to creating with Krita.

What do you love about Krita?

If I have a concept or a direction I want to take a piece, even if it seems wildly unorthodox, there’s a way to do it in Krita. I was recently trying to make some unique looking trees and thought to myself, “I wish I could make the leafy part look like rainbow tinfoil…” I messed around with the textures, found a default one that looked great for tinfoil, made a bunch of texture circles with primary colored brush outlines, selected all opaque on the layer, added a layer below it, filled in the selected space with a rainbow gradient, lowered the opacity a bit on the original tinfoil circle layer, and bam! What I had imagined was suddenly a (digital) reality!

What do you think needs improvement in Krita? Is there anything that really annoys you?

Once in a while, if I’m really pushing the program and my computer, Krita will seem to get lost for a few seconds and become non-responsive. Every new release seems to lessen this issue, though, and I’m pretty confident that it won’t even be an issue as development continues.

What sets Krita apart from the other tools that you use?

Krita feels like an artist’s program, created by artists who program. Too many other tools feel like they were created by programmers and misinterpreted focus group data to cater to artists’ needs that they don’t fully understand. I know that’s a little vague, but once you’ve tried enough different programs and then come to Krita, you’ll more than likely see what I mean.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I’m currently illustrating a children’s book series that I’ve written which addresses the size and scope of everything and how it relates to the human experience. I’m calling the series “BIG, small & Me”, and I’m hoping to independently publish the first book in the fall and see where it goes. I’m not going to cure cancer or invent faster-than-light propulsion, but if I can inspire the child who will one day do something that great, or even greater, then I will consider my life a success.

big small and me cover sample

What techniques and brushes did you use in it?

I’ve been starting out sketching scenes with the pencil brushes, then creating separate layers for inking. Once I have the elements of the image divided up by ink, I turn off my sketch layers, select the shapes made by the ink layers and then fill blocks on third layers. When I have the basic colors of the different elements completed in this manner, I turn off my ink lines and create fourth and fifth layers for texturing and detailing each element. There are different tweaks and experimental patches in each page I’ve done, but this is my basic mode of operation in Krita.

Where can people see more of your work?

I have a few images and a blog up at artarys.com, and will hopefully be doing much more with that site pretty soon. I’m still in the youngish phase of most of the projects I’m working on, so self-promotion will most likely be ramping up over the next few months. I’m hoping to set up a Kickstarter towards the end of the year for a first pressing of BIG, small & Me, but until then most of my finished work will end up on either artarys.com or my Facebook page.

Anything else you’d like to share?

It’s ventures like Krita that give me hope for the future of creativity. I am so thankful that there are craftspeople in the world so dedicated to creating such a superior tool for digital art.

August 26, 2015

Switching to a Kobo e-reader

For several years I've kept a rooted Nook Touch for reading ebooks. But recently it's become tough to use. Newer epub books no longer work on any version of FBReader still available for the Nook's ancient Android 2.1, and the Nook's built-in reader has some fatal flaws: most notably, there's no way to browse books by subject tag, and it's painfully slow to navigate a library of 250 books when you have to start from the As and page slowly forward, six books at a time, to get to T.

The Kobo Touch

But with my Nook unusable, I borrowed Dave's Kobo Touch to see how it compared. I like the hardware: same screen size as the Nook, but a little brighter and sharper, with a smaller bezel around it, and a spring-loaded power button in a place where it won't get pressed accidentally when it's packed in a suitcase -- the Nook was always coming on while in its case, and I didn't find out until I pulled it out to read before bed and discovered the battery was too low.

The Kobo worked quite nicely as a reader, though it had a few of the same problems as the Nook. They both insist on justifying both left and right margins (Kobo has a preference setting for that, but it didn't work in any book I tried). More important is the lack of subject tags. The Kobo has a "Shelves" option, called "Collections" in some versions, but adding books to shelves manually is tedious if you have a lot of books. (But see below.)

It also shared another Nook problem: it shows overall progress in the book, but not how far you are from the next chapter break. There's a choice to show either book progress or chapter progress, but not both; and chapter progress only works for books in Kobo's special "kepub" format (I'll write separately about that). I miss FBReader's progress bar that shows both book and chapter progress, and I can't fathom why that's not considered a necessary feature for any e-reader.

But mostly, Kobo's reader was better than the Nook's. Bookmarks weren't perfect, but they basically worked, and I didn't even have to spend half an hour reading the manual to use them (like I did with the Nook). The font selection was great, and the library navigation had one great advantage over the Nook: a slider so you can go from A to T quickly.

I liked the Kobo a lot, and promptly ordered one of my own.

It's not all perfect

There were a few disadvantages. Although the Kobo had a lot more granularity in its line spacing and margin settings, the smallest settings were still a lot less tight than I wanted. The Nook only offered a few settings but the smallest setting was pretty good.

Also, the Kobo can only see books at the top level of its microSD card. No subdirectories, which means I can't use a program like rsync to keep the Kobo in sync with the ebooks directory on my computer. Not that big a deal, just a minor annoyance.
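
A flattening copy works around it well enough. Something like this hypothetical sketch (the mount point and library path are placeholders) copies every EPUB from a nested library onto the card's top level:

    import shutil
    from pathlib import Path

    library = Path("~/ebooks").expanduser()   # nested library on the computer
    kobo = Path("/media/kobo")                # mounted card: top level only

    # Copy each book up to the card's root, skipping ones already there.
    for book in library.rglob("*.epub"):
        dest = kobo / book.name
        if not dest.exists():
            shutil.copy2(book, dest)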

More important was the subject tagging, which is really needed in a big library. It was pretty clear Shelves/Collections were what I needed; but how could I get all my books into shelves without laboriously adding them all one by one on a slow e-ink screen?

It turns out Kobo's architecture makes it pretty easy to fix these problems.

Customizing Kobo

While the rooted Nook community has been stagnant for years -- it was a cute proof of concept that, in the end, no one cared about enough to try to maintain it -- Kobo readers are a lot easier to hack, and there's a thriving Kobo community on MobileReads which has been trading tips and patches over the years -- apparently with Kobo's blessing.

The biggest key to Kobo's customizability is that you can mount it as a USB storage device, and one of the files it exposes is the device's database (an SQLite file). That means that well-supported programs like Calibre can update shelves/collections on a Kobo, access its book list, and do other nifty tricks; and if you want more, you can write your own scripts, or even access the database by hand.

I'll write separately about some Python scripts I've written to display the database and add books to shelves, and I'll just say here that the process was remarkably straightforward and much easier than I usually expect when learning to access a new device.
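
To give a flavour of how straightforward it is, here's a minimal sketch using nothing but Python's sqlite3 module. The database normally lives under the .kobo directory on the device; the Shelf and ShelfContent table names below match what I've seen, but treat them as assumptions and inspect your own database's schema first:

    import sqlite3

    # The Kobo's database, visible when the device is mounted over USB.
    db = sqlite3.connect("/media/kobo/.kobo/KoboReader.sqlite")

    # List each shelf (collection) and how many books it contains.
    for name, count in db.execute(
            """SELECT s.Name, COUNT(sc.ContentId)
               FROM Shelf s
               LEFT JOIN ShelfContent sc ON sc.ShelfName = s.Name
               GROUP BY s.Name"""):
        print(name, count)

    db.close()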

There's lots of other customizing you can do. There are ways of installing alternative readers on the Kobo, or installing Python so you can write your own reader. I expected to want that, but so far the built-in reader seems good enough.

You can also patch the OS. Kobo updates are distributed as tarballs of binaries, and there's a very well designed, documented and supported (by users, not by Kobo) patching script distributed on MobileReads for each new Kobo release. I applied a few patches and was impressed by how easy it was. And now I have tight line spacing and margins, a slightly changed page number display at the bottom of the screen (still only chapter or book, not both), and a search that defaults to my local book collection rather than the Kobo store.

Stores and DRM

Oh, about the Kobo store. I haven't tried it yet, so I can't report on that. From what I read, it's pretty good as e-bookstores go, and a lot of Nook and Sony users apparently prefer to buy from Kobo. But like most e-bookstores, the Kobo store uses DRM, which makes it a pain (and is why I probably won't be using it much).

They use Adobe's DRM, and at least Adobe's Digital Editions app works in Wine under Linux. Amazon's app no longer does, and in case you're wondering why I didn't consider a Kindle, that's part of it. Amazon has a bad reputation for removing rights to previously purchased ebooks (as well as for spying on their customers' reading habits), and I've experienced it personally more than once.

Not only can I no longer use the Kindle app under Wine, but Amazon no longer lets me re-download the few Kindle books I've purchased in the past. I remember when my mother used to use the Kindle app on Android regularly; every few weeks all her books would disappear and she'd have to get on the phone again to Amazon to beg to have them back. It just isn't worth the hassle. Besides, Kindles can't read public library books (those are mostly EPUBs with Adobe DRM); and a Kindle would require converting my whole EPUB library to MOBI. I don't see any up side, and a lot of down side.

The Adobe scheme used by Kobo and Nook is better, but I still plan to avoid books with DRM as much as possible. It's not the stores' fault, and I hope Kobo does well, because they look like a good company. It's the publishers who insist on DRM. We can only hope that some day they come to their senses, like music publishers finally did with MP3 versus DRMed music. A few publishers have dropped DRM already, and if we readers avoid buying DRMed ebooks, maybe the message will eventually get through.

August 25, 2015

Funding Krita

Even Free software needs to be funded. Apart from being very collectible, money is really useful: it can buy transportation so contributors can meet, accommodation so they can sleep, time so they can code, write documentation, create icons and other graphics, hardware to test and develop the software on.

With that in mind, KDE is running a fund raiser to fund developer sprints, Synfig is running a fund raiser to fund a full-time developer and Krita… We’re actually trying to make funded development sustainable. Blender is already doing that, of course.

Funding development is a delicate balancing act, though. When we started doing sponsorship for full-time development on Krita, there were some people concerned that paying some community members for development would disenchant others, the ones who didn’t get any of the money. Even Google Summer of Code already raised that question. And there are examples of companies hiring away all community members, killing the project in the process.

Right now, our experience shows that it hasn’t been a problem. That’s partly because we have always been very clear about why we were doing the funding: Lukas had the choice between working on Krita and doing some boring web development work, and his goal was fixing bugs and performance issues, things nobody had time for, back then. Dmitry was going to leave university and needed a job, and we definitely didn’t want to lose him for the project.

In the end, people need food, and every line of code that’s written for Krita is one line more. And those lines translate to increased development speed, which leads to a more interesting project, which leads to more contributors. It’s a virtuous circle. And there’s still so much we can do to make Krita better!

So, what are we currently doing to fund Krita development, and what are our goals, and what would be the associated budget?

Right now, we are:

  • Selling merchandise: this doesn’t work. We’ve tried dedicated webshops, selling tote bags and mugs and things, but total sales are under a hundred euros, which makes it not worth the hassle.
  • Selling training DVD’s: Ramon Miranda’s Muses DVD is still a big success. Physical copies and downloads are priced the same. There’ll be a new DVD, called “Secrets of Krita”, by Timothée Giet this year, and this week, we’ll start selling USB sticks (credit-card shaped) with the training DVD’s and a portable version of Krita for Windows and OSX and maybe even Linux.
  • The Krita Development Fund. It comes in two flavors. For big fans of Krita, there’s the development fund for individual users. You decide how much a month you can spare for Krita, and set up an automatic payment profile with Paypal or a direct bank transfer. The business development fund has a minimum amount of 50 euros/month and gives access to the CentOS builds we make.
  • Individual donations. This depends a lot on how much we do publicity-wise, and there are really big donations now and then, which makes it hard to figure out what to count on from month to month, but the amounts are significant. Every individual donor gets a hand-written email saying thank you.
  • We are also selling Krita on Steam. We’ve got a problem here: the Gemini variant of Krita, with the switchable tablet/desktop GUI, got broken with the 2.9 release. But Steam users also get regular new builds of the 2.9 desktop version. Stuart is helping us here, but we need to work harder to interact with our community on Steam!
  • And we do one or two big crowd-funding campaigns. Our yearly kickstarters. They take about two full-time months to prepare, and you can’t skimp on preparation because then you’ll lose out in the end, and they take significant work to fulfil all the rewards. Reward fulfilment is actually something we pay someone a volunteer gratification to do. We are considering doing a second kickstarter this year, to give me an income, with the goal of producing a finished, polished OSX port of Krita. The 2015 kickstarter campaign brought in 27,471.78 euros, but we still need to buy and send out the rewards, at an estimated cost of 5,000 euros.
  • Patreon. I’ve started a patreon, but I’m not sure what to offer prospective patrons, so it isn’t up and running yet.
  • Bug bounties. The problem here is that the amount of money people think is reasonable for fixing a bug is wildly unrealistic, even for a project that is as cheap to develop as Krita. You have to count on 250 euros for a day of work, to be realistic. I’ve sent out a couple of quotations, but… once you realize that merely adding support for loading group layers from XCF files takes three days, it’s clear that most people simply cannot bear the price of a bug fix individually.

So, let’s do sums for the first 8 months of 2015:

  • Paypal (merchandise, training materials, development fund, kickstarter-through-paypal and smaller individual donations): 8,902.04
  • Bank transfers (the big individual donations usually arrive directly at our bank account, including a one-time donation to sponsor the port of Krita to Qt5): 15,589.00
  • Steam: 5,150.97
  • Kickstarter: 27,471.78
  • Total: 57,113.79 euros

So, the Krita Foundation’s current yearly budget is roughly 65,000 euros, which is enough to employ Dmitry full-time and me part-time. The first goal really is to make sure I can work on Krita full-time again. Since KO broke down, that’s been hard, and I’ve spent five months on the really exciting Plasma Phone project for Blue Systems. That was a wonderful experience, but it had a direct influence on the speed of Krita development, both code-wise and in terms of growing the userbase and keeping people involved.

What we have also tried is approaching VFX and game studios, selling support and custom development. This isn’t a big success yet, and that puzzles me. All these studios are on Linux. All their software, except for their 2D painting application, is on Linux. They want to use Krita, on Linux. And every time we are in contact with some studio, they tell us they want Krita. Except, there’s some feature missing, something that needs improving… And we make a very modest quote, one that doesn’t come near what custom development should cost, and silence is the result.

Developing Krita is actually really cheap. We don’t have any overhead: no management, no office, modest hardware needs. With 5,000 euros we can fund one full-time developer for one month, with something to spare for hardware, sprints and other costs, like the license for the administration software, stamps and envelopes. The first goal would be to double our budget, so we can have two full-time developers, but in the end, I would like to be able to fund four to five full-time developers, including me, and that means we’re looking at a year budget of roughly 300,000 euros. With that budget, we’d surpass every existing 2D painting application, and it’s about what Adobe or Corel would need to budget for one developer per year!

Taking it from here, what are the next steps? I still think that without direct involvement of people and organizations who want to use Krita in a commercial, professional setting, we cannot reach the target budget. I’m too much a tech geek — there’s a reason KO failed, and that is that we were horrible at sales — to figure out how to reach out and convince people that supporting Krita would be a winning proposition! Answers on a post-card, please!

August 24, 2015

Self-generated metadata with LVFS

This weekend I finished the penultimate feature for the LVFS. Before today, when uploading firmware there was up to a 24h delay before the new firmware would appear in the metadata. This was because there was a cronjob on my home server downloading files every night from the LVFS site, running appstream-builder on them locally and then uploading the metadata back to the site. Not awesome at all.

Actually generating the metadata in the OpenShift instance was impossible, until today. Due to libgcab and libappstream-glib not being available on the RHEL 6.2 instance I’m using, I had to re-implement two things in Python:

  • Reading and writing Microsoft cabinet archives
  • Reading MetaInfo files and writing compressed AppStream XML

The two helper libraries (only really implementing the parts required, but patches welcome) are python-cabarchive and python-appstream. I’m not awesome at Python, so feedback (in the form of pull requests) welcome.
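
Neither task needs much beyond the standard library for the basic case. As a rough sketch of the MetaInfo-to-AppStream step (real metadata has many more fields, this skips validation entirely, and the catalog root's version attribute is an assumption based on the AppStream spec of the time):

    import gzip
    import xml.etree.ElementTree as ET

    # Read a MetaInfo file and graft its component into a metadata catalog.
    component = ET.parse("firmware.metainfo.xml").getroot()

    # Catalog root; the version attribute tracks the AppStream spec version.
    root = ET.Element("components", version="0.8")
    root.append(component)

    # AppStream metadata is conventionally shipped gzip-compressed.
    with gzip.open("firmware.xml.gz", "wb") as f:
        f.write(ET.tostring(root, encoding="utf-8"))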

This means I’m nearly okay to be hit by a bus. Well, nearly; the final feature is to collect statistics about how many people are downloading each firmware file, and possibly collecting data on how many failures and successes there have been when actually applying the firmware. This is actually quite tricky to do without causing privacy issues or double counting. I’ll do some more thinking and then write up a proposal; ideas welcome.

August 22, 2015

2015 KDE Sprints Fundraiser

Krita is a part of the KDE Community. Without KDE, Krita wouldn’t exist, and KDE still supports Krita in many different ways. KDE is a world-wide community for people and projects who create free software. That ranges from applications like Krita, Digikam, Kdenlive to education software like GCompris to desktop and mobile phone software.

Krita not only uses the foundations developed by KDE and its developers all around the world, KDE hosts the website, the forums, everything needed for development. And the people working on all this need to meet from time to time to discuss future directions, to make decisions about technology and to work together on all the software that KDE communities create. As with Krita, most of the work on KDE is done by volunteers!

KDE wants to support those volunteers with travel grants and accommodation support, and for that, KDE is raising funds right now. Getting developers, artists, documentation writers and users all together in one place to work on creating awesome free software! And there is a big sprint coming up soon: the Randa Meetings. From the 6th to the 13th of September, more than 50 people will meet in Randa, Switzerland to work, discuss, decide, document, write, eat and sleep under one and the same roof.

It’s a very effective meeting: in 2011 the KDE Frameworks 5 project was started, rejuvenating and modernizing the KDE development platform. Krita is currently being ported to Frameworks. Last year, Kdenlive received special attention, reinvigorating the project as part of the KDE community. Krita artist Timothee Giet worked on GCompris, another new KDE project. This year, the focus is on bringing KDE software to touch devices, tablets, phones, laptops with touch screen.

Let’s help KDE bring people together!

August 21, 2015

Embargoed firmware updates in LVFS

For the last couple of days I’ve been working with a large vendor adding new functionality to the LVFS to support their specific workflow.

Screenshot from 2015-08-21 13-06-02

The new embargo target allows vendors to test the automatic update functionality using a secret vendor-specific URL set in /etc/fwupd.conf without releasing it to the general public until the hardware has been announced.

Updates normally go through these stages: Private → Embargoed → Testing → Stable, although LVFS users with the QA capability can skip these as required. The screenshot also shows that we’re unpacking the .cab file and parsing the metainfo file server-side (in Python), which gives us much richer detail about the firmware.

war and peace—the 5th column

Building out the war and peace series, this second instalment focusses on acts of sabotage performed by users against… users. For those in a hurry, there is a short, sharp summary at the end.

cuckoo

Last week my friend Jan‑C. Borchardt tweeted:

‘Stop saying »the user« and referring to them as »he«. Thanks!’

To which I replied:

‘“They”; it is always a group, a diverse group.
‘But I agree, someone talking about “the user” is a tell‐tale sign that they are part of the problem, not the solution.’

All that points at a dichotomy that I have been meaning to blog about for a long time. You see, part one of the war and peace series looked at all four parties involved—product makers, users, developers and designers—as separate silos. The logical follow‑up is to look at hybrid actors: the user‐developer, the product maker‐developer, et cetera.

While working to provide a theoretical underpinning to 20+ years of observing how these hybrid actors perform in practice, I noticed that for all user‐XYZ hybrids, the user component does not have much in common with the users of said product. In general, individual users are in conflict with the user group they are part of.

That merits its own blog post, and here we are.

spy vs. spy

First we have got to get away from calling this ‘user versus users.’ The naming of the two needs to be much more dissimilar, both to get the point across and so that you don’t have to read the rest of this post with a magnifying glass, just to be sure which one I am talking about.

During the last months I have been working with the concept of inhabitants vs. their city. The inhabitants stand for individual users, each pursuing their personal interests. All of them are united in the city they have chosen to live in—think photoshop city, gmail city, etc. Each city stands for a bustling, diverse user group.

inner city blues

With this inhabitants & city model, it becomes easier to illustrate the mechanisms of conflict. Let’s start off with a real‐life example.

Over the next years, Berlin needs to build tens of thousands of units of affordable housing to alleviate a rising shortage. Because the tens of thousands that annually move to Berlin are (predominantly) looking for its bubbly, relaxed, urban lifestyle, building tower blocks on the edge of town (the modernist dream), or rows of cookie‐cutter houses in suburbia (the anglo‐saxon dream) won’t solve anything.

What is needed is development of affordable housing in the large urban area of Berlin. A solid majority of the population of Berlin agrees with this. Under one condition: no new buildings in their backyard. And thus almost nothing affordable gets built.

What can we learn from this? Naturally reactionary inhabitants take angry action to hinder the development that their city really needs. I am sure this rings a bell for many a product maker.

yee‑haw!

The second example is completely fictional. A small posse of inhabitants forms and petitions the city to build something for their (quite) special interest: a line‐dancing facility. At first they demand that the city pays for everything and performs all the work. When this falls on deaf ears, the posse organises a kickstarter where about 150 backers chip in to secure the financing of the building work.

Being petitioned again, the city asks where this line‐dancing facility could be located. The posse responds that in one of the nicest neighbourhoods there is an empty piece of grassland. The most accessible part of it would be a good location for the facility and its parking lot.

The city responds that this empty grassland hosts events and community activities on about 100 days a year, many of which are visited by folks from all over the city. And on all other days of the year the grassland serves the neighbourhood, simply by being empty and semi‐wild nature; providing a breather between all the dense and busy urbanisation, and a playground for kids.

repeat after me: yee‑haw!

This angers the line‐dancing posse. They belittle the events and activities, and don’t know anyone who partakes in them. The events can be moved elsewhere. The land just sits there, being empty, and is up for grabs. Here they are with a sack of money and a great idea, so let’s get a move on.

The city then mentions one of its satellite towns, reachable by an expressway. At its edge, there is plenty of open space for expansion. It would be good to talk to the mayor of that town. The posse is now furious. Their line‐dancing facility, which would make such a fine feature for the heart of the city, being relegated to being an appendix of a peripheral module? Impossible!

What can we learn from this? Inhabitants will loudly push their special‐interest ideas, oblivious to the negative impact on their city. Again this must also ring a bell for many product makers.

leaving the city

Now that the inhabitants & city model has helped us to find these mechanisms, I must admit there are also some disadvantages to it. First, cities do not scale up or down like user groups do. I have worked on projects ranging from a handful of users (hamlet) to 100 million (large country). And yes, the mechanisms hold over this range.

Second, I suspect that many, especially those of a technological persuasion, will take the difference between inhabitants and their city to be that the former are the people, and the latter the physical manifestation, especially buildings and infrastructure.

no escape

Thus we move on for a second time, picking a more generic route, but one with attitude. For individual users I have picked the term punter. If that smacks to you of clientèle that is highly opinionated, with a rather parochial view of its environs, then you got my point.

Now you may think ‘is he picking on certain people?’ No, on the contrary: we all are punters, for everything we use: software, devices, websites, services—hell, all products. You, me, everyone. We are all just waffling in a self‐centred way.

There is no single exception to the punter principle, even for people whose job is centred on getting beyond the punter talk and to the heart of the matter. It is simply a force of nature. The moment we touch, or talk about, a product we use, we are a punter.

greater good

For the user group I have picked the term society. This works both as a bustling, diverse populace and as a club of people with common interests (the photoshop society, gmail society, product‑XYZ society). Some of you, especially when active in F/LOSS, will say ‘is not community the perfect term here?’ It would be, if it wasn’t already squatted.

After almost a decade in F/LOSS I can say that in practice, ‘community’ boils down to a pub full of punters (i.e. chats, mailing lists and forums). In those pubs you will hear punters yapping about their pet feature (line dancing) and loudly resisting structural change in their backyard. What you won’t hear is a civilised, big‐picture discourse about how to advance their society.

it differs

One thing that last week’s exchange with Jan‑C., and also a follow up elsewhere, brought me is the focus on the diversity of a (product) society. This word wonderfully captures how in dozens and dozens of dimensions people of a society are different, have different needs and different approaches to get stuff done.

I can now review my work over the last decade and see that diversity is a big factor in making designing for users so demanding. The challenge is to create a compact system that is flexible enough to serve a diverse society (hint: use the common interests to avoid creating a sprawling mess).

I can also think of the hundreds of collaborators I worked with and now see what they saw: diversity was either ‘invisible,’ in their mono‐cultural environment, or it was such an overwhelming problem that they did not dare tackle it (‘let’s see what the user asks for’). Talking about ‘the user’ is the tell‐tale sign of a diversity problem.

The big picture emerges—

If you want to know why technology serves society so badly, look no further than the tech industry’s failure to acknowledge, and adapt to, the diversity of society.

Yes, I can see how the tendency of the tech sector to make products that only its engineers understand has the same roots as the now widely publicised problem that this sector has a hard time being inclusive to anyone who is not a male, WASP engineer; it is a diversity problem.

but why?

Back to the punters. That we all act like one was described ages ago by the tragedy of the commons. This is—

‘[…] individuals acting independently and rationally according to each’s self‐interest behave contrary to the best interests of the whole group by depleting some common resource.’

If you think about it for a minute, you can come up with many, many examples of individuals acting like that, at the cost of society. The media are filled with it, every day.

what a waste

What common resources do punters strive to deplete in (software) products? From my position that is easy to answer—

  1. makers’ time; both in proprietary and F/LOSS, the available time of the product maker, the designer and developers is scarce; it is not a question of money, nor can you simply throw more people at it—larger teams simply take longer to deliver more mediocre results. To make better products, all makers need to be focussed on what can advance their society; any time spent on punter talk, or acts (e.g. a punter’s pull request), is wasted time.
  2. interaction bandwidth; this is, loosely, a combination of UI output (screen space, sound and tactile output over time) and input (events from buttons, wheels, gestures), throttled by the limit of what humans can process at any given time. Features need interaction and this eats the available bandwidth, fast. In good products, the interaction bandwidth is allocated to serve the whole society, instead of a smattering of punters.

The tragedy of (software) products is that it’s completely normal that, in reaction to punters’ disinformation and acts of sabotage, a majority of makers’ time and a majority of interaction bandwidth gets wasted.

Acts of sabotage? SME software makers of specialised tools know all about fat contracts coming with a list of punter wishes. Even in F/LOSS, adoption by a large organisation can come with strings attached. Modern methods are trojan horses of punter‐initiated bounties, crowdfunding or code contributions of their wishes.

the point

This makes punters the fifth column in the UI version of war and peace. Up to now we had four players in our saga—product maker, users (i.e. the society), developers and the designer—and here is a fifth: a trait in all of us within society to say and do exactly that what makes (software) products bloated, useless, collapse under their own weight and burn to the ground.

It is easy to see that punters are the enemy of society and of product makers (i.e. those who aim to make something really valuable for society). Punters have an ally in developers, who love listening to punters and then build their wishes. It makes the both of them feel real warm and fuzzy. (I am still not decided on whether this is deliberate on the part of developers, or that they are expertly duped by punters offering warmth and fuzziness.)

That leaves designers; do they fight punters like dragon slayers? No, not at all. Read on.

the dragon whisperer

Remember that punters and society are one and the same thing. The trick is to attenuate the influence of punters to zero; and to tune into the diversity and needs of society and act upon them. Problem is, you only get to talk to punters. Every member of society acts like a punter when they open their mouth.

There is a craft that delivers insight into a society, from working with punters. It is called user research. There are specialist practitioners (user researchers) and furthermore any designer worth their salt practices this. There is a variety of user research methods, starting with interviewing and surveying, followed up by continuous analysis by the designer of all punter/society input (e.g. of all that ‘community’ pub talk).

the billion‐neuron connection

What designers do is maintain a model of the diversity and needs of the society they are designing for, from the first to the last minute they are on a project. They use this model while solving the product‐users‐tech puzzle, i.e. while designing.

When the designer is separated from the project (tech folks tend towards that, it’s a diversity thing) then the model is lost. And so is the project.

(Obligatory health & safety notice: market research has nothing to do with user research, it is not even a little bit useful in this context.)

brake, brake

At this point I would like to review the conflicts and relationships that we saw in part one of war and peace, using the insights we won today. But this blog post is already long enough, so that will have to wait for another day.

Instead, here is the short, sharp summary of this post:

  • User groups can be looked at in two ways: as a congregation of punters and as a society.
  • We all are punters, talking in a self‐centred way and acting in our self‐interest.
  • We are also all members of (product) societies; bustling, diverse populaces and clubs of people with common interests (the photoshop society, gmail society, product‑XYZ society).
  • Naturally reactionary punters take angry action to hinder structural product development.
  • Punters will loudly push their special‐interest ideas, oblivious to the negative impact on their society.
  • The diversity of societies poses one of the main challenges in designing for users.
  • The inability of the tech sector to acknowledge, and adapt to, the diversity of society explains why it tends to produce horrible, tech‐centric products.
  • In a fine example of ‘the tragedy of the commons,’ punters behave contrary to the best interests of their society by depleting makers’ time and interaction bandwidth.
  • Punters act like a fifth column in the tri‑party conflict between product makers, society and developers.
  • You only get to talk to punters, but pros use user research methods to gain insight into the diversity and needs of a society.
  • Everyone gets bamboozled by punters, but not designers. They use user research and maintain a model of diversity and needs, to design for society.

Interested, or irritated? Then (re)read the whole post before commenting. Meanwhile you can look forward to part three of war and peace, the UI version.

Python module for reading EPUB e-book metadata

Three years ago I wanted a way to manage tags on e-books in a lightweight way, without having to maintain a Calibre database and fire up the Calibre GUI app every time I wanted to check a book's tags. I couldn't find anything, nor did I find any relevant Python libraries, so I reverse engineered the (simple, XML-based) EPUB format and wrote a Python script to show or modify epub tags.

I've been using that script ever since. It's great for Project Gutenberg books, which tend to be overloaded with tags that I don't find very useful for categorizing books ("United States -- Social life and customs -- 20th century -- Fiction") but lacking in tags that I would find useful ("History", "Science Fiction", "Mystery").

But it wasn't easy to include it in other programs. For the last week or so I've been fiddling with a Kobo ebook reader, and I wanted to write programs that could read epub and also speak Kobo-ese. (I'll write separately about the joys of Kobo hacking. It's really a neat little e-reader.)

So I've factored my epubtag script into a usable Python module: as well as being a standalone program for viewing epub book data, it's now easy to use from other programs. It's available on GitHub: epubtag.py: parse EPUB metadata and view or change subject tags.
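
If you're curious just how simple the format is: an EPUB is a zip archive whose META-INF/container.xml points at an OPF file holding Dublin Core metadata. A stripped-down sketch of the reading side (epubtag.py itself does more, including writing tags back):

    import zipfile
    import xml.etree.ElementTree as ET

    ns = {
        "c": "urn:oasis:names:tc:opendocument:xmlns:container",
        "dc": "http://purl.org/dc/elements/1.1/",
    }

    with zipfile.ZipFile("book.epub") as z:
        # container.xml names the OPF file that holds the metadata.
        container = ET.fromstring(z.read("META-INF/container.xml"))
        opfpath = container.find(".//c:rootfile", ns).get("full-path")
        opf = ET.fromstring(z.read(opfpath))

    print("Title:", opf.findtext(".//dc:title", namespaces=ns))
    for subject in opf.findall(".//dc:subject", ns):
        print("Tag:", subject.text)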

August 18, 2015

Different User Types in LVFS

I’ve been working with two large (still un-named) vendors about their required features for the Linux Vendor Firmware Service. One of the new features I’ve landed this week in the test instance is support for different user types.

Screenshot from 2015-08-18 21-45-31

There are currently three classes of user:

  • The admin user that can do anything
  • Unprivileged users that can just upload files to the testing target
  • QA users that can upload files to the testing or stable target, and can tag files from testing to stable

This allows a firmware engineer to upload files before the new hardware has launched; then someone else from the QA or management team can test the firmware and push it out to the stable target so it can be flashed on real hardware by users.
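
The rules themselves are small enough to sketch. A hypothetical version of the check, with invented names rather than the actual LVFS code:

    # Targets each class of user may upload to (illustrative only).
    UPLOAD_TARGETS = {
        "admin": {"testing", "stable"},      # the admin can do anything
        "qa": {"testing", "stable"},
        "unprivileged": {"testing"},
    }

    def can_upload(user_class, target):
        """May this class of user upload firmware to the given target?"""
        return target in UPLOAD_TARGETS.get(user_class, set())

    def can_promote(user_class, source, dest):
        """Only QA users (and the admin) may tag firmware from testing to stable."""
        return (source, dest) == ("testing", "stable") and user_class in ("qa", "admin")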

I’ve also added functionality so that users can change their own password (moving away from vendor keys), and added a simple test suite to test all the new rules.

August 17, 2015

BIM software prices in Brazil

Below is a table of prices for BIM software for architecture. I've only included software that supports the IFC format (both reading and writing); without that, I think we can agree it can't be called BIM. I've also left out solutions restricted to a certain type of construction (such as DDS-CAD). The applications below are the...

August 16, 2015

Why I Stopped Reading Books Written by Judith Tarr

Not about Krita or KDE, so main article behind the fold... Instead it's about my reaction to a blog article or two by an author whose work I used to buy.

Read more ...

August 14, 2015

Notes from the dark(table) Side


Notes from the dark(table) Side

A review of the Open Source Photography Course

We recently posted about the Open Source Photography Course from photographer Riley Brandt. We now also have a review of the course as well.

This review is actually by one of the darktable developers, houz! He had originally posted it on discuss as a topic, but I think it deserves a blog post instead. (When a developer from a favorite project speaks up, it’s usually worth listening…)

Here is houz’s review:


The Open Source Photography Course Review

by houz

Author houz headshot

It seems that there is no topic to discuss The Open Source Photography Course yet so let’s get started.

Disclaimer

First of all, as a darktable developer I am biased so take everything I write with a grain of salt. Second, I didn’t pay for my copy of the videos but Riley was kind enough to provide a free copy for me to review. So add another pinch of salt. I will therefore not tell you if I would encourage you to buy the course. You can have my impressions nevertheless.

Review

I won’t say anything about the GIMP part, not because I wouldn’t know how to use that software, but because it’s relatively short and I just didn’t notice anything to comment on. It covers the solid basics of how to use GIMP, and the emphasis on layer masks is really important in real-world usage.

Now for the darktable part, I have to say that I liked it a lot. It showcases a viable workflow and is relatively complete – not by explaining every module and becoming the audio book of the user manual but by showing at least one tool for every task. And as we all know, in darktable there are many ways to skin a cat, so concentrating on your favourites is a good thing.

What I also appreciate is that Riley managed to cut the individual topics into manageable chunks of around 10 minutes or less, so you can easily watch them in your lunch break and have no problem coming back to a topic later to find what you are looking for.

Before this starts to sound like an advertisement I will just point out some small nitpicking things I noticed while watching the videos. Most of these were not errors in the videos but are just extra bits of information that might make your workflow even smoother, so it’s more of an addendum than an erratum.

  • When going through your images on lighttable you can either zoom in till you only see a single image (alt-1 is a shortcut for that) or hold the z key pressed. Both are shown in the videos. The latter can quickly become tedious since releasing z just once brings you back to where you were. There are however two more keyboard shortcuts that are not assigned by default under views>lighttable: ‘sticky preview’ and ‘sticky preview with focus detection’. Both work just like normal z and ctrl-z, just without the need to keep the key pressed. You can assign a key to these, for example by reusing z and ctrl-z.
  • Color labels can be set with F1 .. F5, similar to rating.
  • Basecurve and tonecurve allow very fine up/down movement of points with the mouse wheel. Hover over a node and scroll.
  • Gaussian in shadows&highlights tends to give stronger halos than bilateral in normal use, see the darktable blog for an example.
  • For profiled denoising better use ‘HSV color’ instead of ‘color’ and ‘HSV lightness’ instead of ‘lightness’, see the user manual for details.
  • When using the mouse wheel to zoom the image you can hold ctrl to get it smaller than fitting to the screen. That’s handy to draw masks over the image border.
  • When moving the triangles in color zones apart you actually widen the scope of affected values since the curve gets moved off the center line on a wider range.
  • Also color zones: You can also change reds and greens in the same instance, no need for multiple instances. Riley knows that and used two instances to be able to control the two changes separately.
  • When loading sidecar files from lighttable, you can even treat a JPEG that was exported from darktable like an XMP file and manually select that since the JPEGs get the processing data embedded. It’s like a backup of the XMP with a preview. Caveat: When using LOTS of mask nodes (mostly with the brush mask) the XMP data might get too big so it’s no longer possible to embed in the JPEG, but in general it works.
  • The collect module allows you to store presets so you can quickly access often-used search rules. And since presets only store the module settings and not the resulting image set, these will be updated when new images are imported.
  • In neutral density you can draw a line with the right mouse button, similar to rotating images.
  • Styles can also be created from darkroom, there is a small button next to the history compression button.

So, that’s it from me. Did you watch the videos, too? What was your impression? Do you have any remarks?

Stellarium 0.13.66.0

Today Stellarium 0.13.66.0 has been published for public testing. We have refactored the GUI for deep-sky objects and the catalog of those objects, and introduced a few new options for DSO. Other big features in this beta: Iridium flares, accurate calculation of the ecliptic obliquity, calculation of nutation, and cross-index data for DSO and stars.

Please help us test it.

List of changes between version 0.13.3 and HEAD (0.13.66.0):
- Added accurate calculations of the ecliptic obliquity. Finally we have good precession! (LP: #512086, #1126981, #1240070, #1282558, #1444323)
- Added calculations of the nutation. Now we optionally have IAU-2000B nutation. (Applied between 1500..2500 only.)
- Added new DSO catalog and related features (LP: #1453731, #1432243, #1285175, #1237165, #1189983, #1153804, #1127633, #1106765, #1106761, #957083)
- Added Iridium flares to Satellites plugin (LP: #1106782)
- Added list of interesting double stars in Search Tool
- Added list of interesting variable stars in Search Tool
- Added cross-identification data for HIP stars (SAO and HD numbers)
- Added sinusoidal projection
- Added drawing of DSO symbols in different colors
- Added labels to the landscapes (gazetteer support)
- Added a new behaviour for drawing the orbit of the selected planet: drawing the orbits of the 'hierarchical group' (parent and all its children) of the selected celestial body (disabled by default).
- Added an option to lower the horizon, e.g. for a widescreen cylindrical view with mostly sky and ground on lower screen border (LP: #1299063).
- Added various timeshift-by-year commands (LP: #1478670)
- Added Moon's phases info
- Rewrote the Meteor module and the Meteor Showers plugin (LP: #1471143)
- Updated skycultures
- Updated Hungarian and Brazilian Portuguese translations of landscapes and skycultures
- Updated data for Solar System Observer
- Updated color scheme for constellations
- Updated documentation
- Updated QZip stuff
- Updated TUI plugin
- Updated data for Satellites plugin
- Updated Clang support
- Updated Pluto texture
- Using a better formula for CCD FOV calculation in Oculars plugin (LP: #1440330)
- Fixed shortkeys conflict (New shortkeys for Scenery 3D plugin - LP: #1449767)
- Fixed aliasing issue with GCC in star data unpacking
- Fixed documentation for core.wait() function (LP: #1450870)
- Fixed perspective projection issue (LP: #724868)
- Fixed crash on activation of AngleMeasure in debug mode (LP: #1455839)
- Fixed issue for update bottom bar position when we toggle the GUI (LP: #1409023)
- Fixed weird view of the Moon in Oculars plugin when Moon's scaling is enabled
- Fixed non-realistic mode drawing of artificial satellites
- Fixed availability of plugins for the scripting engine (LP: #1468986)
- Fixed potential memory bug in the Meteor class
- Fixed wrong spectral types for some stars (LP: #1429530)
- Changed visibleSkyArea limitation to allow landscapes with "holes" in the ground (LP: #1469407)
- Improved Delta T feature (LP: #1380242)
- Improved the ad-hoc visibility formula for LDN and LBN
- Improved the sighting opportunities for passes of artificial satellites (LP: #907318)
- Changed behaviour of coloring of orbits of artificial satellites: gray is used for the parts of orbits where the satellite will be invisible (LP: #914751)
- Improved coloring and rendering of artificial satellites (including their labels) in the Earth's shadow for both modes (LP: #1436954)
- Improved landscape and lightscape brightness with/without atmosphere; now even considers landscape opacity for no-atmosphere conditions
- Reduced Milky Way brightness in moonlight
- Enhanced features in the Telescope Control plugin (LP: #1469450)

Download link (Windows 32 and 64 bit, OS X): https://launchpad.net/stellarium/trunk/trunk

Custom attributes in angular-gettext

Kristiyan Kostadinov recently submitted a very neat new feature for angular-gettext, which was just merged: support for custom attributes.

This feature allows you to mark additional attributes for extraction. This is very handy if you’re always adding translations for the same attributes over and over again.

For example, if you’re always doing this:

<input placeholder="{{ 'Input something here' | translate }}">

You can now mark placeholder as a translatable attribute. You’ll need to define your own directive to do the actual translation (an example is given in the documentation), but it’s now a one-line change in the options to make sure that placeholder gets recognized and hooked into the whole translation string cycle.

Your markup will then become:

<input placeholder="Input something here">

And it’ll still internationalize nicely. Sweet!

You can get this feature by updating your grunt-angular-gettext dependency to at least 2.1.3.

Full usage instructions can be found in the developer guide.


Comments | More on rocketeer.be | @rubenv on Twitter

August 13, 2015

Linux Vendor Firmware Service: We Need Your Help

I spend a lot of my day working on framework software for other programs to use. I enjoy this plumbing, and Red Hat gives me all the time I need to properly design and build these tricky infrastructure-type projects. Sometimes, just one person isn’t enough.

For the LVFS project, I need vendors making hardware to submit firmware files with carefully written metadata so that they can be downloaded by hundreds of thousands of Linux users securely and automatically. I also need those vendors to either use a standardized flashing protocol (e.g. DFU or UEFI) or to open the device specifications enough to allow flashing firmware without signing an NDA.

Over the last couple of months I’ve been emailing various tech companies trying to get hold of the right people to implement this. So far the reaction from companies has been enthusiastic and apathetic in equal measures. I’ve had a few vendors testing the process, but I can’t share those names just yet as most companies have been testing with unreleased hardware.

This is where you come in. On your Linux computer right now, think about what hardware you own that works in Linux and that you know has user-flashable firmware. What about your BIOS, your mouse, or your USB3 hub? Your network card, your RAID card, or your video card?

Things I want you to do:

  • Find the vendor on the internet, and either raise a support case or send an email. Try and find a technical contact, not just some sales or marketing person
  • Tell the vendor that you would like firmware updates when using Linux, and that you’re not able to update the firmware by booting to Windows or OS X
  • Tell the vendor that you’re more likely to buy from them again if firmware updates work on Linux
  • Inform the vendor about the LVFS project : https://beta-lvfs.rhcloud.com/

At all times I need you to be polite and courteous; after all, we’re asking the vendor to spend time (money) on doing something extra for a small fraction of their userbase. Ignoring one email from me is easy, but getting tens or hundreds of support tickets about the same issue is a great way to get an issue escalated up to the people who can actually make changes.

So please, spend 15 minutes opening a support ticket or sending an email to a vendor now.

August 12, 2015

August Update

Let’s ramble a bit… It’s August, which means vacation time. Well, vacation is a big word! Krita was represented at Akademy 2015 with three team members. We gave presentations and talks and in general had a good time meeting up with other KDE hackers. And we added a couple of vacation days in wonderful Galicia (great food, awesome wine, interesting language).

But Akademy and the vacation came right after I broke my arm, which in turn came right after a sprint in Los Angeles where the topic of the day was not Krita but Plasma Mobile. All in all, that’s about a month without me doing any work on Krita.

In the meantime, Wolthera and Jouni’s Google Summer of Code projects, Dmitry’s Levels of Detail Kickstarter project, Stefano’s work on fixing resource bundles, Michael’s work on fixing the OpenGL canvas in the Qt5 branch and lots of other things have been going on. Lots of excitement! And let’s not forget to mention the new icons that Timothee Giet is working on. It sometimes gets hard to push changes to the code repository because everyone is pushing at the same time.

Anyway, on returning from Galicia, there were over 350 bugs in the bug tracker… So we decided to skip the August release of Krita 2.9 and spend a month fixing bugs, fixing bugs and fixing more bugs! There are unstable builds on files.kde.org, but they are unstable. Be warned!

Update: we had to pull the builds… They were way too unstable! Despite that, another eight bugs bit the dust today and we’re close to correctly loading and saving 16-bit CMYK to and from PSD.

Oh, and we’ve also started sourcing the Kickstarter rewards. It’s a huge amount of work this year: 610 postcards, 402 sticker sheets, 402 shortcut sheets, 86 t-shirts, 16 tote bags, 26 Secrets of Krita DVDs (Timothee has already started recording), 48 USB sticks, 17 mugs, 53 pencil cases, 3 sketch books and 1 tablet holder!

August 11, 2015

war and peace, the abridged UI version

Not to worry, this is not going to be as lengthy as Tolstoy’s tome. Actually, I intend this blog post to be shorter than usual. But yes, my topic today is war and peace in interaction design.

peace

I like to think that my work as an interaction architect makes the world a better place.

I realise product makers’ dreams, designing elegant and captivating embodiments they can ship. I save terminally ill software and websites from usability implosion, and in the meantime get their makers a lot closer to what they always intended.

On top of that, I provide larger organisations with instruments to rein in the wishful thinking that naturally accumulates there, and to institute QA every step along the way.

All of this is accomplished by harmonising the needs of product makers, developers and users. You would think that because I deliver success; solutions to fiendishly complex problems; certainty of what needs to be done; and (finally!) usable software, working with these three groups is a harmonious, enthusiastic and warm experience.

Well, so would I, but this is not always the case.

war

The animosity has always baffled me. I also took it personally: there I was, showing them the light at the end of the tunnel of their own misery, and they get all antsy, hostile and hurt about it.

After talking it through with some trusted friends, I now have a whole new perspective on the matter. Sure enough, as an interaction architect I am always working in a conflict zone, but it is not my conflict. Instead, it is the tri‑party conflict between product makers, developers and users.

The main conflict between product makers and users
Each individual user expects software and websites really to be made for me‐me‐me, while product makers try to make it for as many users as possible. Both are a form of greed.
There is also a secondary conflict, when users ‘pay’ the product maker in the form of personal, behavioural data, and/or by eyeballing advertisements—’nuff said.
The main conflict between product makers and developers
Product makers want the whole thing perfectly finished by tomorrow, while reserving the right to change their mind, any given minute, on what this thing is. Developers like to know exactly up front what all the modules are that they have to build—but not too exactly, as that robs them of the chance to splice in a cheaper substitute—while knowing that it takes time, four times more than one would think, to build software.
That this is a fundamental conflict is proven by the current fad for agile development, where it is conveniently forgotten that there is such a thing as coherence and wholeness to a fine piece of software.
The main conflict between developers and users
This one is very simple: who is going to do the work? Developers think it is enough to get the technology running on the processing platform—with not too many crashing bugs—and users are free to do the rest. Users will have no truck with technology; they want refined solutions that ‘just work™,’ i.e. the developers do the work.

All of this makes me Jimmy Carter at Camp David. The interaction design process, the resulting interaction design and its implementation are geared towards bringing peace and prosperity to the three parties involved. This implies compromises from each party. And for me to tell them things they do not like to hear.

Product makers need to be told
  • to make real hard choices and define their target user groups narrowly—it is not for everyone;
  • that they cannot play fast and loose with users’ data;
  • to take the long, strategic view on the product level, instead of trying to micro‐manage every detail of its realisation;
  • to concentrate on the features that really make the product, instead of a pile‑up of everybody’s wish lists;
  • to accept that they cannot have it all, and certainly not with little effort and investment.
Users need to be told that
  • each of them is just part of a (very large) group and that in general the needs of this group are addressed by the product;
  • using software and websites takes a certain amount of investment from them: time, money, participation and/or privacy;
  • software cannot read their minds; to use it they will need to input rather exactly what they are trying to achieve;
  • quite a few of them are outside the target user group and their needs will not be taken into consideration.
Developers need to be told
  • that we are here to make products, not just code modules;
  • no substitutes please; the qualities of what needs to be built determine success;
  • users cannot be directly exposed to technology; an opaque—user—interface is needed between the two;
  • if it isn’t usable, it does not work.

No wonder everybody gets upset.

peace

How do I get myself some peace? Well, the only way is to obtain a bird’s‐eye view of the situation and to accept it.

First, I must accept that this war is inherent to any design process and all designers are confronted by it. Nobody ever really told us, but we are there to bring peace and success.

Second, I have to accept that product makers, developers and users get upset with my interaction design solutions for the simple reason that they are now confronted with the needs of the other two parties. They had it ‘all figured out’ and now this turns up. (Yes, I do also check if they have a point and discovered a bug in my design.)

Third, I have to see my role as translator in a different light. We all know that product makers, developers and users have a hard time talking to each other, and it is the role of the interaction architect to translate between them.

It is now clear to me that when I talk to one of the parties involved, I do not only fit the conversation to their frame of reference and speak their language, but also represent the other two parties. There is some anger potential there: for my audience because I speak in such familiar terms, about unfamiliar things that bring unwelcome complexity; for me because tailored vocabulary and examples do not increase acceptance from their side.

Accepting the war and peace situation is step one, doing something about it is next. I think it will take some kind of aikido move; a master blend of force and invisibility.

Force, because I am still implicitly engaged to bring peace and success, and to solve a myriad of interaction problems nobody wants to touch. This must be met head‑on, without fear.

Invisibility, because during the whole process it must be clear to all three parties that they are not negotiating with me, but with each other.

postmortem

That is it for today, I promised to keep it short. There are some interesting kinks and complications in the framework I set up today, but dealing with them will have to wait for another blog post.

August 10, 2015

Blender Institute releases pilot of Cosmos Laundromat – a free and open source episodical animation film.

The 10-minute opening sequence of the movie has now been released on the web. Successive episodes will be made when additional funding comes in.

 

The pilot tells the story of Franck, a suicidal sheep who lives on a desolate island. He meets his fate in a quirky salesman, who offers him the gift of a lifetime. Little does he know that he can only handle so much lifetime…

The “Cosmos Laundromat” project started in 2014 as an experimental feature film in which an adventurous and absurdist love story gets told by multiple studios – each working in their own unique style. The project was initiated by Blender Foundation to improve animation production pipelines with the free and open source 3D software Blender. Based on the results of a crowd-funding campaign in Spring 2014, the Blender Institute in the Netherlands decided to first work on a pilot.

The opening of Cosmos Laundromat, the 10-minute pilot called “First Cycle”, has now been released to the public on the web. In the past weeks it had successful preview screenings: in the EyeFilm cinema in Amsterdam, at the SIGGRAPH convention in Los Angeles, and in the Presto Theatre at the Pixar campus in Emeryville.
The official theatrical premiere will be at the Netherlands Film Festival in September. The film has been nominated for the prestigious Jury Prize of the Animago festival in Berlin.

New episodes will depend on audience feedback and additional funding. Recurring revenues are expected to be generated via the Blender Institute’s subscription system “Blender Cloud”, which gives access to all of the source data that was used to make the film. New episodes are also meant to be made using free and open source software only, sharing the entire works with the audience: free to use, free to remix and free to learn from.

Ton Roosendaal –  producer and director of Blender Institute – spent a week in Los Angeles presenting Blender and the film project. “The reception we had was fabulous, especially from artists who work in the animation industry. They totally dig the sophisticated story build up, the high quality character animation and the amazing visuals. And most of all they root for us to become a success – because we are proving that there’s independent animation production possible outside of the film business with its restrictive distribution and licensing channels.”

More information:

Cosmos Laundromat can be watched via the production blog:
http://gooseberry.blender.org/about/

Film screenshots, Poster and promotion images:
http://download.blender.org/gooseberry/presskit.zip

Or contact producer Ton Roosendaal, ton@blender.org

 

Bat Ballet above the Amaranths

This evening Dave and I spent quite a while clearing out amaranth (pigweed) that's been growing up near the house.

[Palmer's amaranth, pigweed] We'd been wondering about it for quite some time. It's quite an attractive plant when small, with pretty patterns on its leaves that remind me of some of the decorative houseplants we used to try to grow when I was a kid.

I've been working on an Invasive Plants page for the nature center, partly as a way to figure out myself which plants we need to pull and which are okay. For instance, Russian thistle (tumbleweed): everybody knows what it looks like once it's a dried-up tumbleweed, but by then it's too late, and it's already scattering its seeds all over. Besides, it's covered with spikes by then. The trick is to recognize and pull it when it's young, and the same is true of a lot of invasives, especially the ones with spiky seeds that stick to you, like stickseed and caltrops (goatheads).

A couple of the nature center experts have been sending me lists of invasive plants I should be sure to include, and one of them was a plant called redroot pigweed. I'd never heard of it, so I looked it up -- and it looked an awful lot like our mystery plant. A little more web searching on Amaranthus images eventually led me to Palmer's amaranth, which turns out to be aggressive and highly competitive, with sticky seeds.

Unfortunately the pretty little plants had had a month to grow by the time we realized the problem, and some of them had trunks an inch and a half across, so we had to go after them with a machete and a hand axe. But we got most of them cleared.

As we returned from dumping the last load of pigweed, a little after 8 pm, the light was fading, and we were greeted by a bat making rounds between our patio and the area outside the den. I stopped what I was doing and watched, entranced, as the bat darted into the dark den area then back out, followed a slalom course through the junipers, buzzed past my head and then swept out across the patio ... then back, around the tight corner and back to the den, over and over.

I stood watching for twenty minutes, with the bat sometimes passing within a foot of my head. (yay, bat -- eat some of these little gnats that keep whining by my ears and eyes!) It flew with spectacular maneuverability and grace, unsurpassed by anything save perhaps a hummingbird, changing direction constantly but always smoothly. I was reminded of the way a sea lion darts around underwater while it's hunting, except the bat is so much smaller, able to turn in so little space ... and of course maneuvering in the air, and in the dark, makes it all the more impressive.

I couldn't hear the bat's calls at all. Years ago, waiting for dusk at star parties on Fremont Peak, I used to hear the bats clearly. Are the bats here higher pitched than those California bats? Or am I just losing high frequencies as I get older? Maybe a combination of both.

Finally, a second bat, a little smaller than the first, appeared over the patio and both bats disappeared into the junipers. Of course I couldn't see either one well enough to tell whether the second bat was smaller because it was a different species, or a different gender of the same species. In Myotis bats, apparently the females are significantly larger than the males, so perhaps my first bat was a female Myotis and the male came to join her.

The two bats didn't reappear, and I reluctantly came inside.

Where are they roosting? In the trees? Or is it possible that one of them is using my bluebird house? I'm not going to check and risk disturbing anyone who might be roosting there.

I don't know if it's the same little brown bat I saw last week on the front porch, but it seems like a reasonable guess.

I've wondered how many bats there are flying around here, and how late they fly. I see them at dusk, but of course there's no reason to think they stop at dusk just because we're no longer able to see them. Perhaps I'll find out: I ordered parts for an Arduino-driven bat detector a few weeks ago, and they've been sitting on my desk waiting for me to find time to solder them together. I hope I find the time before summer ends and the bats fly off wherever they go in winter.

August 07, 2015

wishful thinking; ignite the shirts

A week ago I presented about my wishful thinking and act to succeed series at Ignite Berlin. That led to some unforeseen developments, with the result that you can look forward to some real cool t‑shirts.

kaboom!

The Ignite format is pretty demanding. I better let them explain it themselves:

‘Each speaker gets 5 minutes on stage. 20 slides, which auto‐forward every 15 seconds, no going back. So it’s pretty brutal, although nothing that a rehearsal can’t fix.’
the Ignite format, from their about page

Yes, this is really different from presenting for 20 to 45 minutes at your own rhythm, which is what I am used to. A strategy, careful planning of the 20 slides and a generous helping of rehearsal are called for. What I regularly see at conferences—some (recycled) slides banged together the night before and winging it during showtime—is bound to have a 99.99% fail rate at Ignite.

bang, you win

The upside is that the audience wins. All the speakers are an order of magnitude more prepared than they normally would be. There is no time for waffling and even single‐issue talks are engaging for five minutes.

At this event there were fourteen talks, two runs of seven each, which sounds like a looong marathon to sit through. In practice, one run of seven talks takes 35 minutes of pure talk time, plus some for applause and changeover (everything is pre‐sequenced on a single laptop). Thus in 38–40 minutes, seven engaging topics have passed and then it is time for a break, to digest and discuss.

Since my talk was scheduled almost at the end of the event, I expected to be too preoccupied to enjoy all these talks before mine. On the night, all of the talks engaged and entertained me, which put me in a good mood for mine. (When is the last time you could say that about a conference?)

show and tell

In my Ignite talk I showed a selection of wishful‐thinking issues, together with the positive action that must be taken to remedy them. Meanwhile, I told the back‐story, for instance, that—

  • I have seen all of this wishful thinking in practice;
  • I wanted to expose a destructive streak that runs through the IT industry;
  • it was more work to make issues and remedies fit a single tweet than to come up with them;
  • I felt that I could go on ‘forever’, but called it quits at fifty;
  • being in interaction design—which is essentially product realisation and involves seeing all dimensions (product, users, tech)—makes it easy to see the damage from wishful thinking;
  • it is a real shame to see the right people, with the right intentions, run projects into the ground through wishful thinking;
  • this is not valid only in IT, but in any industry;
  • please, it is difficult, but resist the wishful thinking when you believe in what you are working on;
  • what is needed is process change, which is also difficult: introducing a design process that shapes and runs all product realisation from the first to the last minute of the project, including manufacturing or fixing that final bug.

aftermath

I had plenty of interesting discussions after the talks were through, but one really took me by surprise: fellow speaker Onika Simon of Spokehub said something along the lines of ‘why don’t you put this wishful thinking on t‑shirts? There are plenty of people who deserve to get one.’

During my talk I had admitted that I am not a product maker and that never in my life have I had a good product idea. Thus it did not surprise me that I had never thought of wishful‐thinking t‑shirts. But now that the genie was out of the bottle, how difficult could it be?

snakes and ladders

Some parts were really straightforward. The content was already there. Deciding what should go on front and back, and picking some free‐as‐in‐speech fonts (right, no pirated components in my products) was no big deal. Neither was typesetting the texts.

Making EPS files already involved jumping through one hoop (why not accept PDF? It is just about the same tech). Dealing with spreadshirt was a three‐ring circus. Spreadshirt is supposed to make it easy to open your own merchandising outlet, but forget about the easy part.

I could go on and on, about requiring flash <spit>, crashes, usability disasters, the pervasive ‘how do I get that done?’ and ‘how do I know it did it?’ anxiety, and only finding out what you will get when you get there. But let’s say that unless you are a spreadshirt executive, I won’t bother (you with it).

lift‑off

Against these odds, I did manage to put up a t‑shirt shop in less than a week. There is one MVP: a limited‐edition t‑shirt (available one month only) in female and male cuts, and two variants, dark and bright:

the bright female, dark female, dark male and bright male wishful thinking shirts

I found out at the very end, when I got to check it out (typical, eh), that you can change the shirt colour in the shop. Suits me fine; a simple ‘menu’ to choose from and then freedom to customise, a bit.

When I checked the wishful thinking topic page, I noticed how hard‐hitting these are by themselves, so it was clear that these go, solo, on the front:

the text on the front of the shirt: the hardware specs are fixed, now we can start with the software design

This is the wishful thought for August ’15 and you can see that I plumped for the first one I saw. Each month I will pick a different one (no, not in the order on that page) and change the ‘bright’ colour scheme.

On the back we ensure that everyone gets the point…

the text on the back: wishful thinking breeds failed products

…just in case the beholder wishfully thinks the statement on the front is best‐practice.

postscript

And out of the blue m+mi works offers a hardware product. It will be fun offering these and I hope spreadshirt cooperates a bit more to keep it that way. I look forward to seeing one of these t‑shirts being worn in the wild.

PyCon Australia 2015

If anyone is interested in the talk I gave at PyCon Australia in Brisbane, here is the YouTube link:

Slides can be found here: http://redhat.slides.com/rjoost/deck-3/
YouTube link: How your Python program behaves: a story on how to build a program slicer

The conference was a blast. Thanks to the organisers for a wonderful event.


August 06, 2015

Creating a QML snap with Snapcraft

We want to enable all kinds of developers to quickly make applications and devices using Snappy as their basis. A quick way to make compelling user interfaces is by using QML, so it seemed like a natural fit to get QML working in Snapcraft, eliminate complex setups, and just get things working. There is an Introduction to Snapcraft; I'm going to assume you've already read that.

To get started with an interesting demo, I went and stole the Qt photoviewer demo and pulled it into its own repository, then added a couple of simple configuration files. This is a great demo because it is graphical and fun, but it also shows pulling data from the network, as all the photos are based on Flickr tags.

parts:
  qml:
    plugin: qml
  photoviewer:
    plugin: copy
    files:
      main.qml: main.qml
      PhotoViewerCore: PhotoViewerCore
snappy-metadata: meta

The snapcraft.yaml file includes two parts. The first part is the QML plugin, which includes all the pieces needed to run QML programs from the Ubuntu archive. The second is the copy plugin, which copies our QML files into the snap. We don't have a build system in this example, so copy is all we need; more complex examples could use the cmake or autotools plugins instead.

The last item in the snapcraft.yaml tells Snapcraft where to find the packaging information for Snappy. In the meta directory we have a packages.yaml that is a standard Snappy package file.

name: photoviewer
version: 0.1
vendor: Ted Gould <ted@canonical.com>
frameworks: [mir]
binaries:
  - name: photoviewer
    exec: qmlscene main.qml --
    caps:
      - mir_client
      - network-client

It configures a binary that will be set up by Snappy, which is simply a call to qmlscene with our base QML file. This will then get wrapped up into a single binary in /apps/bin that we can execute.

We now need to turn this directory into a snap. You should follow the instructions to install Snapcraft, and then you can just call it in that directory:

$ snapcraft

There are a few ways to set up a Snappy system; the one that I've used here is QEMU on my development system. That makes it easy to develop and test with, and currently the Mir snap is only available for amd64. After getting Snappy set up, you'll need to grab the Mir framework from the store and install the snap we just built.

$ sudo snappy install mir
$ sudo snappy install --allow-unauthenticated photoviewer_1.0_amd64.snap

You can then run the photoviewer:

$ photoviewer.photoviewer

And you should have something like this on your display:

While this is a simple demo of what can be done with QML, it can be expanded to enable all kinds of devices, from displaying information from a network service to providing the UI for a small IoT device.

August 05, 2015

Report from Akademy 2015

A week has passed since I got back from Akademy, so it’s more than time for a little report.

I’ve enjoyed meeting old and new friends from KDE a lot. Lots of good times shared :)

akademy2015-people (photo by Alex Merry; you can find a lot of other cool photos at this link)

This year I gave a quick little talk presenting the results of my work on GCompris. You can find it, along with all the other recorded talks, on this page, if you didn’t watch them already.

 

I also got to discuss some ideas for things to come, so stay tuned ;)

Thanks a lot to KDE e.V. for the support; that was another awesome experience.

August 04, 2015

Color Curves Matching

Sample points and matching tones

In my previous post on Color Curves for Toning/Grading, I looked at the basics of what the Curves dialog lets you do in GIMP. I had been meaning to revisit the subject with a little more restraint (the color curve in that post was a little rough and gross, but it was for illustration so I hope it served its purpose).

This time I want to look at the use of curves a little more carefully. You’d be amazed at the subtlety that gentle curves can produce in toning your images. Even small changes in your curves can have quite the impact on your final result. For instance, have a look at the four film emulation curves created by Petteri Sulonen (if you haven’t read his page yet on creating these curves, it’s well worth your time):

Dot Original Headshot Original
Dot Portra NC400 Film Portraesque (Kodak Portra NC400 Film)
Dot Fuji Provia Film Proviaesque (Fujichrome Provia)
Dot Fuji Velvia Film Velviaesque (Fujichrome Velvia)
Dot crossprocessed C41 Film Crossprocess (E6 slide film in C-41 neg. processing)

I can’t thank Petteri enough for releasing these curves for everyone to use (for us GIMP users, there is a .zip file at the bottom of his post that contains these curves packaged up). Personally I am a huge fan of the Portraesque curve that he has created. If there is a person in my images, it’s usually my go-to curve as a starting point. It really does generate some wonderful skin tones overall.

The problem in generating these curves is that you have to be very, very familiar with the characteristics of the film stock you are trying to emulate. I never shot Velvia personally, so it is hard for me to have a reference point to start from when attempting to emulate this type of film.

What we can do, however, is to use our personal vision or sense of aesthetic to begin toning our images to something that we like. GIMP has some great tools for helping us to become more aware of color and the effects of each channel on our final image. That is what we are going to explore…

Disclaimer I cannot stress enough that what we are approaching here is an entirely subjective interpretation of what is pleasing to our own eyes. Color is a very complex subject and deserves study to really understand. Hopefully some of the things I talk about here will help pique your interest to push further and experiment!
There is no right or wrong, but rather what you find pleasing to your own eye.

Approximating Tones

What we will be doing is using Sample Points and the Curves dialog to modify the color curves in my image above to emulate something else. It could be another photograph, or even a painting.

I’ll be focusing on the skin tones, but the method can certainly be used for other things as well.

Dot Original Headshot My wonderful model.

With an image you have, begin considering what you might like to approximate the tones on. For instance, in my image above I want to work on the skin tones to see where it leads me.

Now find an image that you like, and would like to approximate the tones from. It helps if the image you are targeting already has tones somewhat similar to what you are starting with (for instance, I would look for another image of Caucasian skin with tones similar to mine, as opposed to Asian skin). Keeping tones at least similar will reduce the violence you’ll do to your final image.

So for my first example, perhaps I would like to use the knowledge that the Old Masters already had about color, and emulate the skin tones from Vermeer’s Girl with a Pearl Earring.

Johannes Vermeer Girl with a Pearl Earring Johannes Vermeer - Girl with a Pearl Earring (1665)

In GIMP I will have my original image already opened, and will then open my target image as a new layer. I’ll pull this layer to one side of my image to give me a view of the areas I am interested in (faces and skin).

Vermeer setup GIMP

I will be using Sample Points extensively as I proceed. Read up on them if you haven’t used them before. They are basically a means of giving you real-time feedback of the values of a pixel in your image (you can track up to four points at one time).

I will put a first sample point somewhere on the higher skin tones of my base image. In this case, I will put one on my model’s forehead (we’ll be moving it around shortly, so somewhere in the neighborhood is fine).

GIMP first sample point

Ctrl + Left Click in the ruler area of your main window (shown in green above), and drag out into your image. There should be crosshairs across your entire image screen showing you where you are dragging.

When you release the mouse button, you’ve dropped a Sample Point onto your image. You can see it in my image above as a small crosshair with the number 1 next to it.

GIMP should open the sample points dialog for you when you create the first point, but if not you can access it from the image menu under:

Windows → Dockable Dialogs → Sample Points

Sample points dialog

This is what the dialog looks like. You can see the RGB pixel data for the first sample point that I have already placed. As you place more sample points, they will each be reflecting their data on this dialog.

You can go ahead and place more sample points on your image now. I’ll place another sample point, but this time I will put it on my target image where the tones seem similar in brightness.

Sample point placed

What I’ll then do is change the data being shown in the Sample Points dialog to show HSV data instead of Pixel data.

Sample points dialog with 2 points

Now, I will shoot for around 85% value on my source image, and try to find a similar value level in similar tones in my target image as well. Once you’ve placed a sample point, you can continue to move it around and see what values it gives you. (If you use another tool in the meantime and can no longer move the sample point, just select the Color Picker Tool to be able to move them again.)

Move the points around your skin tones until you get about the same Value for both points.
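As an aside, if you'd rather check a pixel's Value programmatically, here is a rough equivalent in the GIMP Python-Fu console (Filters → Python-Fu → Console). The coordinates are placeholders; everything else is standard Python-Fu:

# Read the HSV Value (as a percentage) of a pixel, like the
# Sample Points dialog does. Coordinates are placeholders.
import colorsys

image = gimp.image_list()[0]       # most recently opened image
drawable = image.active_drawable

def value_percent(x, y):
    # gimp_drawable_get_pixel returns (num_channels, (r, g, b, ...))
    num_channels, pixel = pdb.gimp_drawable_get_pixel(drawable, x, y)
    r, g, b = [c / 255.0 for c in pixel[:3]]
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return v * 100

print(value_percent(420, 310))     # e.g. ~85.0 for a bright skin tone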

Once you have them, make sure your original image layer is active, then start up the curves dialog.

Colors → Curves…

Now here is something really handy to know while using the Curves dialog: if you hover your mouse over your image, you’ll notice that the cursor is a dropper - you can click and drag on an area of your image, and the corresponding value will show up in your curves dialog for that pixel (or averaged area of pixels if you turn that on).

So click and drag to about the same pixel you chose in your original image for the sample point.

Curve base Curves dialog with a value point (217) for my sampled pixel.

Here is what my working area currently looks like:

GIMP workspace for sample point color matching

I have my curves dialog open, and an area around my sample point chosen so that the values will be visible in the dialog, my images with their associated sample points, and the sample points dialog showing me the values of those points.

The basic idea now is to adjust my RGB channels to get my original image sample point (#1) to match my target image sample point (#2).

Because I selected an area around my sample point with the curves dialog open, I will know roughly where those values need to be adjusted. Let’s start with the Red channel.

First, set the Sample Points dialog back to Pixel to see the RGBA data for that pixel.

GIMP Sample point Red Green Blue matching

We can now see that to match the pixel colors we will need to make some adjustments to each channel. Specifically,

the Red channel will have to come down a bit (218 → 216),

the Green down some as well (188 → 178),

and Blue much more (171 → 155).

You may want to make your Curves dialog window larger so you can control the curves more finely. If we look at the Red channel in my example, we would want to adjust the curve down slightly at the vertical line that shows us where our pixel values are:

Color Curve Adjustment Red

We can adjust the red channel curve along this vertical axis (marked x:217) until our pixel red value matches the target (216).

Then just change over to the green channel and do the same:

Color Curve Adjustment Green

Here we are adjusting the green curve vertically along the axis marked x:190 until our pixel green value matches the target (178).

Finally, follow the same procedure for the blue channel:

Color Curve Adjustment Blue

As before, we adjust along the vertical axis x:173 until our blue channel matches the target (155).

At this point, our first sample point pixel should be the same color as from our target.

The important thing to take away from this exercise is to watch your image as you adjust these channels, and to see what effects they produce. Dropping the green channel should have added a slight magenta cast to your image, and dropping the blue channel should have added yellow to balance it.

Watch your image as you make these changes.

Don’t hit OK on your curves dialog yet!

You’ll want to repeat this procedure, but using some sample points that are darker than the previous ones. Our first sample points had values of about 85%, so now let’s see if we can match pixels down below 50% as well.

Without closing your curves dialog, you should be able to click and drag your sample points around still. So I would set your Sample Points dialog to show you HSV values again, and now drag your first point around on your image until you find some skin that’s in a darker value, maybe around 40-45%.

Once you do, try to find a corresponding value in your target image (or something close at least).

I managed to find skin tones with values around 45% in both of my images:

Color Curve Skin Dark Color Curve Skin Dark RGB

In these darker tones, I can see that the adjustments I will have to make are for:

Red down a bit (116 → 114),

Green bumped up some (60 → 73),

Blue slightly down (55 → 53).

With the curves dialog still active, I then click and drag on my original image until I am in the same area as my sample point again. This gives me my vertical line showing me the value location in my curves dialog, just as before:

Dark tones red Red down to 114.
Dark tones green Green up to 73.
Dark tones blue Blue down to 53.

At this point you should have something similar to the tones of your target image. Here is my image after these adjustments so far:

Results so far GIMP Matching Effects of the curves so far (click to compare to original).

Once you’ve got things in a state that you like, it would be a good idea to save your progress. At the top of the Curves dialog there is a “+” symbol that lets you add the current settings to your favorites, so you can recall them later and continue working on them.

However, your results might not quite look right at the moment. Why not?

Well, the first problem is that Sample Points will only let you sample a single pixel value. There’s a chance that the pixels you pick are not truly representative of the correct skin tones in that range (for instance, you may have inadvertently clicked a pixel that represents a crack in the oil paint). It would be nice if Sample Points offered an adjustable sample radius (if there is such an option, I haven’t found it yet).

The second issue is that points of similar value might be very different colors overall. Hopefully your sources will let you pick from areas that you know are relatively consistent and representative of the tones you want, but that’s not always guaranteed.
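One workaround for the first problem is to average a small neighborhood yourself. A rough sketch for the Python-Fu console, reusing the approach from the earlier snippet (keep the point away from the image edges):

# Average the RGB values in a (2*radius+1) x (2*radius+1) square
# around (x, y) for a more representative sample than a single pixel.
def averaged_sample(drawable, x, y, radius=2):
    totals = [0.0, 0.0, 0.0]
    count = 0
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            num_channels, pixel = pdb.gimp_drawable_get_pixel(drawable, x + dx, y + dy)
            for i in range(3):
                totals[i] += pixel[i]
            count += 1
    return [t / count for t in totals]   # averaged [R, G, B]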

If the results are not quite what you want at the moment, you can do what I will sometimes do and go back to the beginning…

While still keeping the curves dialog open, you can pull your sample points to another location and match the target again. Try choosing another sample point with a similar value to the first one. This time, instead of adding new points to the curve as you make adjustments, just drag the existing points you previously placed.

It’s an Iterative Process

Depending on how interested you are in tweaking your resulting curve, you may find yourself going around a couple of times. That’s ok.

Iterative flowchart

I would recommend keeping your curves to two control points at first. You want your curves to be smooth across the range (any abrupt changes will do strange things to your final image).

If you are doing a couple of iterations, try modifying existing points on your curves instead of adding new ones. It may not be an exact match, but it doesn’t have to be. It only needs to look nice to your eyes.

There is no perfect solution for color matching between images, but we can produce pleasing curves that emulate the results we are looking for.

In Conclusion

I personally have found the process of doing this with different images to be quite instructive in how the curves will affect my image. If you try this out and pay careful attention to what is happening while you do it, I’m hopeful you will come away with a similar appreciation of what these curves will do.

Most importantly, don’t be constrained by what you are targeting, but rather use it as a stepping off point for inspiration and experimentation for your own expression!

I’ll finish with a couple of other examples…

Dot Botticelli Birth of Venus Sandro Botticelli - The Birth of Venus (click to compare to original)
Fa Presto - St. Michael (click to compare to original)

And finally, as promised, here’s the video tutorial that steps through everything I’ve explained above:

From a request, I’ve packaged up some of the curves from this tutorial (Pearl Earring, St. Michael, the previous Orange/Teal Hell, and another I was playing with from a Norman Rockwell painting): Download the Curves (7zip .7z)

FPV Addicts

I’ve started doing longer edits of the 15 second clips I usually put on Instagram. I’ve been really creative with the naming so far.

FPV Addicts

FPV Addict

August 03, 2015

Monthly Drawing Challenge August 2015

(by jmf)

The 6th iteration of the Monthly Drawing Challenge is taking place on the Krita Forums!

This month’s topic is… Ancient

To enter, post your picture on the August drawing challenge thread. The deadline is August 24, 2015. You are free to interpret the topic in any way. Let your imagination run free. The winner is decided through a poll running 7 days after the deadline. The winner gets the privilege to choose next month’s topic. You can use the hashtag #kritachallenge to talk about this challenge on social media.

I started the challenge in February 2015 with two goals: to give people motivation to draw, and to give them a way to get rid of the “blank canvas syndrome”. The challenge is not about winning! It is about making art, trying something new, and getting inspired.

 

Last month’s winner: “Love at First Flight” by scottyp.

love at first flight

July 31, 2015

Fri 2015/Jul/31

  • I've been making a little map of things in Göteborg, for GUADEC. The red markers are for the main venue (Folkets Hus) and the site of the BOFs (IT University). There's a ferry line to get from near Folkets Hus to the IT University. The orange marker is the Liseberg amusement park where the roller coasters are. The blue markers are some hotels.

    Go here if you cannot see the map.

July 30, 2015

New Discuss Categories and Logging In

Software, Showcase, and Critiques. Oh My!

Hot on the heels of our last post about welcoming G’MIC to the forums at discuss.pixls.us, I thought I should speak briefly about some other additions I’ve recently made.

These were tough for me to finally make a decision about. I want to be careful not to go crazy with over-categorization. At the same time, I do want to make good logical breakdowns that are still intuitive to people.

Here is what the current category breakdown looks like for discuss:

  • PIXLS.US
    The comment/posts from articles/blogposts here on the main site.
  • Processing
    Processing and managing images after they’ve been captured.
  • Capturing
    Capturing an image and the ways we go about doing it.
  • Showcase
  • Critique
  • Meta
    Discussions related to the website or the forum itself.
    • Help!
      Help with the website or forums.
  • Software
    Discussions about various software in general.

Along with the addition of the Software category (and the G’MIC subcategory), I decided that the Help! category would make more sense under the Meta category. That is, the Help! section is for website/forum help, which is more of a Meta topic (hence moving it).

Software

As we’ve already seen, there is now a Software category for all discussions about the various software we use. The first sub-category is, of course, the G’MIC subcategory.

F/OSS Project Logos

I am open to creating more sub-categories as needed to support particular software projects (GIMP, darktable, RawTherapee, etc.), but I will wait until there is some interest before adding them.

Showcase

This category had some interest from members and I agree that it’s a good idea. It’s intended as a place for members to showcase the works they’re proud of and to hopefully serve as a nice example of what we’re capable of producing using F/OSS tools.

A couple of examples from the Showcase category so far:

Filmulator Output Example, by Carlo Vaccari New Life, Filmulator Output Sample, by CarVac
Mairi Troisieme, by Pat David Mairi Troisième by Pat David (cbna)

This category may also serve later as a place to store submissions for a rotating lede image on the main page of the site.

Critique

This is intended as a place for members to solicit advice and critiques on their works from others. It took me a little work to come up with an initial take on the overall description for the category.

I can promise that I will do my best to give honest and constructive feedback to anyone that asks in this category. I also promise to do my best to make sure that no post goes un-answered here (I know how beneficial feedback has been to me in the past, so it’s the least I could do to help others out in return).

Discuss Login Options

I also bit the bullet this week and finally caved in and signed up for a Facebook account. The only reason is that I had to have a personal account to get an API key to allow people to log in using their FB account (with OAuth).

discuss.pixls.us login options We can now use Google, Facebook, Twitter, and Yahoo! to Log In.

On the plus side, we now accept four different methods of logging in automatically, along with signing up for a normal account. I have been trying to make it as frictionless as possible to join the conversation, and hopefully this most recent addition (FB) will help in some small way.

Oh, and if you want to add me on Facebook, my profile can be found here. I also took the time to create a page for the site here: PIXLS.US on Facebook.

released darktable 1.6.8

We are happy to announce that darktable 1.6.8 has been released.

The release notes and relevant downloads can be found attached to this git tag:
https://github.com/darktable-org/darktable/releases/tag/release-1.6.8
Please only use our provided packages ("darktable-1.6.8.*" tar.xz and dmg) not the auto-created tarballs from github ("Source code", zip and tar.gz). The latter are just git snapshots and will not work! Here are the direct links to tar.xz and dmg:
https://github.com/darktable-org/darktable/releases/download/release-1.6.8/darktable-1.6.8.tar.xz
https://github.com/darktable-org/darktable/releases/download/release-1.6.8/darktable-1.6.8.dmg

this is a point release in the stable series. the sha256sum is

sha256sum darktable-1.6.8.tar.xz
b676f81bd8cc661a8f76e03ad449da4444f770b6bec3e9accf013c636f690905
sha256sum darktable-1.6.8.dmg
ec4b1ad797ea7a483d7fc94724de99a1d18da7d7f75071220e1d313e0a4d8a53

and as always, please don't use the tarballs provided by github (marked as "Source code").

changes

  • clipping, sanity check for custom aspect ratios
  • read lensmodel from xmp
  • handle canon lens recognition special case
  • general cleanups

rawspeed

  • Canon EOS M3
  • Canon EOS 5Ds (R)
  • Nikon 1 J5
  • Panasonic DMC-G7 (4:3 aspect ratio only)
  • Fujifilm X-T10
  • Pentax K-S2
  • Panasonic TZ71
  • Olympus TG-4
  • Leica VLUX1 4:3 aspect ratio mode

standard color matrices

  • Canon EOS M3
  • Canon EOS 5Ds (R)
  • Nikon 1 J5
  • Panasonic DMC-G7
  • Fujifilm X-T10
  • Pentax K-S2
  • Olympus TG-4

white balance presets

  • Samsung NX500
  • Panasonic TZ71

noise profiles

  • Sony ILCE-5100
  • Fujifilm HS50EXR
  • Canon EOS 5Ds R

So now go out, enjoy the summer and take a lot of photos!

A good week for critters

It's been a good week for unusual wildlife.

[Myotis bat hanging just outside the front door] We got a surprise a few nights ago when flipping the porch light on to take the trash out: a bat was clinging to the wall just outside the front door.

It was tiny, and very calm -- so motionless we feared it was dead. (I took advantage of this to run inside and grab the camera.) It didn't move at all while we were there. The trash mission accomplished, we turned out the light and left the bat alone. Happily, it wasn't ill or dead: it was gone a few hours later.

We see bats fairly regularly flying back and forth across the patio early on summer evenings -- insects are apparently attracted to the light visible through the windows from inside, and the bats follow the insects. But this was the first close look I'd had at a stationary bat, and my first chance to photograph one.

I'm not completely sure what sort of bat it is: almost certainly some species of Myotis (mouse-eared bats), and most likely M. yumanensis, the "little brown bat". It's hard to be sure, though, as there are at least six species of Myotis known in the area.

[Woodrat released from trap] We've had several woodrats recently try to set up house near the house or in the engine compartment of our Rav4, so we've been setting traps regularly. Though woodrats are usually nocturnal, we caught one in broad daylight as it explored the area around our garden pond.

But the small patio outside the den seems to be a particular draw for them, maybe because it has a wooden deck with a nice dark space under it for a rat to hide. We have one who's been leaving offerings -- pine cones, twigs, leaves -- just outside the door (and less charming rat droppings nearby), so one night Dave set three traps all on that deck. I heard one trap clank shut in the middle of the night, but when I checked in the morning, two traps were sprung without any occupants and the third was still open.

But later that morning, I heard rattling from outside the door. Sure enough, the third trap was occupied and the occupant was darting between one end and the other, trying to get out. I told Dave we'd caught the rat, and we prepared to drive it out to the parkland where we've been releasing them.

[chipmunk caught in our rat trap] And then I picked up the trap, looked in -- and discovered it was a pretty funny looking woodrat. With a furry tail and stripes. A chipmunk! We've been so envious of the folks who live out on the canyon rim and are overloaded with chipmunks ... this is only the second time we've seen here, and now it's probably too spooked to stick around.

We released it near the woodpile, but it ran off away from the house. Our only hope for its return is that it remembers the nice peanut butter snack it got here.

[Baby Great Plains skink] Later that day, we were on our way out the door, late for a meeting, when I spotted a small lizard in the den. (How did it get in?) Fast and lithe and purple-tailed, it skittered under the sofa as soon as it saw us heading its way.

But the den is a small room and the lizard had nowhere to go. After upending the sofa and moving a couple of tables, we cornered it by the door, and I was able to trap it in my hands without any damage to its tail.

When I let it go on the rocks outside, it calmed down immediately, giving me time to run for the camera. Its gorgeous purple tail doesn't show very well, but at least the photo was good enough to identify it as a juvenile Great Plains skink. The adults look more like Jabba the Hutt, nothing like the lovely little juvenile we saw. We actually saw an adult this spring (outside), when we were clearing out a thick weed patch and disturbed a skink from its hibernation. And how did this poor lizard get saddled with a scientific name like Eumeces obsoletus?

July 27, 2015

3D printing Poe

I helped print this statue of Edgar Allan Poe, through “We the Builders“, who coordinate large-scale crowd-sourced 3D print jobs:

Poe's Face

You can see one of my parts here on top, with “-Kees” on the piece with the funky hair strand:

Poe's Hair

The MakerWare I run on Ubuntu works well. I wish they were correctly signing their repositories. Even if I use non-SSL to fetch their key, as their Ubuntu/Debian instructions recommend, it still doesn’t match the packages:

W: GPG error: http://downloads.makerbot.com trusty Release: The following signatures were invalid: BADSIG 3D019B838FB1487F MakerBot Industries dev team <dev@makerbot.com>

And it’s not just my APT configuration:

$ wget http://downloads.makerbot.com/makerware/ubuntu/dists/trusty/Release.gpg
$ wget http://downloads.makerbot.com/makerware/ubuntu/dists/trusty/Release
$ gpg --verify Release.gpg Release
gpg: Signature made Wed 11 Mar 2015 12:43:07 PM PDT using RSA key ID 8FB1487F
gpg: requesting key 8FB1487F from hkp server pgp.mit.edu
gpg: key 8FB1487F: public key "MakerBot Industries LLC (Software development team) <dev@makerbot.com>" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
gpg: BAD signature from "MakerBot Industries LLC (Software development team) <dev@makerbot.com>"
$ grep ^Date Release
Date: Tue, 09 Jun 2015 19:41:02 UTC

Looks like they’re updating their Release file without updating the signature file. (The signature is from March, but the Release file is from June. Oops!)

© 2015, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Basic Color Curves

An introduction and simple color grading/toning

Color has this amazing ability to evoke emotional responses from us. From the warm glow of a sunny summer afternoon to a cool refreshing early evening in fall. We associate colors with certain moods, places, feelings, and memories (consciously or not).

Volumes have been written on color, and I am in no way even remotely qualified to speak on it. So I won’t.

Instead, we are going to take a look at the use of the Curves tool in GIMP. Even though GIMP is used to demonstrate these ideas, the principles are generic to just about any RGB curve adjustments.

Your Pixels and You

First there’s something you need to consider if you haven’t before, and that’s what goes into representing a colored pixel on your screen.

PIXLS.US House Zoom Example Open up an image in GIMP.
PIXLS.US House Zoom Example Now zoom in.
PIXLS.US House Zoom Example Nope - don’t be shy now, zoom in more!
PIXLS.US House Zoom Example Aaand there’s your pixel. So let’s investigate what goes into making your pixel.

Remember, each pixel is represented by a combination of 3 colors: Red, Green, and Blue. In GIMP (currently at 8-bit), that means that each RGB color can have a value from 0 - 255, and combining these three colors with varying levels in each channel will result in all the colors you can see in your image.

If all three channels have a value of 255, the resulting color will be pure white. If all three channels have a value of 0, the resulting color will be pure black.

If all three channels have the same value, then you will get a shade of gray (128,128,128 would be a middle gray color for instance).

So now let’s see what goes into making up your pixel:

GIMP Color Picker Pixel View The RGB components that mix into your final blue pixel.

As you can see, there is more blue than anything else (it is a blue-ish pixel, after all), followed by green, then a dash of red. If we were to change the values of each channel but kept the ratios between Red, Green, and Blue the same, we would keep the same color and just lighten or darken the pixel by some amount.
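You can convince yourself of this with a few lines of Python (the pixel values here are made up for the example):

# Scaling R, G and B by the same factor changes lightness, not hue.
import colorsys

r, g, b = 94, 119, 171          # a made-up blue-ish pixel
h1, s1, v1 = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

k = 0.8                         # darken by 20%, keeping the ratios
h2, s2, v2 = colorsys.rgb_to_hsv(r * k / 255.0, g * k / 255.0, b * k / 255.0)

print(abs(h1 - h2) < 1e-9)      # True: same hue
print(v2 < v1)                  # True: darker pixel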

Curves: Value

So let’s leave your pixel alone for the time being, and actually have a look at the Curves dialog. I’ll be using this wonderful image by Eric from Flickr.

Hollow Moon by Eric qsimple Flickr Hollow Moon by qsimple/Eric on Flickr. (cbna)

Opening up my Curves dialog shows me the following:

GIMP Base Curves Dialog

We can see that I start off with the curve for the Value of the pixels. I could also use the drop down for “Channel” to change to red, green or blue curves if I wanted to. For now let’s look at Value, though.

In the main area of the dialog I am presented with a linear curve, behind which I will see a histogram of the value data for the entire image (showing the amount of each value across my image). Notice a spike in the high values on the right, and a small gap at the brightest values.

GIMP Base Curves Dialog Input Output

What we can do right now is to adjust the values of each pixel in the image using this curve. The best way to visualize it is to remember that the bottom range from black to white represents the current value of the pixels, and the left range is the value to be mapped to.

To show an example of how this curve will affect your image, suppose I wanted to remap all the midtone values in the image to make them lighter. I can do this by clicking on the curve near the midtones, and dragging the curve higher in the Y direction:

GIMP Base Curves Dialog Push Midtones

What this curve does is take the values around the midtones and push them to be much lighter than they were. In this case, values around 128 were re-mapped to now be closer to 192.

Because the curve is set to Smooth, there will be a gradual transition for all the tones surrounding my point to be pulled in the same direction (this makes for a smoother fall-off as opposed to an abrupt change at one value). Because there is only a single point in the curve right now, this means that all values will be pulled higher.

Hollow Moon Example Pushed Midtones The results of pushing the midtones of the value curve higher (click to compare to original).
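Under the hood, a curve like this is just a lookup table from input value to output value. Here is a small numpy sketch of the same midtone push; note it uses linear interpolation between the control points, whereas GIMP's Smooth mode fits a smooth curve through them:

# The "push the midtones" curve as a lookup table:
# control points (0 -> 0), (128 -> 192), (255 -> 255).
import numpy as np

xs = [0, 128, 255]   # input values
ys = [0, 192, 255]   # output values they map to
lut = np.interp(np.arange(256), xs, ys).astype(np.uint8)

print(lut[128])      # 192: the midtones are lifted
# For an 8-bit RGB image as a (height, width, 3) array: out = lut[img]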

Care should be taken when fiddling with these curves to not blow things out or destroy detail, of course. I only push the curves here to illustrate what they do.

A very common curve adjustment you may hear about is to apply a slight “S” curve to your values. The effect of this curve would be to darken the dark tones, and to lighten the light tones - in effect increasing global contrast on your image. For instance, if I click on another point in the curves, and adjust the points to form a shape like so:

GIMP Base Curves Dialog S shaped curve A slight “S” curve

This will now cause dark values to become even darker, while the light values get a small boost. The curve still passes through the midpoint, so middle tones will stay closer to what they were.

Hollow Moon Example S curve applied Slight “S” curve increases global contrast (click for original).
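Viewed as a lookup table again, a gentle "S" curve pulls the darks down and pushes the lights up while pinning the midpoint (these control points are made up for illustration):

# A gentle "S" curve: darks down, lights up, midpoint unchanged.
import numpy as np

xs = [0, 64, 128, 192, 255]
ys = [0, 48, 128, 208, 255]
s_curve = np.interp(np.arange(256), xs, ys).astype(np.uint8)

print(s_curve[64], s_curve[128], s_curve[192])   # 48 128 208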

In general, I find it easiest to think in terms of which regions of the curve affect different tones in your image. Here is a quick way to visualize it (this is true for value as well as RGB curves):

GIMP Base Curves darks mids lights zones

If there is one thing you take away from reading this, let it be the image above.

Curves: Colors

So how does this apply to other channels? Let’s have a look.

The exact same theory applies in the RGB channels as it did with values. The relative positions of the darks, midtones, and lights are still the same in the curve dialog. The primary difference now is that you can control the contribution of color in specific tonal regions of your image.

Value, Red, Green, Blue channel picker.

You choose which channel you want to adjust from the “Channel” drop-down.

To begin demonstrating what happens here, it helps to have an idea of the general effect you would like to apply to your image. This is often the hardest part of adjusting the color tones if you don’t have a clear idea to start with.

For example, perhaps we want to “cool” down the shadows of our image. “Cool” shadows are commonly seen during the day in areas out of direct sunlight: the light that falls there is mostly reflected from a blue-ish sky, so the shadows trend slightly more blue.

To try this, let’s adjust the Blue channel to be a little more prominent in the darker tones of our image, but to get back to normal around the midtones and lighter.

Pushing up blues in darker tones (click for original).
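
If you wanted to script this move, it would be the same hypothetical gimp-curves-spline call from earlier, just aimed at the blue channel (the numbers, again, are only illustrative):

  ; blue boosted in the darks (64 -> 88), pinned back to normal from the midtones up
  (gimp-curves-spline drawable HISTOGRAM-BLUE 8 #(0 0 64 88 128 128 255 255))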

Now, here’s a question: having “cooled” the darker tones with more blue, what if I also wanted to “warm” the lighter tones by adding a little yellow?

Well, there’s no “Yellow” curve to modify, so how to approach that? Have a look at this HSV color wheel below:

The thing to look out for here is that opposite your blue tones on this wheel, you’ll find yellow. In fact, for each of the Red, Green, and Blue channels, the opposite color on the wheel tells you what an absence of that channel will push your image towards. So remember:

Red ↔ Cyan, Green ↔ Magenta, Blue ↔ Yellow

What this means while you are manipulating curves is that if you drag the blue curve up, you will boost the blues in that region of your image. If you instead drag the blue curve down, you will be removing blues (which is the same as boosting the yellows in that region of your image).

So to boost the blues in the dark tones, but increase the yellow in the lighter tones, you could create a sort of “reverse” S-curve in the blue channel:

Boost blues in darks, boost yellow in high tones (click for original).
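
Scripted, the reverse “S” is simply the earlier S-curve flipped and moved to the blue channel (illustrative numbers once more):

  ; blue up in the darks (64 -> 80), down in the lights (192 -> 176) to warm them with yellow
  (gimp-curves-spline drawable HISTOGRAM-BLUE 8 #(0 0 64 80 192 176 255 255))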

In the green channel for instance, you can begin to introduce more magenta into the tones by decreasing the curve. So dropping the green curve in the dark tones, and letting it settle back to normal towards the high tones will produce results like this:

Suppressing the green channel in darks/mids adds a bit of magenta
(click for original).

In isolation, these curves are fun to play with, but I think that perhaps walking through some actual examples of color toning/grading would help to illustrate what I’m talking about here. I’ll choose a couple of common toning examples to show what happens when you begin mixing all three channels up.

Color Toning/Grading

Orange and Teal Hell

I use the (cinema film) term color grading here because the first adjustment we will look at to illustrate curves is a horrible Hollywood trend that is best described by Todd Miro on his blog.

Grading is a term for color toning on film, and Todd’s post is a funny look at the prevalence of orange and teal in modern film palettes. So it’s worth a look just to see how silly this is (and hopefully to raise awareness of the obnoxiousness of this practice).

The general thought here is that caucasian skin tones trend towards orange, and if you have a look at a complementary color on the color wheel, you’ll notice that directly opposite orange is a teal color.

Screenshot from Kuler borrowed from Todd.

If you don’t already know about it, Adobe has a fantastic online tool for color visualization and palette creation called Kuler (now Adobe Color CC). It lets you build colors based on some classic rules, or even generate a color palette from images. Well worth a visit and a fantastic bookmark for fiddling with color.

So a quick look at the desired effect would be to keep/boost the skin tones into a sort of orange-y pinkish color, and to push the darker tones into a teal/cyan combination. (Colorists on films tend to use a Lift, Gamma, Gain model, but we’ll just try this out with our curves here).

Quick disclaimer - I am purposefully exaggerating these modifications to illustrate what they do. Like most things, moderation and restraint will go a long way towards not causing your viewers’ eyeballs to bleed. Remember - light touch!

So I know that I want to see my skin tones head into an orange-ish color. In my image the skin tones are in the upper mids/low highs range of values, so I will start around there.

What I’ve done is put a point around the low midtones to anchor the curve closer to normal for those tones. This lets me fiddle with the red channel and isolate the change roughly to the mid and high tones only. The skin tones in this image fall in the red channel toward the upper end of the mids, so I’ve boosted the reds there. Things may look a little weird at first:

If you look back at the color wheel, you’ll notice that between red and green there is yellow, and if you go a bit closer towards red, the yellow turns into more of an orange. What this means is that if we add some more green to those same tones, the overall colors will start to shift towards orange.

So we can switch to the green channel now, put a point in the lower midtones again to hold things around normal, and slightly boost the green. Don’t boost it all the way to the reds, but about 2/3rds or so to taste.

This puts a little more red/orange-y color into the tones around the skin. You could further adjust this by perhaps including a bit more yellow as well. To do this, I would again put an anchor point in the low mid tones on the blue channel, then slightly drop the blue curve in the upper tones to introduce a bit of yellow.

Remember, we’re experimenting here so feel free to try things out as we move along. I may consider the upper tones to be finished at the moment, and now I would want to look at introducing a more blue/teal color into the darker tones.

I can start by boosting a bit of blues in the dark tones. I’m going to use the anchor point I already created, and just push things up a bit.

Now I want to make the darker tones a bit more teal in color. Remember the color wheel - teal is the absence of red - so we will drop down the red channel in the lower tones as well.

And finally to push a very slight magenta into the dark tones as well, I’ll push down the green channel a bit.

If I wanted to go a step further, I could also put an anchor point up close to the highest values to keep the brightest parts of the image closer to a white instead of carrying over a color cast from our previous operations.

If your previous operations also darkened the image a bit, you could also now revisit the Value channel, and make modifications there as well. In my case I bumped the midtones of the image just a bit to brighten things up slightly.

Finally, we end up at something like this.

After fooling around a bit - disgusting, isn’t it?
(click for original).

I am exaggerating things here to illustrate a point. Please don’t do this to your photos. :)
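
For the curious, the whole exaggerated grade can be sketched as a single batch script. Every control point below is a made-up stand-in for the moves described above - not the exact values from my curves file - and it assumes GIMP 2.8 and an input named in.jpg:

  gimp -i -b '(let* ((image (car (gimp-file-load RUN-NONINTERACTIVE "in.jpg" "in.jpg")))
                     (drawable (car (gimp-image-get-active-drawable image))))
                ; red: down in the darks (toward teal), up in the upper mids (orange skin)
                (gimp-curves-spline drawable HISTOGRAM-RED 8 #(0 0 64 48 192 216 255 255))
                ; green: slight drop in the darks (magenta), partial boost up high (about 2/3 of red)
                (gimp-curves-spline drawable HISTOGRAM-GREEN 8 #(0 0 64 56 192 208 255 255))
                ; blue: up in the darks (teal/cyan), down in the highs (yellow warmth)
                (gimp-curves-spline drawable HISTOGRAM-BLUE 8 #(0 0 64 80 192 176 255 255))
                ; value: small midtone bump to win back some brightness
                (gimp-curves-spline drawable HISTOGRAM-VALUE 6 #(0 0 128 140 255 255))
                (gimp-image-flatten image)
                (file-png-save RUN-NONINTERACTIVE image
                               (car (gimp-image-get-active-drawable image))
                               "graded.png" "graded.png" 0 9 1 1 1 1 1))' -b '(gimp-quit 0)'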

If you’d like to download the curves file of the results we reached above, get it here:
Orange Teal Hell Color Curves

Conclusion

Remember, think about what the color curves represent in your image to help you achieve your final results. Begin looking at the different tonalities in your image and how you’d like them to appear as part of your final vision.

For even more fun - realize that the colors in your images can help to evoke emotional responses in the viewer, and adjust things accordingly. I’ll leave it as an exercise for the reader to determine some of the associations between colors and different emotions.

Sun 2015/Jul/26

  • An inlaid GNOME logo, part 5

    This part in Spanish

    (Parts 1, 2, 3, 4)

    This is the shield right after it came out of the clamps. I had to pry it a bit from the clamped board with a spatula.

    Unclamped shield

    I cut out the shield shape by first sawing the straight sections, and then using a coping saw on the curved ones.

    Sawing straight edges

    Coping the curves

    All cut out

    I used a spokeshave to smooth the convex curves on the sides.

    Spokeshave for the curves

    The curves on the top are concave, and the spokeshave doesn't fit. I used a drawknife for those.

    Drawknife for the tight curves

    This gives us crisp corners and smooth curves throughout.

    Crisp corner

    On to planing the face flat! I sharpened my plane irons...

    Sharp plane iron

    ... and planed carefully. The cutoff from the top of the shield was useful as a support against the planing stop.

    Starting to plane the shield

    The foot shows through once the paper is planed away...

    Foot shows through the paper

    Check out the dual-color shavings!

    Dual-color shavings

    And we have a flat board once again. That smudge at the top of the sole is from my dirty fingers — dirty with metal dust from the sharpening step — so I washed my hands and planed the dirt away.

    Flat shield

    The mess after planing

    But it is too flat. So, I scribed a line all around the front and edges, and used the spokeshave and drawknife again to get a 45-degree bevel around the shield. The line is a bit hard to see in the first photo, but it's there.

    Scribed lines for bevel

    Beveling with a spokeshave

    Final bevel around the shield

    Here is the first coat of boiled linseed oil after sanding. When it dries I'll add some coats of shellac.

    First coat of linseed oil

Trackpad workarounds: using function keys as mouse buttons

I've had no end of trouble with my Asus 1015E's trackpad. A discussion of laptops on a mailing list -- in particular, someone's concerns that the nifty-looking Dell XPS 13, which is available preloaded with Linux, has had reviewers say that the trackpad doesn't work well -- reminded me that I'd never posted my final solution.

The Asus's trackpad has two problems. First, it's super sensitive to taps, so if any part of my hand gets anywhere near the trackpad while I'm typing, suddenly it sees a mouse click at some random point on the screen, and instead of typing into an emacs window suddenly I find I'm typing into a live IRC client. Or, worse, instead of typing my password into a password field, I'm typing it into IRC. That wouldn't have been so bad on the old style of trackpad, where I could just turn off taps altogether and use the hardware buttons; this is one of those new-style trackpads that doesn't have any actual buttons.

Second, two-finger taps don't work. Three-finger taps work just fine, but two-finger taps: well, I found when I wanted a right-click (which is what two-fingers was set up to do), I had to go TAP, TAP, TAP, TAP maybe ten or fifteen times before one of them would finally take. But by the time the menu came up, of course, I'd done another tap and that canceled the menu and I had to start over. Infuriating!

I struggled for many months with synclient's settings for tap sensitivity and right and left click emulation. I tried enabling syndaemon, which is supposed to disable clicks as long as you're typing then enable them again afterward, and spent months playing with its settings, but in order to get it to work at all, I had to set the timeout so long that there was an infuriating wait after I stopped typing before I could do anything.

I was on the verge of giving up on the Asus and going back to my Dell Latitude 2120, which had an excellent trackpad (with buttons) and the world's greatest 10" laptop keyboard. (What the Dell doesn't have is battery life, and I really hated to give up the Asus's light weight and 8-hour battery life.) As a final, desperate option, I decided to disable taps completely.

Disable taps? Then how do you do a mouse click?

I theorized, with all Linux's flexibility, there must be some way to get function keys to work like mouse buttons. And indeed there is. The easiest way seemed to be to use xmodmap (strange to find xmodmap being the simplest anything, but there you go). It turns out that a simple line like

  xmodmap -e "keysym F1 = Pointer_Button1"
is most of what you need. But to make it work, you need to enable "mouse keys":
  xkbset m

But for reasons unknown, mouse keys will expire after some set timeout unless you explicitly tell it not to. Do that like this:

  xkbset exp =m

Once that's all set up, you can disable single-finger taps with synclient:

  synclient TapButton1=0
Of course, you can disable 2-finger and 3-finger taps by setting them to 0 as well. I don't generally find them a problem (they don't work reliably, but they don't fire on their own either), so I left them enabled.
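
If you want to check what the taps are currently set to before changing anything, synclient can also list the live values (this assumes the synaptics driver, which provides synclient, is what's driving your trackpad):

  synclient -l | grep TapButton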

I tried it and it worked beautifully for left click. Since I was still having trouble with that two-finger tap for right click, I put that on a function key too, and added middle click while I was at it. I don't use function keys much, so devoting three function keys to mouse buttons wasn't really a problem.

In fact, it worked so well that I decided it would be handy to have an additional set of mouse keys over on the other side of the keyboard, to make it easy to do mouse clicks with either hand. So I defined F1, F2 and F3 as one set of mouse buttons, and F10, F11 and F12 as another.

And yes, this all probably sounds nutty as heck. But it really is a nice laptop aside from the trackpad from hell; and although I thought Fn-key mouse buttons would be highly inconvenient, it took surprisingly little time to get used to them.

So this is what I ended up putting in my .config/openbox/autostart file. I wrap it in a test for hostname, since I like to be able to use the same configuration file on multiple machines, but I don't need this hack on any machine but the Asus.

# Only needed on the Asus (hostname "iridum"); other machines keep taps enabled.
if [ $(hostname) == iridum ]; then
  # Disable 1-finger taps; keep 2-finger tap as right click (button 3)
  # and 3-finger tap as middle click (button 2).
  synclient TapButton1=0 TapButton2=3 TapButton3=2 HorizEdgeScroll=1

  # Left-hand set of mouse buttons:
  xmodmap -e "keysym F1 = Pointer_Button1"
  xmodmap -e "keysym F2 = Pointer_Button2"
  xmodmap -e "keysym F3 = Pointer_Button3"

  # Right-hand set:
  xmodmap -e "keysym F10 = Pointer_Button1"
  xmodmap -e "keysym F11 = Pointer_Button2"
  xmodmap -e "keysym F12 = Pointer_Button3"

  # Enable mouse keys, and stop them from expiring after a timeout.
  xkbset m
  xkbset exp =m
else
  synclient TapButton1=1 TapButton2=3 TapButton3=2 HorizEdgeScroll=1
fi

July 24, 2015

Fri 2015/Jul/24

  • An inlaid GNOME logo, part 4

    This part in Spanish

    (Parts 1, 2, 3)

    In the last part, I glued the paper templates for the shield and foot onto the wood. Now comes the part that is hardest for me: excavating the foot pieces in the dark wood so the light-colored ones can fit in them. I'm not a woodcarver, just a lousy joiner, and I have a lot to learn!

    The first part is not a problem: use a coping saw to separate the foot pieces.

    Foot pieces, cut out

    Next, for each part of the foot, I started with a V-gouge to make an outline that will work as a stop cut. Inside this shape, I used a curved gouge to excavate the wood. The stop cut prevents the gouge from going past the outline. Finally, I used the curved gouge to get as close as possible to the final line.

    V channel as a stop cut Excavating inside the channel

    Each wall needs squaring up, as the curved gouge leaves a chamfered edge instead of a crisp angle. I used the V-gouge around each shape so that one of the edges of the gouge remains vertical. I cleaned up the bottom with a combination of chisels and a router plane where it fits.

    Square walls

    Then, each piece needs to be adjusted to fit. I sanded the edges to have a nice curve instead of the raw edges from the coping saw. Then I put a back bevel on each piece, using a carving knife, so the back part will be narrower than the front. I had to also tweak the walls in the dark wood in some places.

    Unadjusted piece Sanding the curves Beveling the edges

    After a lot of fiddling, the pieces fit — with a little persuasion — and they can be glued. When the glue dries I'll plane them down so that they are level to the dark wood.

    Gluing the pieces Glued pieces

    Finally, I clamped everything against another board to distribute the pressure. Let's hope for the best.

    Clamped

July 22, 2015

Welcome G'MIC

Moving G'MIC to a modern forum

Anyone who’s followed me for a while likely knows that I’m friends with G’MIC (GREYC’s Magic for Image Computing) creator David Tschumperlé. With David’s help I was also able to release all of my film emulation presets on G’MIC for everyone to use, and we collaborated on a bunch of different fun processing filters for photographers in G’MIC (split details/wavelet decompose, freaky details, film emulation, mean/median averaging, and more).

David, by Me (at LGM2014)

It’s also David that helped me by writing a G’MIC script to mean average images for me when I started making my amalgamations (thus moving me away from my previous method of using ImageMagick):

Mad Max Fury Road Trailer 2 - Amalgamation

So when the forums here on discuss.pixls.us were finally up and running, it only made sense to offer G’MIC its own part of the forums. The G’MIC folks had previously been using a combination of Flickr groups and gimpchat.com. Those are great forums, they were just a little cumbersome to use.

You can find the new G’MIC category here. Stop in and say hello!

I’ll also be porting over the tutorials and articles on work we’ve collaborated on soon (freaky details, film emulation).

Congratulations

To the winners of the Open Source Photography Course Giveaway

I compiled the list of entries this afternoon across the various social networks and let random.org pick an integer in the domain of all of the entries…

So a big congratulations goes out to:

Denny Weinmann (Facebook, @dennyweinmann, Google+ )
and
Nathan Haines (@nhaines, Google+)

I’ll be contacting you shortly (assuming you don’t read this announcement here first…)! I will need a valid email address from you both in order to send your download links. You can reach me at pixlsus@pixls.us.

Thank you to everyone who shared the post to help raise awareness! The lessons are still on sale until August 1st for $35USD over on Riley’s site.

1.2.0-beta.0 released

Hello everyone. I’m really happy to announce the first proper pre-release of MyPaint 1.2.0 for those brave early-bird testers out there.

You can download it from https://github.com/mypaint/mypaint/releases/tag/v1.2.0-beta.0.

Windows and Ubuntu binaries are available, all signed, and the traditional signed canonical source tarball is there too for good measure. Sorry about the size of the Windows installers – we need to package all of GTK/GDK and Python on that platform too.

(Don’t forget: if you find the translations lacking for your languages, you can help fix mistakes before the next beta over at https://hosted.weblate.org/engage/mypaint/ )

July 20, 2015

Plugging in those darned USB cables

I'm sure I'm not the only one who's forever trying to plug in a USB cable only to find it upside down. And then I flip it and try it the other way, and that doesn't work either, so I go back to the first side, until I finally get it plugged in, because there's no easy way to tell visually which way the plug is supposed to go.

It's true of nearly all of the umpteen variants of USB plug: almost all of them differ only subtly from the top side to the bottom.

[USB trident] And to "fix" this, USB cables are built so that they have subtly raised indentations which, if you hold them to the light just right so you can see the shadows, say "USB" or have the little USB trident on the top side:


In an art store a few weeks ago, Dave had a good idea.

[USB cables painted for orientation] He bought a white paint marker, and we've used it to paint the logo side of all our USB cables.

Tape the cables down on the desk -- so they don't flop around while the paint is drying -- and apply a few dabs of white paint to the logo area of each connector. If you're careful you might be able to fill in the lowered part so the raised USB symbol stays black; or to paint only the raised USB part. I tried that on a few cables, but after the fifth or so cable I stopped worrying about whether I was ending up with a pretty USB symbol and just started dabbing paint wherever was handy.

The paint really does make a big difference. It's much easier now to plug in USB cables, especially micro USB, and I never go through that "flip it over several times" dance any more.

July 19, 2015

Windows porting

I’m hoping that MyPaint will be able to support Windows fully starting with the first v1.2.0-beta release. This is made possible by the efforts of our own Windows porters and testers, and the dedicated folks who keep MSYS2 working so well.

We'll be using an Inno Setup installer starting with the 1.2.0-beta releases. Releases will start happening shortly (date TBA) on Github, and you’ll be able to pull down installer binaries for 32-bit and 64-bit Windows as part of this.

If you’re interested in the workings of the installer build, and would like to test it and help it improve, it’s all documented and scripted in the current github master. Please be aware that SourceForge downloads are involved during the build procedure until MSYS2 fix that. Our own binaries and installers will never be knowingly distributed – by us – through SourceForge or any similar crapware bundling site.

Discussion thread on the forums.

July 15, 2015

The Open Source Photography Course

A chance to win a free copy

Photographer Riley Brandt recently released his Open Source Photography Course. I managed to get a little bit of his time to answer some questions for us about his photography and the course itself. You can read the full interview right here:

A Q&A with Photographer Riley Brandt

As an added bonus just for PIXLS.US readers, he has gifted us a nice surprise!

Did Someone Say Free Stuff?

Riley went above and beyond for us. He has graciously offered us an opportunity for 2 readers to win a free copy of the course (one in an open format like WebM/VP8, and another in a popular format like MP4/H.264)!

For a chance to win, I’m asking you to share a link to this post on Twitter, Facebook, or Google+ with the hashtag #PIXLSGiveAway (you can click those links to share to those networks). Each social network counts as one entry, so you can triple your chances by posting across all three.

Next week (Wednesday, 2015-07-22, pushed back from Monday, 2015-07-20 to give folks a full week), I will search those networks for all the posts and compile a list of people, from which I’ll pick the winners (using random.org). Make sure you get that hashtag right! :)

Some Previews

Riley has released three nice preview videos to give a taste of what’s in the course.

A Q&A with Photographer Riley Brandt

On creating a F/OSS photography course

Riley Brandt is a full-time photographer (and sometimes videographer) at the University of Calgary. He previously worked for the weekly (Calgary) local magazine Fast Forward Weekly (FFWD) as well as Sophia Models International, and his work has been published in many places from the Wall Street Journal to Der Spiegel (and more).

Riley Brandt Logo

He recently announced the availability of The Open Source Photography Course. It’s a full photographic workflow course using only free, open source software that he has spent the last ten months putting together.

Riley has graciously offered two free copies for us to give away!
For a chance to win, see this blog post.

Riley Brandt Photography Course Banner

I was lucky enough to get a few minutes of Riley’s time to ask him a few questions about his photography and this course.

A Chat with Riley Brandt

Tell us a bit about yourself!

Hello, my name is Riley Brandt and I am a professional photographer at the University of Calgary.

At work, I get to spend my days running around a university campus taking pictures of everything from a rooster with prosthetic legs made in a 3D printer, to wild students dressed in costumes jumping into freezing cold water for charity. It can be pretty awesome.

Outside of work, I am a supporter of Linux and open source software. I am also a bit of a film geek.

[ed. note: He’s not kidding - that’s a rooster with prosthetic legs…]

I see you were trained in photojournalism. Is this still your primary photographic focus?

Though I definitely enjoy portraits, fashion and lifestyle photography, my day to day work as a photographer at a university is very similar to my photojournalism days.

I have to work with whatever poor lighting conditions I am given, and I have to turn around those photos quickly to meet deadlines.

However, I recently became an uncle for the first time to a baby boy, so I imagine I will be expanding into newborn and toddler photography very soon :)

Environmental Portrait by Riley Brandt

How long have you been a photographer?

Photography started as a hobby for me when I was living in the Czech Republic in the late 90s and early 2000s. My first SLR camera was the classic Canon AE-1 (which I still have).

I didn’t start to work as a full time professional photographer until I graduated from the Journalism program at SAIT Polytechnic in 2008.

What type of photography do you enjoy doing the most?

In a nutshell, I enjoy photographing people. This includes both portraits and candid moments at events.

I love meeting someone with an interesting story, and then trying to capture some of their personality in an image.

At events, I’ve witnessed everything from the joy of someone meeting an astronaut they idolize, to the anguish of a parent at graduation collecting a degree instead of their child who was killed. Capturing genuine emotion at events is challenging, and overwhelming at times, but is also very gratifying.

It would be hard for me to choose between candids or portraits. I enjoy them both.

Portraits by Riley Brandt

How would you describe your personal style?

I’ve been told several times that my images are very “clean”, which I think means I limit the image to only a few key elements and remove any major distractions.

If you had to choose your favorite image from your portfolio, what would it be?

I don’t have a favorite image in my collection.

However, at the end of a work week, I usually have at least one image that I am really happy with. A photo that I will look at again when I get home from work. An image that I look forward to seeing published. Those are my favorites.

Has free-software always been the foundation of your workflow?

Definitely not. I started with Adobe software, and still use it (and other non-free software) at work. Though hopefully that will change.

I switched to free software for all my personal work at home, because all my computers at home run Linux.

I also dislike a lot of Adobe’s actions as a company, i.e. horrible security, and switching to a “cloud” version of their software which is really just a DRM scheme.

There are many significant reasons not to run non-free software, but what really motivated my switch initially was simply that Adobe never released a Linux version of their software.

What is your normal OS/platform?

I guess I am transitioning from Ubuntu to Fedora (both GNU/Linux). My main desktop is still running Ubuntu Gnome 14.04. But my laptop is running Fedora 21.

Ubuntu doesn’t offer an up-to-date version of the Gnome desktop environment. It also doesn’t use the Gnome Software Centre or many Gnome apps. Fedora does. So my desktop will be running Fedora in the near future as well.

Lifestyle by Riley Brandt

What drove you to consider creating a free-software centric course?

Because it was so difficult for me to transition from Adobe software to free software, I wanted to provide an easier option for others trying to do the same thing.

Instead of spending weeks or months searching through all the different manuals, tutorials and websites, someone can spend a weekend watching my course and be up and running quickly.

Also, it was just a great project to work on. I got to combine two of my passions, Linux and photography.

Is the course the same as your own approach?

Yes, it’s the same way I work.

I start with fundamentals like monitor calibration and file management. Then I move on to basics like correcting exposure, color, contrast and noise. After that, I cover less frequently used tools.

The course focuses heavily on darktable for RAW processing - have you also tried any of the other options such as RawTherapee?

I originally tried digiKam because it looked like it had most of the features I needed. However, KDE and I are like oil and water. The user interface felt impenetrable to me, so I moved on.

I also tried RawTherapee, but only briefly. I got some bad results in the beginning, but that was probably due to my lack of familiarity with the software. I might give it another go one day.

Once darktable added advanced selective editing with masks, I was all in. I like the photo management element as well.

Riley Brandt Portraits

Have you considered expanding your (course) offerings to include other aspects of photography?

Umm.. not just yet. I first need to rest :)

If you were to expand the current course, what would you like to focus on next?

It’s hard to say right now. Possibly a more in depth look at GIMP. Or a series where viewers watch me edit photos from start to finish.

It took 10 months to create this course; will you be taking a break, or starting right away on the next installment? :)

A break for sure :) I spent most of my weekends preparing and recording a lesson for the past year. So yes, first a break.

Some parting words?

I would like to recommend the Desktop Publishing course created by GIMP Magazine editor Steve Czajka for anyone who is trying to transition from Adobe InDesign to Scribus.

I would also love to see someone create a similar course for Inkscape.

The Course

Riley Brandt Photography Course Banner

The Open Source Photography Course is available for order now at Riley’s website. The course is:

  • Over 5 hours of video material
  • DRM free
  • 10% of net profits donated back to FOSS projects
  • Available in open format (WebM/VP8) or popular (H.264), all 1080p
  • $50USD

He has also released some preview videos of the course.

From his site, here is a nice course outline to get a feel for what is covered:

Course Outline

Chapter 1. Getting Started

  1. Course Introduction
    Welcome to The Open Source Photography Course
  2. Calibrate Your Monitor
    Start your photography workflow the right way by calibrating your monitor with dispcalGUI
  3. File Management
    Make archiving and searching for photos easier by using naming conventions and folder organization
  4. Download and Rename
    Use Rapid Photo Downloader to rename all your photos during the download process

Chapter 2. Raw Editing in darktable

  1. Introduction to darktable, Part One
    Get to know darktable’s user interface
  2. Introduction to darktable, Part Two
    Take a quick look at the slideshow view in darktable
  3. Import and Tag
    Import photos into darktable and tag them with keywords, copyright information and descriptions
  4. Rating Images
    Learn an efficient way to cull, rate, add color labels and filter photos in lighttable
  5. Darkroom Overview
    Learn the basics of the darkroom view including basic module adjustments and creating favorites
  6. Correcting Exposure, Part 1
    Correct exposure with the base curves, levels, exposure, and curves modules
  7. Correcting Exposure, Part 2
    See several examples of combining modules to correct an image’s exposure
  8. Correct White Balance
    Use presets and make manual changes in the white balance module to color correct your images
  9. Crop and Rotate
    Navigate through the many crop and rotate options including guides and automatic cropping
  10. Highlights and Shadows
    Recover details lost in the shadows and highlights of your photos
  11. Adding Contrast
    Make your images stand out by adding contrast with the levels, tone curve and contrast modules
  12. Sharpening
    Fix those soft images with the sharpen, equalizer and local contrast modules
  13. Clarity
    Sharpen up your midtones by utilizing the local contrast and equalizer modules
  14. Lens Correction
    Learn how to fix lens distortion, vignetting and chromatic aberrations
  15. Noise Reduction
    Learn the fastest, easiest and best way to clean up grainy images taken in low light
  16. Masks, Part one
    Discover the possibilities of selective editing with the shape, gradient and path tools
  17. Masks, Part Two
    Take your knowledge of masks further in this lesson about parametric masks
  18. Color Zones
    Learn how to limit your adjustments to a specific color’s hue, saturation or brightness
  19. Spot Removal
    Save time by making simple corrections in darktable, instead of opening up GIMP
  20. Snapshots
    Quickly compare different points in your editing history with snapshots
  21. Presets and Styles
    Save your favorite adjustments for later with presets and styles
  22. Batch Editing
    Save time by editing one image, then quickly applying those same edits to hundreds of images
  23. Searching for Images
    Learn how to sort and search through a large collection of images in Lighttable
  24. Adding Effects
    Get creative in the effects group with vignetting, framing, split toning and more
  25. Exporting Photos
    Learn how to rename, resize and convert your RAW photos to JPEG, TIFF and other formats

Chapter 3. Touch Ups in GIMP

  1. Introduction to GIMP
    Install GIMP, then get to know your way around the user interface
  2. Setting Up GIMP, Part 1
    Customize the user interface, adjust a few tools and install color profiles
  3. Setting Up GIMP, Part 2
    Set keyboard shortcuts that mimic Photoshop’s and install a couple of plugins
  4. Touch Ups
    Use the heal tool and the clone tool to clean up your photos
  5. Layer Masks
    Learn how to make selective edits and non-destructive edits using layer masks
  6. Removing Distractions
    Combine layers, a helpful plugin and layer masks to remove distractions from your photos
  7. Preparing Images for the Web
    Reduce file size while retaining quality before you upload your photos to the web
  8. Getting Help and Finding the Community
    Find out which websites, mailing lists and forums to go to for help and friendly discussions

All the images in this post © Riley Brandt.

July 14, 2015

Hummingbird Quidditch!

[rufous hummingbird] After months of at most one hummingbird at the feeders every 15 minutes or so, yesterday afternoon the hummingbirds here all suddenly went crazy. Since then, my patio has looked like a tiny Battle of Britain. There are at least four males involved in the fighting, plus a couple of females who sneak in to steal a sip whenever the principals retreat for a moment.

I posted that to the local birding list and someone came up with a better comparison: "it looks like a Quidditch game on the back porch". Perfect! And someone else compared the hummer guarding the feeder to "an avid fan at Wimbledon", referring to the way his head keeps flicking back and forth between the two feeders under his control.

Last year I never saw anything like this. There was a week or so at the very end of summer where I'd occasionally see three hummingbirds contending at the very end of the day for their bedtime snack, but no more than that. I think putting out more feeders has a lot to do with it.

All the dogfighting (or quidditch) is amazing to watch, and to listen to. But I have to wonder how these little guys manage to survive when they spend all their time helicoptering after each other and no time actually eating. Not to mention the way the males chase females away from the food when the females need to be taking care of chicks.

[calliope hummingbird]

I know there's a rufous hummingbird (shown above) and a broad-tailed hummingbird -- the broad-tailed makes a whistling sound with his wings as he dives in for the attack. I know there's a black-chinned hummer around because I saw his characteristic tail-waggle as he used the feeder outside the nook a few days before the real combat started. But I didn't realize until I checked my photos this morning that one of the combatants is a calliope hummingbird. They're usually the latest to arrive, and the rarest. I hadn't realized we had any calliopes yet this year, so I was very happy to see the male's throat streamers when I looked at the photo. So all four of the species we'd normally expect to see here in northern New Mexico are represented.

I've always envied places that have a row of feeders and dozens of hummingbirds all vying for position. But I would put out two feeders and never see them both occupied at once -- one male always keeps an eye on both feeders and drives away all competitors, including females -- so putting out a third feeder seemed pointless. But late last year I decided to try something new: put out more feeders, but make sure some of them are around the corner hidden from the main feeders. Then one tyrant can't watch them all, and other hummers can establish a beachhead.

It seems to be working: at least, we have a lot more activity so far than last year, even though I never seem to see any hummers at the fourth feeder, hidden up near the bedroom. Maybe I need to move that one; and I just bought a fifth, so I'll try putting that somewhere on the other side of the house and see how it affects the feeders on the patio.

I still don't have dozens of hummingbirds like some places have (the Sopaipilla Factory restaurant in Pojoaque is the best place I've seen around here to watch hummingbirds). But I'm making progress.

Building a better catalog file

Inside a Windows driver package you’ll probably see a few .dlls, a .inf file and a .cat file. If you’ve ever been curious and double-clicked a .cat file on Windows, you’ll have seen some technical information about the driver, plus some information about who signed the file.

We want to use this file to avoid having to get vendors to manually sign the firmware file with a GPG detached signature, which also implies trusting the Microsoft WHQL certificate. These are my notes on my adventure so far.

There are not many resources on this stuff, and I’d like to thank dwmw2 and dhowels for all their help so far answering all my stupid questions. osslsigncode is also useful to see how other signing is implemented.

So, the basics. A .cat file is an S/MIME PKCS#7 DER file. We can dump its structure using:

openssl asn1parse -in ecfirmware.cat  -inform DER
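
Since the payload is PKCS#7 underneath, you should also be able to pull the signing certificates straight out of the file as a sanity check (this should work with any reasonably recent OpenSSL, though treat it as a sketch):

openssl pkcs7 -inform DER -in ecfirmware.cat -print_certs -out certs.pem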

and if we were signing just one file we should be able to verify the .cat file with something like this:

wget http://www.microsoft.com/pki/certs/MicRooCerAut_2010-06-23.crt
openssl x509 -in MicRooCerAut_2010-06-23.crt -inform DER -out ms/msroot.pem -outform PEM
cat ms/*.pem > ms/certs.pem
openssl smime -verify -CAfile ms/certs.pem -in ecfirmware.cat -inform DER -attime $(date +%s --date=2015-01-01) -content ECFirmware.38.7.50.0.cap
Verification failed

(Ignore the need to have the root certificate for now, that seems to be a bug in OpenSSL and they probably have bigger fires to fight at this point)

…but it’s not that simple. We have a pkcs7-signed blob, and we need to work out how to get the signature to actually *match*, and then we have to work out how to interpret the pkcs7-signed data blob and use the sha256sums therein to validate the actual data. OpenSSL doesn’t know how to interpret the MS content type OID (1.3.6.1.4.1.311.10.1), so it wimps out and doesn’t put any data into the digest at all.

We can get the blob using a simple:

dd if=ecfirmware.cat of=ecfirmware.cat.payload bs=1 skip=66 count=1340

…which now verifies:

openssl smime -verify -CAfile ms/certs.pem -in ecfirmware.cat -inform DER -attime $(date +%s --date=2015-01-01) -content ecfirmware.cat.payload
Verification successful

The blob appears to be a few tables of UTF-16 filename and SHA1/SHA256 checksummed data, encoded in ASN.1 notation. I’ve spent quite a few evenings decompiling the DER file into an ASN file without a whole lot of success (there are 14(!) layers of structures to contend with) and I’ve still not got an ASN file that can correctly parse my DER file for my simple unsigned v1 (ohh yes, v1 = SHA1, v2 = SHA256) test files. There is also a lot of junk data in the blob, and some questionable design choices which mean it’s a huge pain to read. Even if I manage to write the code to read the .cat data blob I’ve then got to populate the data (including the junk data…) so that Windows will accept my file to avoid needing a Microsoft box to generate all firmware images. Also add to the mix that the ASN.1 data is different on different Windows versions (with legacy versions overridden), which explains why you see things like 1.3.6.1.4.1.311.12.2.2 rather than translated titles in the catalog viewer in Windows XP when trying to view .cat files created on Windows 7.

I’ve come to the conclusion that writing a library to reliably read and write all versions of .cat files is probably about 3 months of work, and that’s 3 months I simply don’t have. Given there isn’t actually a specification (apart from a super-small guide on how to use the MS crypto API), it would also be an uphill battle with every new Windows release.

We could of course do something Linux-specific that does the same thing, although that obviously would not work on Windows and would mean asking the vendors to do an extra step in release engineering. Using GPG would be easiest, but a lot of the hardware vendors seem wedded to the PKCS certificate mechanism, and I suppose it does mean you can layer certificates for root trust, vendors and contractors. GPG-signing only the firmware file doesn’t actually give us a file-list with the digest values of the other metadata in the .cab file.

A naive solution would be to do something like this:

sha256sum firmware.inf firmware.metainfo.xml firmware.bin > firmware.digest
openssl dgst -sha256 -sign cert-private.pem -out firmware.sign firmware.digest
openssl dgst -sha256 -verify cert-pubkey.pem -signature firmware.sign firmware.digest

But to actually extract the firmware.digest file we need the private key. We can check prepared data using the public key, but that means shipping firmware.digest and firmware.sign when we only really want one file (.cab files checksum the files internally, so we can be sure against data corruption).
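
For anyone wanting to play along at home, the whole naive scheme end-to-end with a throwaway key would look something like this (the filenames are hypothetical and RSA is an arbitrary choice):

openssl genrsa -out cert-private.pem 2048
openssl rsa -in cert-private.pem -pubout -out cert-pubkey.pem
sha256sum firmware.inf firmware.metainfo.xml firmware.bin > firmware.digest
openssl dgst -sha256 -sign cert-private.pem -out firmware.sign firmware.digest
openssl dgst -sha256 -verify cert-pubkey.pem -signature firmware.sign firmware.digest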

Before I go crazy and invent yet another file format specification, does anybody know of a signed digest format with an open specification? Better ideas certainly welcome, thanks.

Richard.

July 13, 2015

darktable on Windows

Why don't you provide a Windows build?

Due to the heated debate lately, a short foreword:

We do not want to harass, insult or criticize anyone due to his or her choice of operating system. Still, from time to time we encounter comments from people accusing us of ignorance or even disrespect towards Windows users. If any of our statements can be interpreted as such, we want to apologize for that – and once more give the full explanation of our lack of Windows support.

The darktable project

darktable is developed and maintained by a small group of people in their spare time, just for fun. We do not have any funds, do not provide travel reimbursements for conferences or meetings, and don’t even have a legal entity at the moment. In other words: None of the developers has ever seen (and most likely will ever see) a single $(INSERT YOUR CURRENCY) for the development of darktable, which is thus a project purely driven by enthusiasm and curiosity.

The development environment

The team is quite mixed; some have a professional background in computing, others don’t. But all love photography and like exploring the full information recorded by the camera themselves. Most new features are added to darktable when an expert in, let’s say, GPU computing steps up and is willing to provide and maintain code for the new feature.

Up till now there is one technical thing that unites all the developers: none of them uses Windows as their operating system. Some are using Mac OS X, Solaris, etc., but most run some Linux distribution. New flavors of operating systems kept being added to our list as people willing to support their favorite system joined the team.

Also (since this stands out a bit as “commercial operating system”) Mac OS X support arrived in exactly this way. Someone (parafin!) popped up, said: “I like this software, and I want to run darktable on my Mac.”, compiled it on OS X and since then does testing and package building for the Mac OS X operating system. And this is not an easy job. Initially there were just snapshot builds from git, no official releases, not even release candidates – but already the first complaints about the quality arrived. Finally, there was a lot of time invested in working around specific peculiarities of this operating system to make it work and provide builds for every new version of darktable released.

This nicely shows one of the consequences of the project’s organizational (non-) structure and development approach: at first, every developer cares about darktable running on his personal system.

Code contributions and feature requests

Usually feature requests from users or from the community are treated like a brainstorming session. Someone proposes a new feature, people think and discuss about it – and if someone likes the idea and has time to code it, it might eventually come – if the team agrees on including the feature.

But life is not a picnic. You probably wouldn’t pass by your neighbor and demand that he repair your broken car – just because you know he loves to tinker with his vintage car collection at home.
The same applies here. No one feels comfortable if requests are suddenly made that would require a non-negligible amount of work – but with no return for the person carrying out the work, neither money-wise nor intellectually.

This is the feeling created every time someone just passes by leaving as only statement: “Why isn’t there a Windows build (yet)?”.

Providing a Windows build for darktable

The answer has always been the same: because no one has stepped up to do it. None of the passers-by requesting a Windows build actually took the initiative, downloaded the source code and started compiling. No one approached the development team with actual build errors and problems encountered during a compilation using MinGW or the like on Windows. The only things ever aired were requests for ready-made binaries.

As stated earlier here, the development of darktable is totally about one’s own initiative. This project (as many others) is not about ordering things and getting them delivered. It’s about starting things, participating and contributing. It’s about trying things out yourself. It’s FLOSS.

One argument that pops up from time to time is: “darktable’s user base would grow immensely with a Windows build!”. This might be true. But – what’s the benefit from this? Why should a developer care how many people are using the software if his or her sole motivation was producing a nice software that he/she could process raw files with?

On the contrary: more users usually means more support, more bug tracker tickets, more work. And this work usually isn’t the pleasing sort; hunting rare bugs that occur with some camera’s files on some other operating system is not exactly what people love to spend their Saturday afternoons on.

This argumentation would totally make sense if darktable were sold, the developers were paid, and the overall profit depended on the number of people using the software. No one can be blamed for sending such requests to a company selling their software or service (for your money or your data, whatever) – and it is up to them to make an economic decision on whether it makes sense to invest the time and manpower or not.

But this is different.

Not building darktable on Windows is not a technical issue after all. There certainly are problems of portability, and code changes would be necessary, but in the end it would probably work out. The real problem is (as has been pointed out by the darktable development team many times in the past) the maintenance of the build as well as all the dependencies that the package requires.

The darktable team is trying to deliver high-quality, reliable software. Photographers rely on being able to re-process their old developments with recent versions of darktable and obtain exactly the same result – and that on many platforms, be it CPUs or GPUs with OpenCL. Satisfying this objective requires quite some testing, thinking and maintenance work.

Spawning another build on a platform that not a single developer uses would mean lots and lots of testing – in unfamiliar terrain, and with no fun attached at all. Releasing a half-way working, barely tested build for Windows would harm the project’s reputation and diminish confidence in the software treating your photographs carefully.

We hope that this reasoning is comprehensible and that no one feels disrespected due to the choice of operating system.

References

That other OS


Translators needed for 1.2.0

MyPaint badly needs your language skills to make the 1.2.0 release a reality. Please help us out by translating the program into your language. We literally cannot make v1.2.0 a good release of MyPaint without your help, so we’ve made it as easy as we can for you to get involved by translating program texts.

Translation status: Graphical status badge for all mypaint project translations
Begin translating now: https://hosted.weblate.org/engage/mypaint/

Rosetta Stone

The texts in the MyPaint application are in heavy need of updating for the 23 languages currently supported. If you’re fluent in a language other than English, and have a good working knowledge of MyPaint and of English, then you can help our translation effort.

We’re using a really cool online translation service called WebLate, another Open Source project whose developers have very graciously offered us free hosting. It integrates with our Github development workflow very nicely indeed, so well in fact that I’m hoping to use it for continuous translation after 1.2.0 has been released.

To get involved, click on the begin translating now link above, and sign in with Github, Google, or Facebook. You can create an account limited to just the Weblate developers’ hosted service too. There are two parts to MyPaint: the main application, and its brush-painting library. Both components need translating.

Maintaining language files can be a lot of work, so you should get credit for the work you do. The usual workflow isn’t anonymous: your email address and sign-in name will be recorded in the commit log on Github, and you can put your names in the about box by translating the marker string “translator-credits” when it comes up! If you’d prefer to work anonymously, you don’t have to sign in: you can just make suggestions via WebLate for other translators to review and integrate.

Even if your language is complete, you can help by sharing the link above among your colleagues and friends on social media.

Thank you to all of our current translators, and in advance to new translators, for all the wonderful work you’re doing. I put a lot of my time into MyPaint trying to make sure that it’s beautiful, responsive, and stable. I deeply appreciate all the work that others do on the project too and, from a monoglot like myself, some of the most inspiring work I see happening on the project by others is all the effort put into making MyPaint comprehensible and international. Many, many thank yous.

Frozen for 1.2.0

Quick note to say that MyPaint is now frozen for the upcoming 1.2.0 release. Expect announcements here about dates for specific betas, plus previews and screenshots of new features; however the most current project status can be seen on our Github milestones page.

July 10, 2015

Fri 2015/Jul/10

  • Package repositories FAIL

Today I was asking around about something like this: mobile websites work without Flash, so how come non-mobile Twitter and YouTube want Flash on my desktop (where it is now disabled because of all the 0-day exploits)?

    Elad kindly told me that if I install the GStreamer codecs, and disable Flash, it should work. I didn't have those codecs on my not-for-watching-TV machine, so I set to it.

openSUSE cannot distribute the codecs themselves, so the community does it with an external, convenient one-click-install web button. When you click it, the packaging machinery churns and you get asked if you want to trust the Packman repository — where all the good non-default packages are.

Packman's help page

    It's plain HTTP. No HSTS or anything. It tells you the fingerprint of the repository's signing key... over plain HTTP. On the FAQ page, there is a link to download that public key over plain FTP.

Packman's key over plain FTP

    Now, that key is the "PackMan Build Service" key, a key from 2007 with only 1024-bit DSA. The key is not signed by anybody.

PackMan Build Service key

    However, the key that the one-click install wants to use is another one, the main PackMan Project key.

PackMan Project key

    It has three signatures, but when I went down the rabbit hole of fetching each of those keys to see if I knew those people — I have heard of two of them, but my little web of trust doesn't have them.

    So, YOLO, right? "Accept". "Trust". Because "Cancel" is the only other option.

    The installation churns some more, and it gives me this:

libdvdcss repository is unsigned

    YOLO all the way.

I'm just saying that if you wanted to pwn people who install codecs, there are many awesome places here to do it.

    But anyway. After uninstalling flash-player, flash-player-gnome, freshplayerplugin, pullin-flash-player, the HTML5 video player works in Firefox and my fat desktop now feels as modern as my phone.

Update: Hubert Figuière has an add-on for Firefox that will replace embedded Flash video players on other websites with HTML5: the No-flash add-on.

July 09, 2015

Krita 2.9.6 released!

After a month of bugfixing, we give you Krita 2.9.6! Bugfixes aren’t the only thing in 2.9.6, though – we also have a few new features!

The biggest change is that we now have selection modifiers! They are configured as follows:

  • Shift+click: add to selection.
  • Alt+click: subtract from selection.
  • Shift+Alt+click: intersect selection.
  • Ctrl+click: replace selection (for when you have set the selection mode to something other than replace).

These don’t work with the path tool yet, and aren’t configurable, but we’re going to work on that. Check out the manual page for the selection tools for more information on how this relates to the constrain and from-center options for the rectangle and ellipse selects.

Also new: Continuous transform and crop!

Now, when you apply a transform or crop and directly afterwards click on the canvas, Krita will recall the previous transform or crop and allow you to adjust that instead! If you press ‘Esc’ while in this ‘continuous mode’, Krita will forget the continuous transform and allow you to start a new one.

The last of the big new features is that the tool options can now be put into the toolbar:

tool options in the toolbar

By default it’s still a docker, but you can configure it in settings->configure Krita->general. You can also easily summon this menu with the ‘\’ key!

And Thorsten Zachmann has improved the speed of all the color adjustment filters, often by a factor of four or more.

Full list of features new to 2.9.6:

  • Add possibility to continue a Crop Tool action
  • Speed up of color balance, desaturate, dodge, hsv adjustment, index color per-channel and posterize filters.
  • Activate Cut/Copy Sharp actions in the menu
  • Implemented continuation of the transform with clicking on canvas
  • new default workspace
  • Add new shortcuts (‘\’ opens the tool options, f5 opens the brush editor, f7 opens the preset selector.)
  • Show the tool options in a popup (toggle this on or off in the general preferences, needs restarting Krita)
  • Add three new default shortcuts (Create group layer = Ctrl+G, Merge Selected layer = Ctrl+Alt+E, Scale image to new size = Alt+Ctrl+I )
  • Add a ‘hide pop-up on mouseclick’ option to the advanced color selector.
  • Make brush ‘speed’ sensor work properly
  • Allow preview for “Image Background Color and Transparency” dialog.
  • Selection modifier patch is finally in! (shift=add, alt=subtract, shift+alt=intersect, ctrl=replace. Path tool doesn’t work yet, and they can’t be configured yet)

Bugfixes new to 2.9.6

  • BUG:346932 Fix crash when saving a pattern to a *.kra
  • Make Group Layer return correct extent and exact bounds when in pass-through mode
  • Make fixes to pass-through mode.
  • Added an optional optimization to slider spin box
  • BUG:348599 Fix node activating on the wrong image
  • BUG:349792 Fix deleting a color in the palette docker
  • BUG:349823 Fix scale to image size while adding a file layer
  • Fixed wrapping issue for all dial widgets in Layer Styles dialog
  • Fix calculation of y-res when loading .kra files
  • BUG:349598 Prevent a divide by zero
  • BUG:347800 Reset cursor when canvas is extended to avoid cursor getting stuck in “pointing hand” mode
  • BUG:348730 Fix tool options visibility by default
  • BUG:349446 Fix issue where changing theme doesn’t update user config
  • BUG:348451 Fix internal brush name of LJF smoke.
  • BUG:349424 Set documents created from clipboard to modified
  • BUG:349451 Make more robust: check pointers before use
  • Use our own code to save the merged image for kra and ora (is faster)
  • BUG:313296 Fix Hairy brush not to paint black over transparent pixels in Soak Ink mode
  • Fix PVS warning in hairy brush
  • (gmic) Try to workaround the problem with busy cursor
  • BUG:348750 Don’t limit the allowed dock areas
  • BUG:348795 Fix uninitialized m_maxPresets
  • BUG:349346 (gmic) If there is selection, do not synchronize image size
  • BUG:348887 Disable autoscroll for the fill-tool as well.
  • BUG:348914 Rename the fill layers.

Downloads

 

Taming annoyances in the new Google Maps

For a year or so, I've been appending "output=classic" to any Google Maps URL. But Google disabled Classic mode last month. (There have been a few other ways to get classic Google maps back, but Google is gradually disabling them one by one.)

I have basically three problems with the new maps:

  1. If you search for something, the screen is taken up by a huge box showing you what you searched for; if you click the "x" to dismiss the huge box so you can see the map underneath, the box disappears but so does the pin showing your search target.
  2. A big swath at the bottom of the screen is taken up by a filmstrip of photos from the location, and it's an extra click to dismiss that.
  3. Moving or zooming the map is very, very slow: it relies on OpenGL support in the browser, which doesn't work well on Linux in general, or on a lot of graphics cards on any platform.

Now that I don't have the "classic" option any more, I've had to find ways around the problems -- either that, or switch to Bing maps. Here's how to make the maps usable in Firefox.

First, for the slowness: the cure is to disable WebGL in Firefox. Go to about:config and search for webgl. Then double-click the line for webgl.disabled to set it to true.
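If you'd rather set this from a file than through the UI, the same preference can go into user.js in your Firefox profile (a sketch; the profile directory name here is hypothetical and varies per machine):

echo 'user_pref("webgl.disabled", true);' >> ~/.mozilla/firefox/xxxxxxxx.default/user.js

Firefox reads user.js at startup and copies its values into the profile's prefs.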

For the other two, you can add rules to Firefox's userContent.css to hide those boxes.

Locate your Firefox profile. Inside it, edit chrome/userContent.css (create that file if it doesn't already exist), and add the following two lines:

div#cards { display: none !important; }
div#viewcard { display: none !important; }
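If you're not sure where to find the profile, here's a rough shell sketch (assuming a single default profile; the directory name varies per machine):

cd ~/.mozilla/firefox/*.default   # your profile directory
mkdir -p chrome
$EDITOR chrome/userContent.css    # then add the two rules above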

Voilà! The boxes that used to hide the map are now invisible. Of course, that also means you can't use anything inside them; but I never found them useful for anything anyway.

July 07, 2015

What's New, Some New Tutorials, and PIXLS!

What's been going on?! A bunch!

In case you've not noticed around here, I've been transitioning tutorials and photography related stuff over to PIXLS.US.

I built that site from scratch, so it's taken a bit of my time... I've also been slowly porting some of my older tutorials that I thought would still be useful over there. I've also been convincing all sorts of awesome folks from the community to help out by writing/recording tutorials for everyone, and we've already got quite a few nice ones over there:


A Blended Panorama with PhotoFlow


Basic Landscape Exposure Blending with GIMP and G'MIC


An Open Source Portrait (Mairi)


Skin Retouching with Wavelet Decompose


Luminosity Masking in darktable


Digital B&W Conversion (GIMP)



So just a gentle reminder that the tutorials have all mostly moved to PIXLS.US. Head over there for the newest versions and brand-new material, like the latest post from the creator of PhotoFlow, Andrea Ferrero on Panorama Exposure Blending with Hugin and PhotoFlow!

Also, don't forget to come by the forums and join the community at:

discuss.pixls.us

That's not to say I've abandoned this blog, just that I've been busy trying to kickstart a community over there! I'm also accepting submissions and/or ideas for new articles. Feel free to email me!

PhotoFlow Blended Panorama Tutorial


PhotoFlow Blended Panorama Tutorial

Andrea Ferrero has been busy!

After quite a bit of back and forth I am quite happy to be able to announce that the latest tutorial is up: A Blended Panorama with PhotoFlow! This contribution comes from Andrea Ferrero, the creator of a new project: PhotoFlow.

In it, he walks through a process of stitching a panorama together using Hugin and blending multiple exposure options through masking in PhotoFlow (see lede image). The results are quite nice and natural looking!

Local Contrast Enhancement: Gaussian vs. Bilateral

Andrea also runs through a quick video comparison of doing LCE using both a Gaussian and Bilateral blur, in case you ever wanted to see them compared side-by-side:

He started a topic post about it in the forums as well.

Thoughts on the Main Page

Over on discuss I started a thread to talk about some possible changes to the main page of the site.

Specifically I’m talking about the background lede image at the very top of the main page:

I had originally created that image as a placeholder in Blender. The site is intended as a photography-centric site, so the natural thought was why not use photos as a background instead?

The thought is to rotate through images as provided by the community. I’ve also mocked up two versions of using an image as a background.

Simple replacement of the image with photos from the community. This is the most popular in the poll on the forum at the moment. The image will be rotated amongst images provided by community members. I just need to make sure that the text shown is legible over whatever the image may be…

Full viewport splash version, where the image fills the viewport. This is not very popular from the feedback I received (thank you akk, ankh, muks, DrSlony, LebedevRI, and others on irc!). I personally like the idea but I can understand why others may not like it.

If anyone wants to chime in (or vote in the poll) then head over to the forum topic and let us know your thoughts!

Also, a big thank you to Morgan Hardwood for allowing us to use that image as a background example. If you want a nice way to support F/OSS development, it just so happens that Morgan is a developer for RawTherapee, and a print of that image is available for purchase. Contact him for details.

July 06, 2015

The votes are in!

Here’s the definitive list of stretch goal votes. A whopping 94.1% of eligible voters (622 of 661) actually voted: 94.9% of Kickstarter backers and 84.01% of PayPal backers. Thank you again, everyone who pledged, donated and voted, for your support!

Votes – Stretch goal – Phabricator task

  • N/A – Extra: Lazy Brush: interactive tool for coloring the image in a couple of strokes – T372
  1. 120 votes (19.29%) – 10. Animated file formats export: animated gif, animated png and spritemaps – T116
  2. 56 votes (9.00%) – 8. Rulers and guides: drag out guides from the rulers and generate, save and load common sets of guides. Save guides with the document. – T114
  3. 51 votes (8.20%) – 1. Multiple layer selection improvements – T105
  4. 48 votes (7.72%) – 19. Make it possible to edit brush tips in Krita – T125
  5. 42 votes (6.75%) – 21. Implement a Heads-Up-Display to manipulate the common brush settings: opacity, size, flow and others. – T127
  6. 38 votes (6.11%) – 2. Update the look & feel of the layer docker panel (1500 euro stretch goal) – T106
  7. 37 votes (5.95%) – 22. Fuzzy strokes: make the stroke consistent, but add randomness between strokes. – T166
  8. 33 votes (5.31%) – 5. Improve grids: add a grid docker, add new grid definitions, snap to grid – T109
  9. 31 votes (4.98%) – 6. Manage palettes and color swatches – T112
  10. 28 votes (4.50%) – 18. Stacked brushes: stack two or more brushes together and use them in one stroke – T124

These didn’t make it, but we’re keeping them for next time:

Votes – Stretch goal

  11. 23 votes (3.70%) – 4. Select presets using keyboard shortcuts
  12. 19 votes (3.05%) – 13. Scale from center pivot: right now, we transform from the corners, not the pivot point.
  13. 19 votes (3.05%) – 9. Composition helps: vector objects that you can place and that help with creating rules of thirds, spiral, golden mean and other compositions.
  14. 18 votes (2.89%) – 7. Implement a Heads-Up-Display for easy manipulation of the view
  15. 17 votes (2.73%) – 20. Select textures on the fly to use in textured brushes
  16. 9 votes (1.45%) – 15. HDR gradients
  17. 9 votes (1.45%) – 11. Add precision to the layer move tool
  18. 8 votes (1.29%) – 17. Gradient map filter
  19. 5 votes (0.80%) – 16. On-canvas gradient previews
  20. 5 votes (0.80%) – 12. Show a tooltip when hovering over a layer with content to show which one you’re going to move.
  21. 3 votes (0.48%) – 3. Improve feedback when using more than one color space in a single image
  22. 3 votes (0.48%) – 14. Add a gradient editor for stop gradients

July 04, 2015

Create a signed app with Cordova

I wrote last week about developing apps with PhoneGap/Cordova, but there's one thing I didn't cover: when you type cordova build, you're building only a debug version of your app. If you want to release it, you have to sign it. Figuring out how turned out to be a little tricky.

Most pages on the web say you can sign your apps by creating platforms/android/ant.properties with the same keystore information in it that you'd put in an ant build, then running cordova build android --release.

But Cordova completely ignored my ant.properties file and went on creating a debug .apk file and no signed one.

I found various other purported solutions on the web, like creating a build.json file in the app's top-level directory ... but that just made Cordova die with a syntax error inside one of its own files. This is the only method that worked for me:

Create a file called platforms/android/release-signing.properties, and put this in it:

storeFile=/path/to/your-keystore.keystore
storeType=jks
keyAlias=some-key
# if you don't want to enter the passwords at every build, include these:
keyPassword=your-key-password
storePassword=your-store-password

Then cordova build android --release finally works, and creates a file called platforms/android/build/outputs/apk/android-release.apk.
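If you don't already have a keystore, one way to create one is with the JDK's keytool; a minimal sketch, reusing the hypothetical names from the properties file above:

keytool -genkey -v -keystore your-keystore.keystore -alias some-key -keyalg RSA -keysize 2048 -validity 10000

keytool prompts for the store and key passwords (the same ones that go into release-signing.properties) plus a few identity fields.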

July 02, 2015

libmypaint is ready for translation

MyPaint is well on its way to feature and string freeze, but its brush library is stable enough to be translated now.

You can help!

Example status page from WebLate

The developers of WebLate, a really nice online translation tool, have offered us hosting for translations.

Translation status: Graphical status badge for all mypaint project translations
Join: https://hosted.weblate.org/engage/mypaint/

If you’re fluent in a language other than English, or know a FOSS-friendly person who is, you can help with the translation effort. Please share the link above as widely as you can, or dive in yourself and start translating brush setting texts. It’s a surprisingly simple workflow: you translate program texts one at a time resolving any discrepancies and correcting problems the system has discovered. Each text has a link back to the source code too, if you want to see where it was set up. At the end of translating into your language you get a nice fully green progress bar, a glowing sense of satisfaction, and your email address in the commit log ☺

If you want to help out and have good language skills, we’d really appreciate your assistance. Helping to translate a project is a great way of learning about how it works internally, and it’s one of the easiest and most effective ways of getting involved in the Free/Open Source culture and putting great software into people’s hands, worldwide.

July 01, 2015

Web Open Font Format (WOFF) for Web Documents

The Web Open Font Format (WOFF for short; here using the Aladin font) is several years old. Still, it took some time to get to a point where WOFF is almost painless to use on the Linux desktop. WOFF is based on OpenType-style fonts and is in some ways similar to the better-known TrueType font (.ttf). TTF fonts are widely known and used on the Windows platform. These feature-rich fonts are used for high-quality font display in the system and in local office and design documents. WOFF aims to close the gap by making those features available on the web. With these fonts it becomes possible to show nice-looking type on paper and in web presentations in almost the same way. In order to make WOFF a success, several open source projects joined forces, among them Pango and Qt, and contributed to HarfBuzz, an OpenType text shaping engine. Firefox and other web engines can handle WOFF inside SVG web graphics and HTML web documents using HarfBuzz. Inkscape, too, uses HarfBuzz for text inside SVG web graphics, at least since version 0.91.1. As Inkscape is able to produce PDFs, designing for both the web and the print world at the same time becomes easier on Linux.

Where to find and get WOFF fonts?
Open Font Library and Google host huge font collections, and there are more out on the web.

How to install WOFF?
To use them inside Inkscape one needs to install the fonts locally. Just copy the fonts to your personal ~/.fonts/ directory and run:

fc-cache -f -v

After that procedure the fonts are visible inside a newly started Inkscape.
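For example, installing a single downloaded font (the file name is just an illustration) boils down to:

mkdir -p ~/.fonts
cp ~/Downloads/Aladin-Regular.woff ~/.fonts/
fc-cache -f -v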

How to deploy SVG and WOFF on the Web?
Thankfully, using WOFF in SVG documents is similar to HTML documents. However, simply uploading an Inkscape SVG to the web as-is will not be enough to show WOFF fonts. While viewing the document locally is fine, Firefox and friends need to find those fonts independently of the locally installed ones. Right now you need to manually edit your Inkscape SVG to point to the online location of your fonts. To do that, open the SVG file in a text editor and place a CSS font-face reference right after the opening <svg> element, like:

<style type="text/css">
@font-face {
  font-family: "Aladin";
  src: url("fonts/Aladin-Regular.woff") format("woff");
}
</style>

How to print an Inkscape SVG document containing WOFF?
Just convert to PDF from Inkscape’s File menu. Inkscape takes care of embedding the needed fonts and creates a portable PDF.

In case your preferred software is not yet WOFF-ready, try the woff2otf Python script for converting to the old TTF format.

Hope this small post gets some of you on the font fun path.

Fedora Hubs Update!!!

fedora-hubs_logo

The dream is real – we are cranking away, actively building this very cool, open source, socially-oriented collaboration platform for Fedora.

Meghan Richardson, the Fedora Engineering Team’s UX intern for this summer, and I have been cranking out UI mockups over the past month or so (Meghan way more than me at this point. :) )

Screenshot from 2015-06-23 09-24-44

We also had another brainstorming session. We ran the Fedora Hubs Hackfest, a prequel to the Fedora Release Engineering FAD a couple of weeks ago.

After a lot of issues with the video, full video of the hackfest is now finally available (the reason for the delay in my posting this :) ).

Let’s talk about what went down during this hackfest and where we are today with Fedora Hubs:

What is Fedora Hubs, Exactly?

(Skip directly to this part of the video)

We talked about two elevator pitches for explaining it:

  • It’s an ‘intranet’ page for the Fedora Project. You work on all these different projects in Fedora, and it’s a single place you can get information on all of them as a contributor.
  • It’s a social network for Fedora contributors. One place to go to keep up with everything across the project in ways that aren’t currently possible. We have a lot of places where teams do things differently, and it’s a way to provide a consistent contributor experience across projects / teams.

Who are we building it for?

(Skip directly to this part of the video)

  • New Fedora Contributors – A big goal of this project is to enable more contributors and make bootstrapping yourself as a Fedora contributor less of a daunting task.
  • Existing Fedora Contributors – They already have a workflow, and already know what they’re doing. We need to accommodate them and not break their workflows.

The main philosophy here is to provide a compelling user experience for new users that can potentially enhance the experience for existing contributors but at the very least will never disrupt the current workflow of those existing contributors. Let’s look at this through the example of IRC, which Meghan has mocked up in the form of a web client built into Fedora Hubs aimed at new contributor use:

If you’re an experienced contributor, you’ve probably got an IRC client, and you’re probably used to using IRC and wouldn’t want to use a web client. IRC, though, is a barrier to new contributors. It’s more technical than the types of chat systems they’re accustomed to. It becomes another hurdle on top of the 20 or so other hurdles they have to clear in the process of joining as a contributor – completely unrelated to the actual work they want to do (whatever it is – design, marketing, docs, ambassadors, etc.)

New contributors should be able to interact with the hubs IRC client without having to install anything else or really learn a whole lot about IRC. Existing contributors can opt into using it if they want, or they can simply disable the functionality in the hubs web interface and continue using their IRC clients as they have been.

Hackfest Attendee Introductions

(Skip directly to this part of the video)

Next, Paul suggested we go around the room and introduce ourselves for anybody interested in the project (and watching the video.)

  • Máirín Duffy (mizmo) – Fedora Engineering UX designer working on the UX design for the hubs project
  • Meghan Richardson (mrichard) – Fedora Engineering UX intern from MSU also working on the UX design for the hubs project
  • Remy Decausemaker (decause) – Fedora Community lead, Fedora Council member
  • Luke Macken (lmacken) – Works on Fedora Infrastructure, release engineering, tools, QA
  • Adam Miller (maxamillion) – Works on Release engineering for Fedora, working on build tooling and automation for composes and other things
  • Ralph Bean (threebean) – Software engineer on Fedora Engineering team, will be spending a lot of time working on hubs in the next year
  • Stephen Gallagher (sgallagh) – Architect at Red Hat working on the Server platform, on Fedora’s Server working group, interested in helping onboard as many people as possible
  • Aurélien Bompard (abompard) – Software developer, lead developer of Hyperkitty
  • David Gay (oddshocks) – Works on Fedora infrastructure team and cloud teams, hoping to work on Fedora Hubs in the next year
  • Paul Frields (stickster) – Fedora Engineering team manager
  • Pierre-Yves Chibon (pingou) – Fedora Infrastructure team member working mostly on web development
  • Patrick Uiterwijk (puiterwijk) – Member of Fedora’s system administration team
  • Xavier Lamien (SmootherFrOgZ) – Fedora Infrastructure team member working on Fedora cloud SIG
  • Atanas Beloborodov (nask0) – A very new contributor to Fedora, he is a web developer based in Bulgaria.
  • (Matthew Miller and Langdon White joined us after the intros)

Game to Explore Fedora Hub’s Target Users

(Skip directly to this part of the video)

We played a game called ‘Pain Gain’ to explore both of the types of users we are targeting: new contributors and experienced Fedora contributors. We started talking about Experienced Contributors. I opened up a shared Inkscape window and made two columns: “pain” and “gain:”

  • For the pain column, we came up with things that are a pain for experienced contributors the way our systems / processes currently work.
  • For the gain column, we listed out ways that Fedora Hubs could provide benefits for experienced contributors.

Then we rinsed and repeated for new contributors:

paingain

While we discussed the pains/gains, we also came up with a lot of sidebar ideas that we documented in an “Idea Bucket” area in the file:

idea-bucket

I was worried that this wouldn’t work well in a video chat context, but I screen-shared my Inkscape window and wrote down suggestions as they were brought up and I think we came out with a useful list of ideas. I was actually surprised at the number of pains and gains on the experienced contributor side: I had assumed new contributors would have way more pains and gains and that the experienced contributors wouldn’t have that many.

Prototype Demo

(Skip directly to this part of the video)

Screenshot from 2015-06-23 12-57-27

Ralph gave us a demo of his Fedora Hubs prototype – first he walked us through how it’s built, then gave the demo.

diagram

In the README there is a full explanation of how the prototype works, so I won’t reiterate it all here. Some points that came up during this part of the meeting:

  • Would we support hubs running without Javascript? The current prototype completely relies on JS. Without JS, it would be hard to do widgets like the IRC widget. Some of the JS frameworks come with built-in fail modes. There are some accessibility issues with ways of doing things with JS, but a good design can ensure that won’t happen. For the most part, we are going to try to support what a default Fedora workstation install could support.
  • vi hotkeys for Hubs would be awesome. :) Fedora Tagger does this!
  • The way the widgets work now, each widget has to define a data function that gets called with a session object, and it has to return JSON-ifiable python code. That gets stored in memcached and is how the wsgi app and backend communicate. If you can write a data function to return JSON and write a template the data gets plugged into – that’s mainly what’s needed. Take a look at the stats widget – it’s pretty simple!
  • All widgets also need a ‘should_invalidate()’ function that lets the system know what kinds of information apply to which widgets. Every fedmsg has to go through every widget to see if it invalidates a given widget’s data – we were worried that this would result in a terrible performance issue, but by the end of the hackfest we had that figured out.
  • Right now the templates are Jinja2, but Ralph thinks we should move to client-side (JavaScript) templates. The reason is that when updated data gets pushed over websockets from the bus, it can involve garbage communication any time new changes in data come across – it’s simpler if the widget doesn’t have to request the templates because they are already there in the client.
  • Angular could be a nice client-side way of doing the templates, but Ralph had heard some rumors that AngularJS 2 was going to support only Chrome, and AngularJS 1.3 and 2 aren’t compatible. nask0 has a lot of experience with Angular though and does not think v2 is going to be Chrome-only.
  • TODO: Smoother transitions for when widgets pop into view as they load on an initial load.
  • Langdon wondered if there would be a way to consider individual widgets being able to function as stand-alones on desktops or mobile. The raw zeromq pipes could be hooked up to do this, but the current design uses EventSource which is web-specific and wouldn’t translate to say a desktop widget. Fedora Hubs will emit its own fedmsgs too, so you could build a desktop widget using that as well.
  • Cache invalidation issues were the main driver of the slowness in Fedora Packages, but now we have a cache that updates very quickly, so we get constant-time access when delivering those pages.

Mockup Review

Screenshot from 2015-06-23 13-48-56

Next, Meghan walked us through the latest (at the time :) we have more now!) mockups for Fedora Hubs, many based on suggestions and ideas from our May meetup (the 2nd hubs video chat.)

Creating / Editing Hubs

(Skip directly to this part of the video)

First, she walked us through her mockups for creating/editing hubs – how a hub admin would be able to modify / set up their hub. (Mockup (download from ‘Raw’ and view in Inkscape to see all screens.)) Things you can modify are the welcome message, colors, what widgets get displayed, the configuration for widgets (e.g. what IRC channel is associated with the hub?), and how to add widgets, among many other things.

Meghan also put together a blog post detailing these mockups.

One point that came up here – a difference is that when users edit their own hubs, they don’t associate an IRC channel with it, but rather a nick and a network, to enable their profile viewers to PM them.

We talked about hub admins vs FAS group admins. Should they be different or exactly the same? We could make a new role in FAS – “hub admin” – and store it there if it’s another one. Ralph recommended keeping it simple by having FAS group admins and hub admins one and the same. Some groups are more strict about group admins in FAS, some are not. Would there be scenarios where we’d want people to be able to admin the FAS group for a team but not be able to modify the hub layout (or vice-versa?) Maybe nesting the roles – if you’re a FAS admin you can be FAS admin + hub admin, if you’re a hub admin you can just admin the hub but not the FAS group.

Another thing we talked about is theming hubs. Luke mentioned that Reddit allows admins to have free reign in terms of modifying the CSS. Matthew mentioned having a set of backgrounds to choose from, like former Fedora wallpapers. David cautioned that we want to maintain some uniformity across the hubs to help enable new contributors – he gave the example of Facebook, where key navigational elements are not configurable. I suggested maybe they could only tweak certain CSS classes. Any customizations could be stored in the database.

Another point: members vs subscribers on a hub. Subscribers ‘subscribe’ to a hub, members ‘join’ a hub. Subscribing to a hub adds it to your bookmarks in the main horizontal nav bar, and enables certain notifications for that hub to appear in your feed. We talked about different vocabulary for ‘subscribe’ vs ‘join’ – instead of ‘subscribe’ we talked about ‘following’ or ‘starring’ (as in Github) vs joining. (Breaking News :) Since then Meghan has mocked up the different modes for these buttons and added the “star” concept! See below.)

hub-buttons

We had a bit of an extended discussion about a lot of the different ways someone could be affiliated with a team/project that has a hub. Is following/subscribing too non-committal? Should we have a rank system so you could move your way up ranks, or is it a redundant gameification given the badge system we have in place? (Maybe we can assign ranks based on badges earned?) Part of the issue here is for others to identify the authority of the other people they’re interacting with, but another part is for helping people feel more a part of the community and feel like valued members. Subscribing is more like following a news feed, being a member is more being part of the team.

Joining Hubs

(Skip directly to this part of the video)

The next set of mockups Meghan went through showed us the workflow of how a user requests membership in a given hub and how the admin receives the membership request and handles it.

We also tangented^Wtalked about the welcome message on hubs and how to dismiss or minimize them. I think we concluded that we would let people collapse them and remove them, and if they remove them we’ll give them a notification that if they want to view them at any time they can click on “Community Rules and Guidelines.”

Similarly, the notification to let the admin know that a user has requested access to something and they dismiss it and want to tend to it later – it will appear in the admin’s personal stream as well for later retrieval.

We talked about how to make action items in a user’s notification feed appear differently than informational notifications; some kind of different visual design for them. One idea that came up was having tabs at the top to filter between types of notifications (action, informational, etc.) I explained how we were thinking about having a contextual filter system in the top right of each ‘card’ or notification to let users show or hide content too. Meghan is working on mockups for this currently.

David had the idea of having action items assigned to people appear as actions within their personal stream… since then I have mocked this up:

actionitem_preview

Personal Profiles

(Skip directly to this part of the video)

Next, Meghan walked us through the mockups she worked on for personal profiles / personal streams. One widget she mocked up is a personal library widget. Other widgets included a display of badges earned, hubs you’re a member of, IRC private messages, and a personal profile.

Meghan also talked about privacy with respect to profiles and we had a bit of a discussion about that. Maybe, for example, by default your library could be private, maybe your stream only shows your five most recent notifications and if someone is approved (using a handshake) as a follower of yours they can see the whole stream. Part of this is sort of a bike lock thing…. everything in a user’s profile is broadcast on fedmsg, but having it easily accessible in one place in a nice interface makes it a lot easier (like not having a lock on your bike.) One thing Langdon brought up is that we don’t want to give people a false sense of privacy. So we have to be careful about the messaging we do around it. We thought about whether or not we wanted to offer this intermediate ‘preview’ state for people’s profiles for those viewing them without the handshake. An alternative would be to let the user know who is following them when they first start following them and to maintain a roster of followers so it is clear who is reading their information.

Here’s the blog post Meghan wrote up on the joining hubs and personal profile mockups with each of the mockups and more details.

Bookmarks / main nav

(Skip directly to this part of the video)

The main horizontal navbar in Fedora Hubs is basically a bookmarks bar of the hubs you’re most interested in. Meghan walked us through the bookmarks mockups – she also covered these mockups in detail on her bookmarks blog post.

ZOMG THIS IS SO AWESOME!

Yes. Yes, it is.

So you may be wondering when this is going to be available. Well, we’re working on it. We could always use more help….

help-1

Where’s stuff happening?

How does one help? Well, let me walk you through where things are taking place, so you can follow along more closely than my lazy blog posts if you so desire:

  • Chat with us: #fedora-hubs on irc.freenode.net is where most of the folks working on Fedora Hubs hang out, day in and day out. threebean’s hooked up a bot in there too that pushes notifications when folks check in code or mockup updates.
  • Mockups repo: Meghan and I have our mockups repo at https://github.com/fedoradesign/fedora-hubs, which we both have hooked up via Sparkleshare. (You are free to check it out without Sparkleshare and poke around as you like, of course.)
  • Code repo: The code is kept in a Pagure repo at https://pagure.io/fedora-hubs. You’ll want to check out the ‘develop’ branch and follow the README instructions to get all setup. (If I can do it, you can. :) )
  • Feature planning / Bug reporting: We are using Pagure’s issue tracker at https://pagure.io/fedora-hubs/issues to plan out features and track bugs. One way we are using this which I think is kind of interesting – it’s the first time I’ve used a ticketing system in exactly this way – is that for every widget in the mockups, we’ve opened up a ticket that serves as the design spec with mockups from our mockup repo embedded in the ticket.
  • Project tracking: This one is a bit experimental. But the Fedora infra and webdev guys set up http://taiga.fedoraproject.org – an open source kanban board – that Meghan and I started using to keep track of our todo list since we had been passing post-it notes back and forth and that gets a bit unwieldy. It’s just us designers using it so far, but you are more than welcome to join if you’d like. Log in with your Fedora staging password (you can reset it if it’s not working and it’ll only affect stg) and ping us in #fedora-hubs to have your account added to the kanban board.
  • Notification Inventory: This is an inventory that Meghan started of the notifications we’ve come up with for hubs in the mockups.
  • Nomenclature Diagram for Fedora Hubs: We’ve got a lot of neat little features and widgets and bits and bobs in Fedora Hubs, but it can be confusing talking about them without a consistent naming scheme. Meghan created this diagram to help sort out what things are called.

How can I help?

Well, I’m sure glad you asked. :) There’s a few ways you can easily dive in and help right now, from development to design to coming up with cool ideas for features / notifications:

  1. Come up with ideas for notifications you would find useful in Fedora Hubs! Add your ideas to our notification inventory and hit us up in #fedora-hubs to discuss!
  2. Look through our mockups and come up with ideas for new widgets and/or features in Fedora Hubs! The easiest way to do this is probably to peruse the mini specs we have in the pagure issue tracker for the project. But you’re free to look around our mockups repo as well! You can file your widget ideas in Pagure (start the issue name with “Idea:”) and we’ll review them and discuss!
  3. Help us develop the widgets we’ve planned! We’ve got little mini design specs for the widgets in the Fedora Hubs pagure issue tracker. If a widget ticket is unassigned (and most are!), it’s open and free for you to start hacking on! Ask Meghan and me any questions in IRC about the spec / design as needed. Take a look at the stats widget that Ralph reviewed in explaining the architecture during the hackfest, and watch Ralph’s demo and explanation of how Hubs is built to see how the widgets are put together.
  4. There are many other ways to help (ask around in #fedora-hubs to learn more,) but I think these have a pretty low barrier for starting up depending on your skillset and I think they are pretty clearly documented so you can be confident you’re working on tasks that need to get done and aren’t duplicating efforts!

    Hope to see you in #fedora-hubs! :)

June 30, 2015

Parsing Option ROM Firmware

A few weeks ago an issue was opened on fwupd by pippin. He was basically asking for a command to return all the hashes of the firmware installed on his hardware, which I initially didn’t really see the point of doing. However, after a few hours of research about all the malware that can hide in the VBIOS of graphics cards, the option ROM of network cards, and keyboard-matrix EC processors, I was suitably worried too. I figured fixing the issue was a good idea. Of course, malware could perhaps hide itself (i.e. hiding in an unused padding segment and masking itself out on read), but this at least raises the bar from a security-audit point of view, and is somewhat easier than opening the case and attaching a SPI programmer to the chip itself.

Fast forward a few nights. We can now verify ATI, NVIDIA, INTEL and ColorHug firmware. I’ve not got any other hardware with ROM that I can read from userspace, so this is where I need your help. I need willing volunteers to compile fwupd from git master (or rebuild my srpm) and then run:

cd fwupd/src
find /sys/devices -name rom -exec sudo ./fwupdmgr dump-rom {} \;

All being well you should see something like this:

/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/rom -> f21e1d2c969dedbefcf5acfdab4fa0c5ff111a57 [Version: 013.012.000.019.000000]

If you see something just like that, you’re not super helpful to me. If you see Error reading from file: Input/output error then you’re also not so helpful, as the kernel module for your hardware is exporting a rom file but not hooking up the read vfuncs. If you get an error like Failed to detect firmware header [8950] or Firmware version extractor not known then you’ve just become interesting. If that’s you, can you send the rom file to richard_at_hughsie.com as an attachment, along with any details you know about the hardware. Thanks!

Richard.

Interview with Livio Fania

r_dijeau-800

Could you tell us something about yourself?

I’m Livio Fania, an Italian illustrator living in France.

Do you paint professionally, as a hobby artist, or both?

I paint professionally.

What genre(s) do you work in?

I make illustrations for the press, posters and children’s books. My universe is made of geometric shapes, stylized characters and flashy colors.

Whose work inspires you most — who are your role models as an artist?

I like the work of João Fazenda, Riccardo Guasco and Nick Iluzada among many others.

What makes you choose digital over traditional painting?

I haven’t made a definitive choice. Even if I work mainly digitally, I still have a lot of fun using traditional tools such as colored pencils, brush pens and watercolors. Besides, in 90% of cases I draw by hand, scan, and only at the end of the process do I grab my graphics tablet stylus.

I do not think that working digitally means being faster. On the contrary, I can work more quickly by hand, especially in the first sketching phases. What digital art allows is CONTROL over the whole process. If you keep your layer stack well organized, you can always edit your art without losing the original version, which is very useful when your client asks for changes. If you work with traditional tools and you drop your ink in the wrong place, you can’t press Ctrl+Z.

r_Ev-800

How did you find out about Krita?

I discovered Krita through a video conference posted on David Revoy’s blog. Even if I don’t particularly like his universe, I think he is probably the most influential artist using FLOSS tools, and I’m very grateful to him for sharing his knowledge with the community. Previously, I used to work with MyPaint, mainly for its minimalist interface which was perfect for the small laptop I had. Then I discovered that Krita was more versatile and better developed, so I took some time to learn it and now I could not do without it.

What was your first impression?

At first I thought it was not the right tool for me. Actually, most digital artists use Krita for its painting features, like blending modes and textured brushes, which allow you to obtain realistic light effects. Personally, I think that realism can be very boring, and that is why I paint in a stylized way with uniform tints. Besides, I like to bound my range of possibilities to a limited set of elements: palettes of 5-8 colors and 2-3 brushes. So at the beginning I felt like Krita had too many options for me. But little by little I adapted the GUI to my workflow. Now I really think everybody can find their own way to use Krita, no matter their painting style.

What do you love about Krita?

Two elements I really love:
1) The favourite presets docker, which pops up with a right click. It contains everything you need to keep painting, and it is a pleasure to control everything with a glance.
2) The Composition tab, which allows you to completely change the color palette or experiment with new effects without losing the original version of a drawing.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I think that selections are not intuitive at all and could be improved. When dealing with complex selections, it is time-consuming to check the selection mode in the options tab (replace, intersect, subtract) and proceed accordingly. Especially considering that by default the selection mode is the same as when you last used the tool (but in the meantime you have probably forgotten it). I think it would be much better if every time a selection tool is picked up it were in “normal” mode by default, and one could then switch to a different mode by pressing Ctrl/Shift.

What sets Krita apart from the other tools that you use?

Krita is by far the most complete digital painting tool developed on Linux. It is widely customizable (interface, workspaces, shortcuts, tabs) and it offers a very powerful brush engine, even compared to proprietary applications. Also, a very important aspect is that the Krita Foundation has a solid organization and develops it continuously thanks to donations, Kickstarter campaigns, etcetera. This is particularly important in the open source community, where we sometimes have well-designed projects which disappear because they are not supported properly.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

r_DM170-800
The musicians in the field.

What techniques and brushes did you use in it?

As I said, I like to have limited presets. In this illustration I mostly used the “pastel_texture_thin” brush, which is part of the default set of brushes in Krita. I love its texture and the fact that it is pressure-sensitive. Also, I applied a global bitmap texture on an overlay layer.

Where can people see more of your work?

www.liviofania.com
https://www.facebook.com/livio.fania.art

Anything else you’d like to share?

Yes, I would like to add that I also release all my illustrations under a Creative Commons license, so you can Download my portfolio, copy it and use it for non-commercial purposes.

June 29, 2015

Approaching string freeze and beta release cycle

It’s time to start putting the next release together, folks!

I would like to announce a freeze of all translated strings on Sat 11th July 2015, and then begin working on the first beta release properly. New features raised as a Github Pull Request by the end of Sat 4th July stand a chance of getting in, but midnight on that day is the deadline for new code submissions if they touch text the user will see on screen.

The next release will be numbered 1.2.0; we are currently in the alpha phase of development for it, but that phase will end shortly.

Fixing the remaining alpha-cycle bugs is going well. We currently only have four bugs left in the cycle milestone, and that number will diminish further shortly. The main goal right now is to merge and test any pending small features that people want to get into 1.2.0, and to thoroughly spellcheck and review the English-language source strings so that things will be better for our translators.

Expect announcements about the translation effort, and dates for beta releases shortly.

Just Say It!

While I love typing on small on-screen keyboards on my phone, it is much easier to just talk. When we did the HUD we added speech recognition there, and it processed the audio on the device, giving the great experience of controlling your phone with your voice. That worked well with the limited command set exported by the application, but doing generic voice recognition today requires more processing power than a phone can reasonably provide. Which made me pretty excited to find out about HP's IDOL on Demand service.

I made a small application for Ubuntu Phone that records the audio you speak at it, and sends it up to the HP IDOL on Demand service. The HP service then does the speech recognition on it and returns the text back to us. Once I have the text (with help from Ken VanDine) I set it up to use Content Hub to export the text to any other application that can receive it. This way you can use speech recognition to write your Telegram notes, without Telegram having to know anything about speech at all.

The application is called Just Say It! and is in the Ubuntu App Store right now. It isn't beautiful, but definitely shows what can be done with this type of technology today. I hope to make it prettier and add additional features in the future. If you'd like to see how I did it you can look at the source.

As an aside: I can't get any of the non-English languages to work. This could be because I'm not a native speaker of those languages. If people could try them I'd love to know if they're useful.


Chollas in bloom, and other early summer treats

[Bee in cholla blossom] We have three or four cholla cacti on our property. Impressive, pretty cacti, but we were disappointed last year that they never bloomed. They looked like they were forming buds ... and then one day the buds were gone. We thought maybe some animal ate them before the flowers had a chance to open.

Not this year! All of our chollas have gone crazy, with the early rain followed by hot weather. Last week we thought they were spectacular, but they just kept getting better and better. In the heat of the day, it's a bee party: they're aswarm with at least three species of bees and wasps (I don't know enough about bees to identify them, but I can tell they're different from one another) plus some tiny gnat-like insects.

I wrote a few weeks ago about the piñons bursting with cones. What I didn't realize was that these little red-brown cones are all the male, pollen-bearing cones. The ones that bear the seeds, apparently, are the larger bright green cones, and we don't have many of those. But maybe they're just small now, and there will be more later. Keeping fingers crossed. The tall spikes of new growth are called "candles" and there are lots of those, so I guess the trees are happy.

[Desert willow in bloom] Other plants besides cacti are blooming. Last fall we planted a desert willow from a local native plant nursery. The desert willow isn't actually native to White Rock -- we're around the upper end of its elevation range -- but we missed the Mojave desert willow we'd planted back in San Jose, and wanted to try one of the Southwest varieties here. Apparently they're all the same species, Chilopsis linearis.

But we didn't expect the flowers to be so showy! A couple of blossoms just opened today for the first time, and they're as beautiful as any of the cultivated flowers in the garden. I think that means our willow is a 'Rio Salado' type.

Not all the growing plants are good. We've been keeping ourselves busy pulling up tumbleweed (Russian thistle) and stickseed while they're young, trying to prevent them from seeding. But more on that in a separate post.

As I write this, a bluebird is performing short aerobatic flights outside the window. Curiously, it's usually the female doing the showy flying; there's a male out there too, balancing himself on a piñon candle, but he doesn't seem to feel the need to show off. Is the female catching flies, showing off for the male, or just enjoying herself? I don't know, but I'm happy to have bluebirds around. Still no definite sign of whether anyone's nesting in our bluebird box. We have ash-throated flycatchers paired up nearby too, and I'm told they use bluebird boxes more than the bluebirds do. They're both beautiful birds, and welcome here.

Image gallery: Chollas in bloom (and other early summer flowers).

June 28, 2015

FreeCAD and BIM FAQ

A couple of FreeCAD architecture/BIM related questions that I get often: Is FreeCAD ready enough to do serious BIM work? This is a very complex question, and the answer could be yes or no, depending on what's important to you. It of course also depends on what BIM is for you, because clearly enough, there isn't a universal...

June 26, 2015

A Blended Panorama with PhotoFlow


A Blended Panorama with PhotoFlow

Creating panoramas with Hugin and PhotoFlow

The goal of this tutorial is to show how to create a sort-of-HDR panoramic image using only Free and Open Source tools. To explain my workflow I will use the image below as an example.

This panorama was obtained from the combination of six views, each consisting of three bracketed shots at -1EV, 0EV and +1EV exposure. The three exposures are stitched together with the Hugin suite, and then exposure-blended with enfuse. The PhotoFlow RAW editor is used to prepare the initial images and to finalize the processing of the assembled panorama. The final result of the post-processing is below:

Final result Final result of the panorama editing (click to compare to simple +1EV exposure)

In this case I have used the brightest image for the foreground, the darkest one for the sky and clouds, and an exposure-fused one for a seamless transition between the two.

The rest of the post will show how to get there…

Before we continue, let me advise you that I’m not a pro, and that the tips and “recommendations” that I’ll be giving in this post are mostly derived from trial-and-error and common sense. Feel free to correct/add/suggest anything… we are all here to learn!

Taking the shots

Shooting a panorama requires a bit of preparation and planning to make sure that one can get the best out of Hugin when stitching the shots together. Here is my personal “checklist”:

  • Manual Focus - set the camera to manual focus, so that the focus plane is the same for all shots
  • Overlap Shots - make sure that each frame has sufficient overlap with the previous one (something between 1/2 and 1/3 of the total area), so that hugin can find enough control points to align the images and determine the lens correction parameters
  • Follow A Straight Line - when taking the shots, try to follow as much as possible a straight line (keeping for example the horizon at the same height in your viewfinder); if you have a tripod, use it!
  • Frame Appropriately - to maximize the angle of view, frame vertically for a horizontal panorama (and vice-versa for a vertical one)
  • Leave Some Room - frame the shots a bit wider than needed, to avoid bad surprises when cropping the stitched panorama
  • Fixed Exposure - take all shots with a fixed exposure (manual or locked) to avoid luminance variations that might not be fully compensated by hugin
  • Bracket if Needed - if you shoot during a sunny day, the brightness might vary significantly across the whole panorama; in this case, take three or more bracketed exposures for each view (we will see later how to blend them in the post-processing)

Processing the RAW files

If you plan to create the panorama starting from the in-camera Jpeg images, you can safely skip this section. On the other hand, if you are shooting RAW you will need to process and prepare all the input images for Hugin. In this case it is important to make sure that the RAW processing parameters are exactly the same for all the shots. The best is to adjust the parameters on one reference image, and then batch-process the rest of the images using those settings.

Using PhotoFlow

Loading and processing a RAW file is rather easy:

  1. Click the “Open” button and choose the appropriate RAW file from your hard disk; the image preview area will show at this point a grey and rather dark image

  2. Add a “RAW developer” layer; a configuration dialog will show up which lets you access and modify all the typical RAW processing parameters (white balance, exposure, color conversion, etc… see screenshots below).

More details on the RAW processing in PhotoFlow can be found in this tutorial.

Once the result is OK, the RAW processing parameters need to be saved into a preset. This can be done in a couple of simple steps:

  1. Select the “RAW developer” layer and click on the “Save” button below the layers list widget (at the bottom-right of the PhotoFlow window)

  2. A file chooser dialog will pop up, where one has to choose an appropriate file name and location for the preset and then click “Save”;
    the preset file name must have a “.pfp” extension

The saved preset then needs to be applied to all the RAW files in the set. Under Linux, PhotoFlow comes with a handy script that automates the process. The script is called pfconv and can be found here. It is a wrapper around the pfbatch and exiftool commands, and is used to process and convert a bunch of files to TIFF format. Save the script in one of the folders included in your PATH environment variable (for example /usr/local/bin) and make it executable:

sudo chmod a+x /usr/local/bin/pfconv

Processing all RAW files of a given folder is quite easy. Assuming that the RAW processing preset is stored in the same folder under the name raw_params.pfp, run these commands in your preferred terminal application:

cd panorama_dir
pfconv -p raw_params.pfp *.NEF

Of course, you have to change panorama_dir to your actual folder and the .NEF extension to the one of your RAW files.
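
If you shot several panoramas in one session, the batch step itself can be scripted too. This is a minimal sketch of my own, assuming each panorama's RAW files (and a copy of the raw_params.pfp preset) live in their own pano_* folder:

# Process every panorama folder in turn, using the preset stored inside it
for d in pano_*/; do
    (cd "$d" && pfconv -p raw_params.pfp *.NEF)
done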

Now go for a cup of coffee, and be patient… a panorama with three or five bracketed shots for each view can easily have more than 50 files, and the processing can take half an hour or more. Once the processing has completed, there will be one TIFF file for each RAW image, and the fun with Hugin can start!

Assembling the shots

Hugin is a powerful and free software suite for stitching multiple shots into a seamless panorama, and more. Under Linux, Hugin can usually be installed through the package manager of your distribution; in the case of Ubuntu-based distros:

sudo apt-get install hugin

If you are running Hugin for the first time, I suggest switching the interface type to Advanced in order to have full control over the available parameters.

The first steps have to be done in the Photos tab:

  1. Click on Add images and load all the tiff files included in your panorama. Hugin should automatically determine the lens focal length and the exposure values from the EXIF data embedded in the tiff files.

  2. Click on Create control points to let hugin determine the anchor points that will be used to align the images and to determine the lens correction parameters so that all shots overlap perfectly. If the scene contains a large amount of clouds that have likely moved during the shooting, you can try setting the feature matching algorithm to cpfind+celeste to automatically exclude non-reliable control points in the clouds.

  3. Set the geometric parameters to Positions and Barrel Distortion and hit the Calculate button.

  4. Set the photometric parameters to High dynamic range, fixed exposure (since we are going to stitch bracketed shots that have been taken with fixed exposures), and hit the Calculate button again. (A command-line sketch of these alignment steps follows below.)
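
For the record, the alignment steps above can also be scripted with Hugin's command-line tools. This is my own sketch rather than part of the original workflow, and it assumes your Hugin build ships pto_gen, cpfind, autooptimiser and pano_modify:

# Create a project file from all TIFFs (focal length is read from EXIF)
pto_gen -o pano.pto *.tif
# Find control points; --celeste discards unreliable points in clouds
cpfind --multirow --celeste -o pano.pto pano.pto
# Optimise positions, barrel distortion and photometric parameters
autooptimiser -a -m -l -s -o pano.pto pano.pto
# Straighten, centre and crop the canvas automatically
pano_modify --canvas=AUTO --crop=AUTO -o pano.pto pano.pto

The GUI remains the place to pick the exact photometric model (High dynamic range, fixed exposure), so treat this sketch as a starting point only.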

At this point we can have a first look at the assembled panorama. Hugin provides an OpenGL-based previewer that can be opened by clicking on the GL icon in the top toolbar (marked with the arrow in the above screenshot). This will open a window like this:

If the shots have been taken handheld and are not perfectly aligned, the panorama will probably look a bit “wavy” like in my example. This can be easily fixed by clicking on the Straighten button (at the top of the Move/Drag tab). Next, the image can be centered in the preview area with the Center and Fit buttons.

If the horizon is still not straight, you can further correct it by dragging the center of the image up or down:

At this point, one can switch to the Projection tab and play with the different options. I usually find the Cylindrical projection better than the Equirectangular that is proposed by default (the vertical dimension is less “compressed”). For architectural panoramas that are not too wide, the Rectilinear projection can be a good option since vertical lines are kept straight.

If the projection type is changed, one has to click once more on the Center and Fit buttons.

Finally, you can switch to the Crop tab and click on the HDR Autocrop button to determine the limits of the area containing only valid pixels.

We are now done with the preview window; it can be closed and we can go back to the main window, in the Stitcher tab. Here we have to set the options to produce the output images the way we want. The idea is to blend each bracketed exposure into a separate panorama, and then use enfuse to create the final exposure-blended version. The intermediate panoramas, which will be saved along with the enfuse output, are already aligned with respect to each other and can be combined using different types of masks (luminosity, gradients, freehand, etc…).

The Stitcher tab has to be configured as in the image below, selecting Exposure fused from any arrangement and Blended layers of similar exposure, without exposure correction. I usually set the output format to TIFF to avoid compression artifacts.

The final act starts by clicking on the Stitch! button. The input images will be distorted, corrected for the lens vignetting and blended into seamless panoramas. The whole process is likely to take quite a while, so it is probably a good time to take a break…

At the end of the processing, a few new images should appear in the output directory: one with a “blended_fused.tif” suffix containing the output of the final enfuse step, and a few with an “_exposure????.tif” suffix that contain the intermediate panoramas for each exposure value.
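
As a side note, should you later want to redo just the fusion step without re-stitching, the intermediate panoramas can presumably be fed to enfuse by hand. A sketch, assuming the output prefix was pano:

# Re-run exposure fusion on the already-stitched intermediate panoramas
enfuse -o pano_blended_fused.tif pano_exposure_*.tif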

Blending the exposures

Very often, photo editing is all about getting what your eyes have seen out of what your camera has captured.

The image that will be edited through this tutorial is no exception: the human vision system can “compensate” for large luminosity variations and can “record” scenes with a wider dynamic range than your camera sensor. In the following I will attempt to restore such large dynamics by combining under- and over-exposed shots together, in a way that does not produce unpleasant halos or artifacts. Nevertheless, I have intentionally pushed the edit a bit “over the top” in order to better show how far one can go with such a technique.

This second part introduces a certain number of quite general editing ideas, mixed with details specific to their realization in PhotoFlow. Most of what is described here can be reproduced in GIMP with little extra effort, but without the ease of non-destructive editing.

The steps that I followed to go from one to the other can be more or less outlined like this:

  1. take the foreground from the +1EV version and the clouds from the -1EV version; use the exposure-blended Hugin output to improve the transition between the two exposures

  2. apply an S-shaped tonal curve to increase the overall brightness and add contrast.

  3. apply a combination of the a and b channels of the CIE-Lab colorspace in overlay blend mode to give more “pop” to the green and yellow regions in the foreground

The image below shows side-by-side three of the output images produced with Hugin at the end of the first part. The left part contains the brightest panorama, obtained by blending the shots taken at +1EV. The right part contains the darkest version, obtained from the shots taken at -1EV. Finally, the central part shows the result of running the enfuse program to combine the -1EV, 0EV and +1EV panoramas.

Comparison between the +1EV exposure (left), the enfuse output (center) and the -1EV exposure (right)

Exposure blending in general

In scenes that exhibit strong brightness variations, one often needs to combine different exposures in order to compress the dynamic range so that the overall contrast can be further tweaked without the risk of losing details in the shadows or highlights.

In this case, the name of the game is “seamless blending”, i.e. combining the exposures in a way that looks natural, without visible transitions or halos. In our specific case, the easiest thing would be to simply combine the +1EV and -1EV images through some smooth transition, like in the example below.

Simple blending of the +1EV and -1EV exposures

The result is not too bad; however, it is very difficult to avoid some brightening of the bottom part of the clouds (or alternatively some darkening of the hills), something that will most likely look artificial even if the effect is subtle (our brain will recognize that something is wrong, even if one cannot clearly explain the reason…). We need something to “bridge” the two images, so that the transition looks more natural.

At this point it is good to recall that the last step performed by Hugin was to call the enfuse program to blend the three bracketed exposures. The enfuse output sits somewhere between the -1EV and +1EV versions; however, a side-by-side comparison with the 0EV image reveals the subtle and sophisticated work done by the program: the foreground hill is brighter and the clouds are darker than in the 0EV version. And even more importantly, this job is done without triggering any alarm in your brain! Hence, the enfuse output is a perfect candidate to improve the transition between the hill and the sky.

Enfuse output (click to see 0EV version)

Exposure blending in PhotoFlow

It is time to put all the stuff together. First of all, we should open PhotoFlow and load the +1EV image. Next we need to add the enfuse output on top of it: for that you first need to add a new layer (1) and choose the Open image tool from the dialog that will open up (2) (see below).

Inserting an image from disk as a layer

After clicking the “OK” button, a new layer will be added and the corresponding configuration dialog will be shown. There you can choose the name of the file to be added; in this case, choose the one ending with “_blended_fused.tif” among those created by Hugin:

“Open image” tool dialog

Layer masks: theory (a bit) and practice (a lot)

For the moment, the new layer completely replaces the background image. This is not the desired result: instead, we want to keep the hills from the background layer and only take the clouds from the “_blended_fused.tif” version. In other words, we need a layer mask.

To access the mask associated to the “enfuse” layer, double-click on the small gradient icon next to the name of the layer itself. This will open a new tab with an initially empty stack, where we can start adding layers to generate the desired mask.

How to access the grayscale mask associated to a layer

In PhotoFlow, masks are edited the same way as the rest of the image: through a stack of layers that can be associated to most of the available tools. In this specific case, we are going to use a combination of gradients and curves to create a smooth transition that follows the shape of the edge between the hills and the clouds. The technique is explained in detail in this screencast.

To avoid the boring and lengthy procedure of creating all the necessary layers, you can download this preset file and load it as shown below:

The mask is initially a simple vertical linear gradient. At the bottom (where the mask is black) the associated layer is completely transparent and therefore hidden, while at the top (where the mask is white) the layer is completely opaque and therefore replaces anything below it. Everywhere in between, the layer has a degree of transparency equal to the shade of gray in the mask.

In order to show the mask, activate the “show active layer” radio button below the preview area, and then select the layer that has to be visualized. In the example above, I am showing the output of the topmost layer in the mask, the one called “transition”. Double-clicking on the name of the “transition” layer opens the corresponding configuration dialog, where the parameters of the layer (a curves adjustment in this case) can be modified. The curve is initially a simple diagonal: output values exactly match input ones.

If the rightmost point in the curve is moved to the left, and the leftmost to the right, it is possible to modify the vertical gradient and reduce the size of the transition between pure black and pure white, as shown below:

We are getting closer to our goal of revealing the hills from the background layer, by making the corresponding portion of the mask purely black. However, the transition we have obtained so far is straight, while the contour of the hills has a quite complex curvy shape… this is where the second curves adjustment, associated to the “modulation” layer, comes into play.

As one can see from the screenshot above, between the bottom gradient and the “transition” curve there is a group of three layers: a horizontal gradient, a modulation curve and an invert operation. Moreover, the group itself is combined with the bottom vertical gradient in grain merge blending mode.

Double-clicking on the “modulation” layer reveals a tone curve which is initially flat: output values are always 50%, independent of the input. Since the output of this “modulation” curve is combined with the bottom gradient in grain merge mode, nothing happens for the moment. However, something interesting happens when a new point is added and dragged in the curve: the shape of the mask exactly matches the curve, like in the example below.

The sky/hills transition

The technique introduced above is used here to create a precise and smooth transition between the sky and the hills. As you can see, with a sufficiently large number of points in the modulation curve one can precisely follow the shape of the hills:

The result of the blending looks like this (click the image to see the initial +1EV version):

Enfuse output blended with the +1EV image (click to see the initial +1EV version)

The sky already looks much denser and more saturated in this version, and the clouds have gained in volume and tonal variations. However, the -1EV image looks even better, therefore we are going to take the sky and clouds from it.

To include the -1EV image we are going to follow the same procedure used above for the enfuse output:

  1. add a new layer of type “Open image” and load the -1EV Hugin output (I’ve named this new layer “sky”)

  2. open the mask of the newly created layer and add a transition that reveals only the upper portion of the image

Fortunately we are not obliged to recreate the mask from scratch. PhotoFlow includes a feature called layer cloning, which allows one to dynamically copy the content of one layer into another. Dynamically in the sense that the pixel data gets copied on the fly, such that the destination always reflects the most recent state of the source layer.

After activating the mask of the “sky” layer, add a new layer inside it and choose the “clone layer” tool (see screenshot below).

Cloning a layer from one mask to another

In the tool configuration dialog that will pop up, one has to choose the desired source layer among those proposed in the list under the label “Layer name”. The generic naming scheme of the layers in the list is “[root group name]/root layer name/OMap/[mask group name]/[mask layer name]”, where the items inside square brackets are optional.

Choice of the clone source layer

In this specific case, I want to apply a smoother transition curve to the same base gradient already used in the mask of the “enfuse” layer. For that we need to choose “enfuse/OMap/gradient modulation (blended)” in order to clone the output of the “gradient modulation” group after the grain merge blend, and then add a new curves tool above the cloned layer:

The final transition mask between the hills and the sky

The result of all the efforts done up to now is shown below; it can be compared with the initial starting point by clicking on the image itself:

Edited image after blending the upper portion of the -1EV version through a layer mask. Click to see the initial +1EV image.

Contrast and saturation

We are not quite done yet, as the image is still a bit too dark and flat; however, this version will “tolerate” a contrast and luminance boost much better than a single exposure. In this case I’ve added a curves adjustment at the top of the layer’s stack, and I’ve drawn an S-shaped RGB tone curve as shown below:

The effect of this tone curve is to increase the overall brightness of the image (the middle point is moved to the left) and to compress the shadows and highlights without modifying the black and white points (i.e. the extremes of the curve). This curve definitely gives “pop” to the image (click to see the version before the tone adjustment):

Result of the S-shaped tonal adjustment (click the image to see the version before the adjustment).

However, this comes at the expense of an overall increase in the color saturation, which is a typical side effect of RGB curves. While this saturation boost looks quite nice in the hills, the effect is rather disastrous in the sky. The blue has turned electric, and is far from what a nice, saturated blue sky should look like!

However, there is a simple fix to this problem: change the blend mode of the curves layer from Normal to Luminosity. The tone curve in this case only modifies the luminosity of the image, but preserves as much as possible the original colors. The difference between Normal and Luminosity blending is shown below (click to see the Normal blending). As one can see, the Luminosity blend tends to produce a duller image, therefore we will need to fix the overall saturation in the next step.

S-shaped tonal adjustment with Luminosity blend mode (click the image to see the version with Normal blend mode).

To adjust the overall saturation of the image, let’s now add a Hue/Saturation layer above the tone curve and set the saturation value to +50. The result is shown below (click to see the Luminosity blend output).

Saturation set to +50 (click the image to see the Luminosity blend output).

This definitely looks better on the hills, however the sky is again “too blue”. The solution is to decrease the saturation of the top part through an opacity mask. In this case I have followed the same steps as for the mask of the sky blend, but I’ve changed the transition curve to the one shown here:

Saturation mask

In the bottom part the mask is perfectly white, and therefore the full +50 saturation boost is applied. At the top the mask is instead only about 30%, so the saturation is increased by only about +15. This gives a better overall color balance to the whole image:

Saturation set to +50 through a transition mask (click the image to see the Luminosity blend output).

Lab blending

The image is already quite OK, but I would still like to add some more tonal variations in the hills. This could be done with lots of different techniques, but in this case I will use one that is very simple and straightforward, and that does not require any complex curve or mask since it uses the image data itself. The basic idea is to take the a and/or b channels of the Lab colorspace, and combine them with the image itself in Overlay blend mode. This will introduce tonal variations depending on the color of the pixels (since the a and b channels only encode the color information). Here I will assume you are quite familiar with the Lab colorspace. Otherwise, here is the link to the Wikipedia page that should give you enough information to follow the rest of the tutorial.

Looking at the image, one can already guess that most of the areas in the hills have a yellow component, and will therefore be positive in the b channel, while the sky and clouds are neutral or strongly blue, and therefore have b values that are negative or close to zero. The grass is obviously green and therefore negative in the a channel, while the vineyards are brownish and therefore most likely have positive a values. In PhotoFlow the a and b values are re-mapped to a range between 0 and 100%, so that for example a=0 corresponds to 50%. You will see that this is very convenient for channel blending.
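
As a worked example of that re-mapping (the exact internal range is my assumption here; I take the common convention of a and b values in $[-128, 127]$):

$$ m(a) = \frac{a + 128}{256}, \qquad m(0) = \frac{128}{256} = 50\% $$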

My goal is to lighten the green and the yellow tones, to create a better contrast around the vineyards and add some “volume” to the grass and trees. Let’s first of all inspect the a channel: for that, we’ll need to add a group layer on top of everything (I’ve called it “ab overlay”) and then add a clone layer inside this group. The source of the clone layer is set to the a channel of the “background” layer, as shown in this screenshot:

Cloning of the Lab “a” channel of the background layer

A copy of the a channel is shown below, with the contrast enhanced to better see the tonal variations (click to see the original version):

The Lab a channel (boosted contrast)

As we have already seen, in the a channel the grass is negative and therefore looks dark in the image above. If we want to lighten the grass we therefore need to invert it, to obtain this:

The inverted Lab a channel (boosted contrast)

Let’s now consider the b channel: as surprising as it might seem, the grass is actually more yellow than green, or at least the b channel values in the grass are higher than the inverted a values. In addition, the trees at the top of the hill stick nicely out of the clouds, much more than in the a channel. All in all, a combination of the two Lab channels seems to be the best for what we want to achieve.

With one exception: the blue sky is very dark in the b channel, while the goal is to leave the sky almost unchanged. The solution is to blend the b channel into the a channel in Lighten mode, so that only the b pixels that are lighter than the corresponding a ones end up in the blended image. The result is shown below (click on the image to see the b channel).

b channel blended in Lighten mode (boosted contrast, click the image to see the b channel itself).

And these are the blended a and b channels with the original contrast:

The final a and b mask, without contrast correction

The last act is to change the blending mode of the “ab overlay” group to Overlay: the grass and trees get some nice “pop”, while the sky remains basically unchanged:

Lab channels overlay (click to see the image after the saturation adjustment).

I’m now almost satisfied with the result, except for one thing: the Lab overlay makes the yellow area on the left of the image way too bright. The solution is a gradient mask (horizontal this time) associated to the “ab overlay” group, to exclude the left part of the image as shown below:

overlay blend mask

The final, masked image is shown here, to be compared with the initial starting point:

The image after the masked Lab overlay blend (click to see the initial +1EV version).

The Final Touch

Throughout the tutorial I have intentionally pushed the editing quite a bit beyond what I would personally find acceptable. The idea was to show how far one can go with the techniques I have described; fortunately, non-destructive editing allows us to retrace our steps and reduce the strength of the various effects until the result looks right.

In this specific case, I have lowered the opacity of the “contrast” layer to 90%, the one of the “saturation” layer to 80% and the one of the “ab overlay” group to 40%. Then, feeling that the “b channel” blend was still brightening the yellow areas too much, I have reduced the opacity of the “b channel” layer to 70%.

Opacities adjusted for a “softer” edit (click on the image to see the previous version).

Another thing I still did not like in the image was the overall color balance: the grass in the foreground looked a bit too “emerald” instead of “yellowish green”, therefore I thought that the image could profit from a general warming-up of the colors. For that I have added a curves layer at the top of the editing stack, and brought down the middle of the curve in both the green and blue channels. The move needs to be quite subtle: I brought the middle point down from 50% to 47% in the greens and 45% in the blues, and then I further reduced the opacity of the adjustment to 50%. Here comes the warmed-up version, compared with the image before:

“Warmer” version (click to see the previous version)

At this point I was almost satisfied. However, I still found that the green stuff at the bottom-right of the image attracted too much of my attention and distracted the eye. Therefore I darkened the bottom of the image with a slightly curved gradient applied in “soft light” blend mode. The gradient was created with the same technique used for blending the various exposures. The transition curve is shown below: in this case, the top part was set to 50% gray (remember that we blend the gradient in “soft light” mode) and the bottom part was moved a bit below 50% to obtain a slight darkening effect:

Gradient used for darkening the bottom of the image.

It’s done! If you managed to follow me ‘till the end, you are now rewarded with the final image in all its glory, which you can again compare with the initial starting point.

The final image (click to see the initial +1EV version).

It has been a quite long journey to arrive here… and I hope not to have lost too many followers on the way!

June 24, 2015

Introducing the Linux Vendor Firmware Service

As some of you may know, I’ve spent the last couple of months talking with various Red Hat partners and other OpenHardware vendors that produce firmware updates. These include most of the laptop vendors that you know and love, along with a few more companies making very specialized hardware.

We’ve now got a process, fwupd, that is capable of taking the packaged update and applying it to the hardware using various forms of upload mechanism. We’ve got a specification, AppStream, which is used to describe the updates and provide metadata for what firmware updates are available to be installed. What we were missing was to “close the circle” and provide a web service for small and medium size vendors to use to upload new firmware and make it available to Linux users.
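
On the client side, the intended flow then looks roughly like this; a sketch using the fwupdmgr command-line client that ships with fwupd (the exact subcommands may still change while things stabilise):

# List the devices fwupd knows how to update
fwupdmgr get-devices
# Pull down the latest update metadata, then apply any pending updates
fwupdmgr refresh
fwupdmgr update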

Microsoft already provides such a thing for vendors to use, and it’s part of the Microsoft Update service. From the vendors I’ve talked to, the majority don’t want to run any tools on their firmware to generate metadata. Most of them don’t even want to commit to hosting the metadata or firmware files in the same place forever, and with a couple of exceptions they actually like the Microsoft Update model.

I’ve created a simple web service that’s being called Linux Vendor Firmware Service (perhaps not the final name). You can see the site in action here, although it’s not terribly useful or exciting if you’re not a hardware vendor.

If you are a vendor that produces firmware and want an access key for the beta site, please let me know. All firmware uploaded will be transferred to the final site, although I’m still waiting to hear back from Red Hat legal about a longer version of the redistribution agreement.

Anyway, comments very welcome, thanks.

designing openType‐features UI /intro

This blog post kicks off my involvement with bringing openType features to F/LOSS (typo‐)graphical software. I will explain why this is a special, complicated project, followed by the approach and structure I will apply to this design project and finish with what to expect as deliverables.

a bit of a situation

First things first. It is quite likely that when you are reading this, you know what openType features in fonts are. But just in case you don’t, here is a friendly, illustrated explanation of some of the features, without putting you straight into corporate specification hell. The reason I said ‘some’ will become clear below.

What is interesting is that there is a riot going on. The 800‑pound gorillas of (typo‐)graphical software—the adobe creative suite applications—have such bad and disparate UI for handling openType features that a grass‐roots protest movement started among typographers and font designers to do something about it. What followed was a petition and a hasty promise by adobe to do better—in the future.

meanwhile in Toronto…

These events prodded Nathan Willis into action, because ‘open‐source applications aren’t any better in this regard.’ He organised an openType workshop at this year’s LGM to get a process started to change that. I went there because this is smack in the middle of one of my fields of specialisation: interaction for creatives. As you can read in Nathan’s report, I got immediately drawn into the UI discussion and now we have a loose‐knit project.

The contents and vibe of the questions, and my answers, in the UI discussion all pointed in a certain direction, that I was only able to name a day later: harmonised openType features for all F/LOSS (typo‐)graphical applications definitely has an infrastructure component.

the untouchables

Pure infrastructure—e.g. tap water, electricity, telecoms—offers its designers some unique challenges:

everybody uses it
and everybody’s needs are equally important; there is no opportunity to optimise the design for the specific needs of user groups.
nobody cares
usage is ubiquitous, i.e. we all do not even register that we are using this stuff all the time—until it stops working, then we miss it a hundred times a day. This makes it very hard to research; no recollection, feelings or values are connected to infrastructure, just entitlement.
anyplace, anywhere, anytime
there is no specific contextual information to work with: why is it used; what is the goal; what does it mean in the overall scheme of things; how much is a little, and a lot; is it used sparsely, all the time, at regular intervals, in bursts? It all depends and it all happens. Just deal with it, all of it.
millions of use cases
(not that I consider use cases a method that contributes positively to any software project, but‐) in the case of infrastructure something funny and instructive happens: after a week or two of exploration and mapping, the number of use cases grows exponentially towards a million and… keeps on growing. I have seen this happen, it is like peeling an onion and for every layer you peel off, the number goes up by an order of magnitude. These millions of use cases are an expression of everybody using it anyplace, anywhere, anytime.
heterogeneous capabilities
this is not always the case, but what is available can vary, a lot. For instance public transport: how many connections (incl. zero) are available for a given trip—and how fast, frequent and comfortable these are—is set by the network routes and timetables. An asked‑for capability is on offer, or not. It all depends and it all happens. Just deal with it, all of it.

I have worked as design lead on two infrastructure projects. One was Nokia dual‑SIM, the other openPrinting, where we designed printing dialogs for all linux users (everyone), as used in 10.000 applications (anyplace, anywhere, anytime), connected to 10.000 different printer models (heterogeneous capabilities). I dubbed it the project with five million use cases.

Ah, and since both application and printer configure the available options of the print dialog, there are potentially 100 million configurations. Even if in reality the variability is far less (say, just 1% on both application and printer side; i.e. 100 significantly different printer models and 100 apps that add serious, vital printing options), then it is still an overwhelming 10.000 configurations.
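
In compact form:

$$ 10^4 \times 10^4 = 10^8 \ \text{potential configurations}; \qquad (1\% \cdot 10^4)^2 = 100 \times 100 = 10^4 \ \text{realistic ones}. $$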

drowning, not waving

In my experience, designing infrastructure is very demanding. All the time one switches between making the big, abstract plan for everything, and designing, minutely, one of many details in complete isolation. Mid‑level interaction design, the journeyman, run‑of‐the‑mill, lay‑out‐a‑screen level, is completely missing.

It is like landscaping a featureless desert, where every grain of sand is a detail that has to be dealt with. With no focus on particular users, no basis for research, no context, no just‑design‐the‐example, millions of use cases and highly variable capabilities, I have seen very capable design colleagues lose their bearings and give up.

back at the ranch

Enough war stories. How large is this infrastructure component of openType features in (typo‐)graphical software? Let’s check the list:

  • everybody uses it—nope. Whether the user groups turn out to be defined real narrow or quite wide—a matter of vision—they will have in common that all of them know their typesetting. That is a craft, not common knowledge.
  • nobody cares—well, soon they won’t. Right now there is upheaval because nothing is working. As soon as users get a working solution in the programs they use, it will become as interesting as the streetlights in your street.
  • anyplace, anywhere, anytime—right on! This has to work in (typo‐)graphical software; all of it—even the kind I have never heard of, or that will be invented in five years from now. All we know, is that serious typesetting is performed there by users, on any length of text selection.
  • millions of use cases—not quite. The limited user group provides the brakes here. But there is no such limit from the application side; on the contrary: most of these are (open‐ended) tools for creatives. Just thinking about how flexible a medium text is, for information or shapes, gives me the confidence to say that 10.000 use cases could be compiled, if someone would sit down and do it.
  • heterogeneous capabilities—hell yeah! OpenType‐features support in fonts is all over the place and not just because of negligence. First there is the kaleidoscopic diversity of scripts used around the world, most of which you and I have never heard of. Latin script is just the tip of the iceberg. Furthermore, what is supported, and how each supported feature is actually realised, is completely up to the font designer. The openType‐features standard is open‐ended and creates opportunities for adding sophistication. This is only limited by the combined imagination of the font design community.

Adding that up, we get a score of 3½ out of 5. By doing this exercise I have just found out that openType features in (typo‐)graphical software is 70% infrastructural. This is what I meant when I said that this is a special, complicated project.

structure—the future

In projects like these structuring the design work is make‑or‐break; either we set off in the right direction, or never get to any destination—not even a wrong one. The structure I use is no secret. Here is my adaptation for this project:

A product vision is not that easy to formulate for pure infrastructure; it tends to shrink towards ‘because it’s there.’ For instance at openPrinting the vision was ‘printing that just works.’ I still regret not having twisted some arms to get a value statement added to that. There were times that this value void was keeping us from creating true next‐generation solutions.

Apart from ‘what’s the value?’ also ‘who is this for?’ needs to be clarified; as we saw earlier, openType features is not for everyone. The identity question, ‘what is it we are making?’ may be a lot less spectacular, but it needs to be agreed. I will take this to the Create mailing list first, mainly to find out who are the ‘fish that swim upfront’, i.e. the people with vision and drive. Step two is an online vision session, resulting in a defined vision.

The deliverable is a to‑the‐point vision statement. If you want to get a good picture of what that entails, then I recommend you read this super‐informative blog post. Bonus: it is completely font‐related.

we want the funk, the whole funk, nothing but the funk

A deep understanding of the functionality is the key to success in this project. I already got burned once with openType features in the Metapolator project. Several font designers told me: ‘it is only a bunch of substitution rules.’ Until it turned out it isn’t. Then at the LGM meeting another surprise complication surfaced. Later I briefly checked the specification and there was yet another.

This is what I meant before with that friendly page explaining some of the features. I do not trust it to be complete (and it is only Latin‐centric, anyway). As interaction architect I will have to be completely on top of the functionality, never having to rely on someone else to explain to me what ‘is in the box.’ This means knowing the openType standards.

Central to it is the feature tags specification and the feature definition syntax. This contains both the material for understanding how complicated it all can get and the structures that I can use to formulate UI solutions. It is one of the few aspects that are firm and finite in this project.

The deliverable is a functionality overview, written up in the project wiki.

talking heads

I will do user research, say interview half a dozen users, to gain insight into the act of typesetting, the other aspect that is firm and finite in this project. Which users to recruit depends on what is defined in the product vision. Note that the focus is on the essence of typesetting, while ignoring its specific role in the different (typo‐)graphical applications, and not get swamped by the latter’s diversity.

The deliverable is notes of interest from the interviews, written up in the wiki.

I look forward to an exchange with F/LOSS (typo‐)graphical applications via the Create list. This is not intended to get some kind of inventory of all the apps and how different they are. In this project that is taken as abstract and infinite—the good old infrastructural way.

What I want to find out is in how many different ways openType features must, or can, be integrated in the UIs of (typo‐)graphical applications. In blunt terms: how much space is there available for this stuff, what shape does it have and what is the duty cycle (permanently displayed, or a pop‑up, or…)? These diverse application needs are clustered into just enough UI models (say, six) and used below.

The deliverable is the UI models, written up in the wiki.

getting an eyeful

Then it is time to do an expert evaluation of existing openType‐features UI and all these UI ideas offered by users when the petition did its rounds. All of these get evaluated against—

  • the product vision: does it realise the goals? Is it appropriate for the defined user groups?
  • the functionality: can it cope with the heterogeneous capabilities?
  • the user research: how tuned is it for the essence of typesetting?
  • the UI models: how well does it fit with each model?

All of it gets analysed, then sorted into the good, the bad and the ugly. There will be a tiny amount of gold, mostly in the form of ideas and intentions—not really what one would call a design—and a large catalog of what exactly not to do.

The deliverable is notes of interest from the evaluation, written up in the wiki.

warp drive

Then comes the moment to stop looking backwards and start working forwards; to start creating the future. First a solutions model is made. This is a combination of a broad‐strokes solution that cuts the project down to manageable proportions and a defined approach for how to deal with the rest, the more detailed design work.

The next stage is to design a generic solution, one that already deals with all of it, all the hairy stuff: text selections of any length, all the heterogeneous capabilities, the typesetting workflow, clear representation of all openType features available and their current state. This will be specified in a wiki, in the form of UI patterns.

With the generic solution in place it will be real clear for the central software library in this small universe, HarfBuzz, which new capabilities it will need to offer to F/LOSS (typo‐)graphical software.

home straight

The final design phase is to work out the generic solution for each UI model. These will still be toolkit agnostic (not specific for KDE or gnome) and, btw, for desktop UI‐only (touch is a whole ’nother kettle of fish). This will also be specified in the wiki.

With this, every (typo‐)graphical software project can go to the wiki, pick a UI model that most matches their own UI structure and see a concrete UI design that, with a minimum of adaptations, they can implement in their own application. They will find that HarfBuzz fully supports their implementation.

While working on Metapolator in the last year I had good experience with sharing what I was doing almost every day I was working on it, through its community. There was encouragement, ideas, discussions, petitions and corrections—all useful. I think this can be replicated on the Create list.

June 23, 2015

Cross-Platform Android Development Toolkits: Kivy vs. PhoneGap / Cordova

Although Ant builds have made Android development much easier, I've long been curious about the cross-platform phone development apps: you write a simple app in some common language, like HTML or Python, then run something that can turn it into apps on multiple mobile platforms, like Android, iOS, Blackberry, Windows Phone, UbuntuOS, FirefoxOS or Tizen.

Last week I tried two of the many cross-platform mobile frameworks: Kivy and PhoneGap.

Kivy lets you develop in Python, which sounded like a big plus. I went to a Kivy talk at PyCon a year ago and it looked pretty interesting. PhoneGap takes web apps written in HTML, CSS and Javascript and packages them like native applications. PhoneGap seems much more popular, but I wanted to see how it and Kivy compared. Both projects are free, open source software.

If you want to skip the gory details, skip to the summary: how do Kivy and PhoneGap compare?

PhoneGap

I tried PhoneGap first. It's based on Node.js, so the first step was installing that. Debian has packages for nodejs, so apt-get install nodejs npm nodejs-legacy did the trick. You need nodejs-legacy to get the "node" command, which you'll need for installing PhoneGap.

Now comes a confusing part. You'll be using npm to install ... something. But depending on which tutorial you're following, it may tell you to install and use either phonegap or cordova.

Cordova is an Apache project which is intertwined with PhoneGap. After reading all their FAQs on the subject, I'm as confused as ever about where PhoneGap ends and Cordova begins, which one is newer, which one is more open-source, whether I should say I'm developing in PhoneGap or Cordova, or even whether I should be asking questions on the #phonegap or #cordova channels on Freenode. (The one question I had, which came up later in the process, I asked on #phonegap and got a helpful answer very quickly.) Neither one is packaged in Debian.

After some searching for a good, comprehensive tutorial, I ended up on a Cordova tutorial rather than a PhoneGap one. So I typed:

sudo npm install -g cordova

Once it's installed, you can create a new app, add the android platform (assuming you already have android development tools installed) and build your new app:

cordova create hello com.example.hello HelloWorld
cordova platform add android
cordova build

Oops!

Error: Please install Android target: "android-22"

Apparently Cordova/PhoneGap can only build with its own preferred version of Android, which currently is 22. Editing files to specify android-19 didn't work for me; it just gave errors at a different point.

So I fired up the Android SDK manager, selected android-22 for install, accepted the license ... and waited ... and waited. In the end it took over two hours to download the android-22 SDK; the system image is 13Gb! So that's a bit of a strike against PhoneGap.
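
For what it's worth, the same download can be scripted instead of using the GUI SDK manager. A sketch using the stock android tool (untested on my side, and the filter name is an assumption):

# Show what is available, then fetch just the android-22 platform
android list sdk --all --extended
android update sdk --no-ui --filter android-22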

While I was waiting for android-22 to download, I took a look at Kivy.

Kivy

As a Python enthusiast, I wanted to like Kivy best. Plus, it's in the Debian repositories: I installed it with sudo apt-get install python-kivy python-kivy-examples

They have a nice quickstart tutorial for writing a Hello World app on their site. You write it, run it locally in python to bring up a window and see what the app will look like. But then the tutorial immediately jumps into more advanced programming without telling you how to build and deploy your Hello World. For Android, that information is in the Android Packaging Guide. They recommend an app called Buildozer (cute name), which you have to pull from git, build and install.
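
The install steps were along these lines (sketched from memory; the URL is the project's public GitHub repository):

# Fetch, build and install Buildozer from source
git clone https://github.com/kivy/buildozer.git
cd buildozer
sudo python setup.py install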

buildozer init
buildozer android debug deploy run
The second command got started on building ... but then I noticed that it was attempting to download and build its own version of Apache Ant (sort of a Java version of make). I already have ant -- I've been using it for weeks for building my own Java Android apps. Why did it want a different version?

The file buildozer.spec in your project's directory lets you uncomment and customize variables like:

# (int) Android SDK version to use
android.sdk = 21

# (str) Android NDK directory (if empty, it will be automatically downloaded.)
# android.ndk_path = 

# (str) Android SDK directory (if empty, it will be automatically downloaded.)
# android.sdk_path = 

Unlike a lot of Android build packages, buildozer will not inherit variables like ANDROID_SDK, ANDROID_NDK and ANDROID_HOME from your environment; you must edit buildozer.spec.
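
So with a pre-existing SDK and NDK, the spec can be edited along these lines (the paths here are hypothetical):

# Point buildozer at existing SDK/NDK installs instead of downloading them
android.sdk_path = /home/you/android-sdk
android.ndk_path = /home/you/android-ndk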

But that doesn't help with ant. Fortunately, when I inspected the Python code for buildozer itself, I discovered there was another variable that isn't mentioned in the default spec file. Just add this line:

android.ant_path = /usr/bin

Next, buildozer gave me a slew of compilation errors:

kivy/graphics/opengl.c: No such file or directory
 ... many many more lines of compilation interspersed with errors
kivy/graphics/vbo.c:1:2: error: #error Do not use this file, it is the result of a failed Cython compilation.

I had to ask on #kivy to solve that one. It turns out that the current version of cython, 0.22, doesn't work with kivy stable. My choices were to uninstall kivy and pull the development version from git, or to uninstall cython and install version 0.21.2 via pip. I opted for the latter. Either way, there's no "make clean", so removing the dist and build directories let me start over with the new cython.

sudo apt-get purge cython
sudo pip install Cython==0.21.2
rm -rf ./.buildozer/android/platform/python-for-android/dist
rm -rf ./.buildozer/android/platform/python-for-android/build

Buildozer was now happy, and proceeded to download and build Python-2.7.2, pygame and a large collection of other Python libraries for the ARM platform. Apparently each app packages the Python language and all libraries it needs into the Android .apk file.

Eventually I ran into trouble because I'd named my python file hello.py instead of main.py; apparently this is something you're not allowed to change and they don't mention it in the docs, but that was easily solved. Then I ran into trouble again:

Exception: Unable to find capture version in ./main.py (looking for `__version__ = ['"](.*)['"]`)

The buildozer.spec file offers two types of versioning: by default "method 1" is enabled, but I never figured out how to get past that error with "method 1", so I commented it out and uncommented "method 2". With that, I was finally able to build an Android package; the relevant spec lines are sketched below.
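
For reference, the relevant corner of my buildozer.spec ended up looking roughly like this (the version string itself is just an example):

# (str) Application versioning (method 1) -- disabled
# version.regex = __version__ = ['"](.*)['"]
# version.filename = %(source.dir)s/main.py

# (str) Application versioning (method 2)
version = 0.1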

The .apk file it created was quite large because of all the embedded Python libraries: for the little 77-line pong demo, /usr/share/kivy-examples/tutorials/pong in the Debian kivy-examples package, the apk came out 7.3Mb. For comparison, my FeedViewer native java app, roughly 2000 lines of Java plus a few XML files, produces a 44k apk.

The next step was to make a real mini app. But when I looked through the Kivy examples, they all seemed highly specialized, and I couldn't find any documentation that addressed issues like what widgets were available or how to lay them out. How do I add a basic text widget? How do I put a button next to it? How do I get the app to launch in portrait rather than landscape mode? Is there any way to speed up the very slow initialization?

I'd spent a few hours on Kivy and made a Hello World app, but I was having trouble figuring out how to do anything more. I needed a change of scenery.

PhoneGap, redux

By this time, android-22 had finally finished downloading. I was ready to try PhoneGap again.

This time,

cordova platforms add android
cordova build
worked fine. It took a long time, because it downloaded the huge gradle build system rather than using something simpler like ant. I already have a copy of gradle somewhere (I downloaded it for the OsmAnd build), but it's not in my path, and I was too beaten down by this point to figure out where it was and how to get cordova to point to it.
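
In hindsight, pointing cordova at the existing copy would presumably just mean putting gradle on the PATH before building; a sketch with a hypothetical install location:

# Make the already-downloaded gradle visible to cordova's build step
export PATH="$PATH:/home/you/gradle/bin"
cordova build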

Cordova eventually produced a 1.8Mb "hello world" apk -- a quarter the size of the Kivy package, though 20 times as big as a native Java app. Deployed on Android, it initialized much faster than the Kivy app, and came up in portrait mode but rotated correctly if I rotated the phone.

Editing the HTML, CSS and Javascript was fairly simple. You'll want to replace pretty much all of the default CSS if you don't want your app monopolized by the Cordova icon.

The only tricky part was file access: opening a file:// URL didn't work. I asked on #phonegap and someone helpfully told me I'd need the file plugin. That was easy to find in the documentation, and I added it like this:

cordova plugin search file
cordova plugin add org.apache.cordova.file

My final apk, for a small web app I use regularly on Android, was almost the same size as their hello world example: 1.8Mb. And it works great: phonegap had no problem playing an audio clip, something that was tricky when I was trying to do the same thing from a native Android java WebView class.

Summary: How do Kivy and PhoneGap compare?

This has been a long article, I know. So how do Kivy and PhoneGap compare, and which one will I be using?

They both need a large amount of disk space for the development environment. I wish I had good numbers to give you, but I was working with both systems at the same time, and their packages are scattered all over the disk so I haven't found a good way of measuring their size. I suspect PhoneGap is quite a bit bigger, because it uses gradle rather than ant and because it insists on android-22.

On the other hand, PhoneGap wins big on packaged application size: its .apk files are a quarter the size of Kivy's.

PhoneGap definitely wins on documentation. Kivy has seemingly lots of documentation, but its tutorials jumped around rather than following a logical sequence, and I had trouble finding answers to basic questions like "How do I display a text field with a button?" PhoneGap doesn't need that, because the UI is basic HTML and CSS -- limited though they are, at least most people know how to use them.

Finally, PhoneGap wins on startup speed. For my very simple test app, startup was more or less immediate, while the Kivy Hello World app required several seconds of startup time on my Galaxy S4.

Kivy is an interesting project. I like the ant-based build, the straightforward .spec file, and of course the Python language. But it still has some catching up to do in performance and documentation. For throwing together a simple app and packaging it for Android, I have to give the win to PhoneGap.

Font Features Land in Inkscape Trunk

I’ve just landed basic font features support in the development version of Inkscape. What are font features and why should you be excited? (And maybe why should you not be too excited.)

The letter combination 'st' shown without a ligature and with a 'historical' ligature.

Font Features

Font features support allows one to enable (or disable) the OpenType tables within a given font, allowing you to select alternative glyphs for rendering text.

A series of examples showing the same text with and without applying various OpenType tables.

A sample of font features in action. The font is Linux Biolinum which has reasonable OpenType tables. Try the SVG (with WOFF).

The new CSS Fonts Module Level 3 adds a variety of CSS properties for defining which OpenType tables to enable/disable (as well as having nice examples of each property’s use — this is one of the more readable W3C specifications). Inkscape trunk supports the ‘font-variant-ligatures’, ‘font-variant-caps’, ‘font-variant-numeric’, ‘font-variant-position’, and ‘font-feature-settings’ properties. The properties can be set under the Variants tab in the Text and Font dialog.

The 'Variants' Tab in the 'Text and Fonts' dialog showing a series of buttons to select which font features are enabled.

The Variants tab in the Text and Font dialog.

Why you shouldn’t be too excited

Being able to enable various font features within a font is quite exciting but there are quite a few caveats at the moment:

  • One must use a trunk build of Inkscape linked with the latest unstable version of Pango (1.37.1 or greater).
  • Font feature support in fonts is usually minimal and often buggy. It’s hard to know what OpenType tables are available in which fonts.
  • Browser support is sparse. Firefox has rather good support. Chrome support seems limited to ligatures.
  • Correct display of alternative glyphs requires that the same font used in content creation is also used for rendering. On the Web the best way to do this is to use WOFF, but Inkscape has no support for using user fonts (this is a future goal of Inkscape but will require considerable work).

Thanks

I would like to thank: Behdad Esfahbod, maintainer of Pango for adding the code to Pango to make accessing the OpenType tables possible. Thanks as well to Matthias Clasen and Akira Togoh who are the source of the patch to Pango. Thanks also to all the people that supported the Inkscape Hackfest in Toronto where I was able to meet and discuss Pango issues with Behdad in person and also where the idea of adding font feature support to Inkscape germinated.

June 22, 2015

Penultimate Kickstarter voting results

Two weeks before voting closes we’re at a response rate of 91.38%: 604 of 661 possible votes. If you’re eligible to vote and haven’t done so yet, you have until 10am CEST on July 6 to make the response rate even higher! Note that no-award backers who have pledged 15 euros or more can also vote, though they haven’t received a survey. If this is you, please send mail to irina@krita.org, either with your vote or to ask for the list.

We collected enough pledges for nine whole stretch goals. Two 1500-euro backers each added a stretch goal of their own: one already in the list (“Update the look & feel of the layer docker” which is at #6, meaning that #10, “Stacked brushes” got in as well) and one off-list, “Lazy Brush” — you can see how it works here.

The table below shows the penultimate results, with the related phabricator task in the last column. To access phabricator you will need a KDE identity account, which can be made at identity.kde.org. If you have a forum account, you already have a KDE identity account; use this login information in the ‘LDAP’ login area. The phabricator tasks are where we discuss the requirements of each feature, which means that all considerations about the implementation are mentioned there. You can subscribe to a phabricator task to get e-mail updates on it.

Rank  Votes  %  Stretch goal  Phabricator task
0  N/A  N/A  Extra: Lazy Brush: interactive tool for coloring the image in a couple of strokes  T372
1 116 19.02% 10. Animated file formats export: animated gif, animated png and spritemaps T116
2 54 8.85% 8. Rulers and guides: drag out guides from the rulers and generate, save and load common sets of guides. Save guides with the document. T114
3 50 8.20% 1. Multiple layer selection improvements T105
4 47 7.70% 19. Make it possible to edit brush tips in Krita T125
5 41 6.72% 21. Implement a Heads-Up-Display to manipulate the common brush settings: opacity, size, flow and others. T127
6 38 6.23% 2. Update the look & feel of the layer docker panel (1500 euro stretch goal) T106
7 36 5.90% 22. Fuzzy strokes: make the stroke consistent, but add randomness between strokes. T166
8 33 5.41% 5. Improve grids: add a grid docker, add new grid definitions, snap to grid T109
9 31 5.08% 6. Manage palettes and color swatches T112
10 28 4.59% 18. Stacked brushes: stack two or more brushes together and use them in one stroke T124

And these didn’t make it, but we’re keeping them for next time:

  Rank  Votes  %  Stretch goal
11 23 3.77% 4. Select presets using keyboard shortcuts
12 19 3.11% 13. Scale from center pivot: right now, we transform from the corners, not the pivot point.
13 18 2.95% 9. Composition helps: vector objects that you can place and that help with creating rules of thirds, spiral, golden mean and other compositions.
14 18 2.95% 7. Implement a Heads-Up-Display for easy manipulation of the view
15 16 2.62% 20. Select textures on the fly to use in textured brushes
16 9 1.48% 15. HDR gradients
17 9 1.48% 11. Add precision to the layer move tool
18 8 1.31% 17. Gradient map filter
19 5 0.82% 16. On-canvas gradient previews
20 5 0.82% 12. Show a tooltip when hovering over a layer with content to show which one you’re going to move.
21 3 0.49% 3. Improve feedback when using more than one color space in a single image
22 3 0.49% 14. Add a gradient editor for stop gradients

June 20, 2015

Call to testers

We are working on restoring multimedia support (audio and video playback) within Stellarium.

At the moment Stellarium can play back audio and video on Linux and on Windows (partially?). The development team has prepared binary packages for the three supported platforms for public testing of this feature.

Please check that the GZ_videotest_MP4.ssc and GZ_videotest_WMV.ssc scripts work within Stellarium 0.13.57.1 on Windows/OS X (download page: https://launchpad.net/stellarium/+download)

Ubuntu users (14.10+) can use this PPA for testing - ppa:alexwolf/stellarium-media

Thank you!

June 19, 2015

Krita 2.9.5.1 and Bug Week!

It’s been a while since we made a new build of Krita… So, here’s Krita 2.9.5.1! In all the hectic activity surrounding the Kickstarter campaign, we worked our tails off to add new features, improvements and fixes, and that caused considerable churn in the code. And that, in turn, meant that 2.9.5.0 was a bit, well, dot zero! So here’s 2.9.5.1 with the following improvements:

Features

  • Implemented a composite RGB curve for the Curves filter
  • Added a Fish Eye Vanishing Point assistant
  • Added a concentric ellipse assistant
  • Made the Settings dialog’s Defaults button only set the defaults for the currently selected settings page
  • Added memory configuration options, including the location of the temporary scratch files
  • Added a profiler option: https://userbase.kde.org/Krita/Manual/Preferences/Performance
  • Added the ability to create a copy of the currently open image (wish 348256)
  • Added a one-way pressure sensor (in the sensors) (wish 344753)
  • Added a memory consumption readout to the statusbar

Fixed Bugs

  • Only set the resolution using TIFF tags if they exist; this had caused issues with Krita saving JPEG files to .kra.
  • BUG:349078 Fix trimming an image under Filter Layers
  • BUG:324505,294122 Fix Adjustment layers composition
  • BUG:349185 Fix explicitly showing the cursor when the Stabilizer is active
  • Fix showing a floating message when switching MDI subwindows
  • BUG:348533 Fixed a bug where the tools became disabled after creating a new document
  • BUG:331708,349108 Fix a crash when redoing actions
  • BUG:348737 Fix a copy/paste error: ‘fade’ isn’t ‘speed’
  • BUG:345762 Mirror View now correctly remembers which subwindow is mirrored.
  • BUG:349058 Fixed bug where rulers were only visible on the canvas that was active when the option was first toggled. Fixed similar bugs with Mirror View and Wrap Around Mode.
  • BUG:331708 Fix a crash when trying to redo after canceling a stroke
  • Fix an issue where some config files may not be picked up by the config system.
  • BUG:299555 Change cursor to “forbidden” when active layer is locked or can’t be edited with the active tool.
  • BUG:345564 Don’t warn about image file being invalid after user cancels “Background Image (overrides color)” dialog while configuring Krita
  • BUG:348886 Don’t scroll up the list while adding or removing resources to the bundle
  • Fix default presets for the bristle engine, restoring the scale value to 1
  • Fixed a small bug in wdglayerproperties.ui that made the color profile not show up properly in the layer properties dialog. Patch by Amadiro, thanks!
  • BUG:348507 Fix an issue with the resolution in the PDF import dialog
  • BUG:347004 Fix the filter preview button state being difficult to see
  • BUG:345754 Fixes perspective assistant lockup
  • Remember current meta-data author.
  • BUG:348726 Be more careful when ‘smart’ merging metadata
  • BUG:348652 Correctly initialize the temporary swap file
  • Fix loading PSD files saved in OpenCanvas

Downloads

Bugs…

Now, that’s not to say that 2.9.5.1 is perfect… And the increased interest in Krita has also led to an increase in reported bugs! We’ve got about 315 open bugs now, which is a record!

In fact, we need help. We need help with what’s called bug triage: checking which bugs are duplicates of each other, which bugs are actually reproducible, and which bugs are more like wishes than bugs.

And then we need to do something about the bugs that are proper, valid and reproducible! So, we propose to hold our first 2015 bug weekend. We’d like to invite everyone to install Krita 2.9.5.1, go through some bugs in the bug list and help us triage!

Here’s the list of bugs that need urgent triaging:

Unconfirmed, reopened, need-info Krita Bugs

Let’s get this list to zero!

And of course, there are also bugs that are already confirmed, but that might have duplicates in the list above:

Confirmed Krita Bugs

We’re not looking for new bugs — but if you find one, take a moment to read Dmitry’s guide on bug reporting.

Here’s the Bug Hunter Howto, too. Join us this weekend and help us get the bug count down and the bug list manageable! In the coming two weeks, the developers will be busy fixing bugs so we can have a really stable base for all the Kickstarter work!

Mosquitoes-hunter by David Revoy

Bug hunter by David Revoy

June 18, 2015

rethinking text handling in GIMP /1

At the beginning of this month, most of the open source graphics applications community convened for the libre graphics meeting in Vienna, Austria. After a one‐year hiatus, the GIMP team was back in force, and so were two of its UI team members, my colleague Kate Price and yours truly. We delivered a lecture about our most recent GIMP project, which we will write up in three parts. Here is the first.

beyond the text tool

This project was the first one in our series of open internships. I had created these last year, combining mentoring, open working and getting serious interaction design work done for the GIMP project.

Dominique Schmidt worked with us on this project, whose goal is to rethink everything associated with text handling in GIMP. It would have been cool to have Dominique on stage in Vienna, telling the story himself. But he had this holiday booked; to a tropical destination; and surprisingly he insisted on going. Since projects mean teamwork at m+mi works, Kate and I were instead fully able to report from the trenches.

The text project is quite a wide‐ranging one and at the moment of writing it is in progress. So there are going to be no magic bullets, or detailed interaction design specs to be presented—yet. Certainly a wide‐ranging project demands a structured approach, else it goes nowhere. It is exactly this structure that we will use to present it here, in this and the follow‑up blogposts.

direction

Step one: compiling a vision for the project. With text—and editing, styling it—being so ubiquitous in computers, it is very easy to get stuck in nuts‑and‐bolts discussions about it. The trick is to concentrate on the big issue: what is the meaning of text in GIMP work? What we needed was a vision: ‘what is it; who is it for and where is the value?’

The vision is compiled out of quite a few elements: of course it has to align with the overall product vision of GIMP; we interviewed the GIMP developers who have worked on text; it includes the GEGL future of non‑linear working; and we held an informal user survey on the developer mailing list—plenty of users there—about the essence of working with text.

building blocks

To show how the resulting vision worked out, let’s discuss it line by line:

  • ‘Text in GIMP is always part of the composition—unless it is an annotation.’

This puts text thoroughly in its proper place; it is never the end‑goal, by itself. Also defined is a separate annotation workflow: users adding notes for themselves or for collaboration purposes. This sets us up for a small side project: annotations in GIMP.

  • ‘The canvas is not a page; there is no such thing as paging in GIMP.’

I love this one. The first part was phrased by Dominique, the second by Kate. This puts clear limits on what text functionality GIMP needs: beyond paragraphs, but short of page‐related stuff. Note that ‘paging’ is a verb: it is about working with pages and managing pages.

  • ‘Text is both for reading and used as graphical shapes; meta data in text—mark‑up, semantics—are not supported.’

This puts text as information transport and text as pure graphical shape on an equal footing; an excellent example of where the GIMP context makes a big difference. The second part excludes any metadata‑based processing: e.g. auto‐layouting or auto‐image manipulation.

And now, we get to the value section:

  • ‘GIMP users get: minute control over typography and the layout of text on the canvas.’

If there is one thing we learned from surveying users, it is the essence of typography: to control exactly, down to the (sub)pixel, the placement of every text glyph. This control is exerted via the typographical parameters: margins, leading, tracking, kerning, etc. GIMP needs to support the full spectrum of these and support top‑notch typographical workflows.

  • ‘GIMP users get: internationalisation of text handling, for all locales supported by unicode.’

This thoroughly expands our horizon: we have to look at the use of text world‑wide, with many different writing systems and different writing directions. But it also sets clear limits: if it cannot be represented in unicode, it is not in scope.

  • ‘GIMP users get: text remains editable forever.’

This anchors the GEGL dream in the project: no matter how many heavy graphical treatments have been applied on top of a piece of text, one can always change it and see the treated result immediately. But also included here is a deep understanding of projects and workflows. E.g. Murphy’s law: a mistake in the text is always found at the last moment. Or the fact that clients always keep changing the text, even after the delivery date.

  • ‘GIMP users get: super‐fast workflow, when they are experienced.’

This reflects that GIMP is for graphics production work and the speed‑of‐use requirements that accompany that.

it’s a wrap

And there we have it. Here they are again, together as the vision statement:

  • Text in GIMP is always part of the composition—unless it is an annotation;
  • the canvas is not a page; there is no such thing as paging in GIMP;
  • text is both for reading and used as graphical shapes; meta data in text—mark‑up, semantics—are not supported.

GIMP users get:

  • minute control over typography and the layout of text on the canvas;
  • internationalisation of text handling, for all locales supported by unicode;
  • text remains editable forever;
  • super‐fast workflow, when they are experienced.

Nice and compact, so that it can be used as a tool. But these seven simple sentences pack a punch. Just formulating them has knocked this project into shape. The goals are clear from here on.

And on that note, I hand over to Kate, who will continue our overview of the steps we took, in part two.

June 15, 2015

Interview with Graphos

fleeing

Could you tell us something about yourself?

My name is Przemek Świszcz. I also publish as Graphos. I draw, and I’m a graphic artist; I do comic strips and illustrations and create in 3D as well.

Do you paint professionally, as a hobby artist, or both?

I draw professionally but it is also my hobby. And fortunately, I can bring it together.

What genre(s) do you work in?

I’m interested mostly in fantasy, science fiction and humorous topics.
dragonseye

Whose work inspires you most — who are your role models as an artist?

Among the many excellent artists, my favorite creators are Grzegorz Rosiński, Janusz Christa, Simon Bisley and Don Rosa. However, not only comics artists but also many others are an inspiration to me.

How and when did you get to try digital painting for the first time?

It’s connected with computer games. I’m a gamer myself, so I combined these two hobbies; that’s the reason why I draw forms like concept art, and I decided to try my hand at digital.

What makes you choose digital over traditional painting?

I still use some traditional techniques like watercolor, acrylic and drawing ink. But digital gives endless possibilities and enables editing. Most of all, it is very handy: I have all my works on the computer immediately, and there is no need to scan and process them. This is especially important with regard to comic books.

How did you find out about Krita?

I found out about Krita when I bought a graphics tablet and was looking for an appropriate drawing tool. I read lots of positive opinions about the program on the Internet forum www.blender.pl, so I decided to try it.

What was your first impression?

My first impression of this program was “Wow, it is really good, very handy and intuitive”.

What do you love about Krita?

I like the fact that Krita is free and is being updated and added to all the time. It’s a great and professional tool for creating comic strips.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Krita is already a really good program. Some errors and crashes appear from time to time, but with every new version it’s getting better. I think that tools for creating animation would be a great addition. I’m looking forward to the next version of Krita.

What sets Krita apart from the other tools that you use?

I can already do most of my work with Krita. I use other programs occasionally because some of them have useful options for processing drawings and photos.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

dragondwarf800
That could be the drawing with the dragon and dwarf or fantasy illustrations in cartoon style with the brawny man. It gives me lots of fun, both drawing it and finding out new possibilities of Krita.

What techniques and brushes did you use in it?

I mainly used draft pencils and draft brushes which turn out to be my favourite tools in Krita.

Where can people see more of your work?

You can find the majority of my works here: http://drawcrowd.com/graphos/projects

Anything else you’d like to share?

Thank you for inviting me. I hope that Krita will acquire more and more users because it is worth it, keep it up :)

June 11, 2015

designing interaction for creative pros /1

Last week at LGM 2015 I did a lecture on one of my fields of specialisation: designing interaction for creatives. There were four sections and I will cover each of them in a separate blog post. Here is part one.

The lecture coincided with the launch of the demo of Metapolator, a project I have been working on since LGM 2014. All the practical examples will be from that project and my designs for it.

see what I mean?

‘So what’s Metapolator?’ you might ask. Well, there is a definition for that:

‘Metapolator is an open web tool for making many fonts. It supports working in a font design space, instead of one glyph, one face, at a time.

‘With Metapolator, “pro” font designers are able to create and edit fonts and font families much faster, with inherent consistency. They gain unique exploration possibilities and the tools to quickly adapt typefaces to different media and domains of use.

‘With Metapolator, typographers gain the possibility to change existing fonts—or even create new ones—to their needs.

‘Metapolator is extendible through plugins and custom specimens. It contains all the tools and fine control that designers need to finish a font.’

theme time

That is the product vision of Metapolator, which I helped to define the moment I got involved with the project. You can read all about that in the making‑of.

One of the key questions answered in a product vision is: who is this for? And with that, I have arrived at what this blog post is about:

Products need a clear, narrow definition of their target user groups. Software for creatives needs a clear definition of whether it is for professionals, or not.

Checking the vision, we see that Metapolator user groups are well defined. They are ‘“pro” font designers’ and ‘typographers.’ The former are pro by definition and the latter come with their own full set of baggage; they are pro by implication.

define it like a pro

But what does pro actually mean? And why is it in quotes in the Metapolator vision? Well, the rather down‐to‐earth definition of professional—earning money with an occupation—is not helping us here. There are many making‐the‐rent professionals who are terrible hacks at what they do.

Instead it is useful to think of pros as those who have mastered a craft—a creative craft in our case. Examples of these are drawing, painting; photographing, filming, writing, animating, and editing these; sewing, the list goes on and on.

Making software for creative pros means making it for those who have worked at least 10,000 hours in that field, honing their craft. And also making it for the apprentices and journeymen who are working to get there. These two groups do not need special ‘training wheels’ modes; they just need to get their hands dirty with the real thing.

the point

The real world just called and left a message:

making it for pros comes at a price.

First of all, it is very demanding—I will cover this in the follow‑up posts. Second, it puts some real limits on who else you can make it for. Making it for…

pros
is perfectly focussed, to meet those demanding needs.
pros + enthusiasts
(the latter are also known as prosumers). This compromises how good one can make it for pros; better keep in check how sprawling that enthusiast faction is allowed to be.
pros + enthusiasts + casual users
forget it, because pros and casual users have diametrically opposite needs. There is no room in the UI for both, and by room I mean screen real estate and communication bandwidth.
pros + casual users
for the same reasons one can royally forget about this one too. Enough said.

the fall‐out

You might think: ‘duh, that speaks for itself, just make the right choice and roll with it.’ If only it were that easy. My experience has been that projects really do not like to commit here, especially when they know the consequences outlined above. And when they did make a choice, I have seen the natural tendency to worm out of it later.

I guess that having clear goals is scary for quite a few folks. Having focussed user groups means saying ‘we don’t care about you’ to vast groups of people. Only the visionary think of that as positive.

Furthermore, clear goals are a fast and effective tool to weed out bad ideas, on an industrial scale. That’s good for the product, but upsets the people who came up with these ideas. So they renegotiate on the clear goals, attacking the root of the rejection.

no fudging!

In short: define it; is your software for creatives made for pros, or not? Then compile a set of coherent user groups. In the case of Metapolator the ‘pro’ font designers and typographers fit together beautifully. Once defined, stick with it.

That’s it for part one. Here is part two: a tale of cars.

[editor’s note: Gee Peter, this post contains a lot of talk about pros, but where is the creative angle?] True, the gist of this post is valid for all professionals. The upcoming parts will feature more ‘creative’ content, more Metapolator, and illustrations.

writing a product vision for Metapolator

A week ago I kicked off my involvement with the Metapolator project as I always do: with a product vision session. Metapolator is an open project and it was the first time I did the session online, so you have the chance to see the session recording (warning: 2½ hours long), which is a rare opportunity to witness such a highly strategic meeting; normally this is top‐secret stuff.

boom boom

For those not familiar with a product vision, it is a statement that we define as ‘the heartbeat of your product, it is what you are making, reduced down to its core essence.’ A clear vision helps a project to focus, to fight off distractions and to take tough design decisions.

To get a vision on the table I moderate a session with the people who drive the product development, who I simply ask ‘what is it we are making, who is it for, and where is the value?’ The session lasts until I am satisfied with the answers. I then write up the vision statement in a few short paragraphs and fine-tune it with the session participants.

To cut to the chase, here is the product vision statement for Metapolator:

‘Metapolator is an open web tool for making many fonts. It supports working in a font design space, instead of one glyph, one face, at a time.
‘With Metapolator, “pro” font designers are able to create and edit fonts and font families much faster, with inherent consistency. They gain unique exploration possibilities and the tools to quickly adapt typefaces to different media and domains of use.
‘With Metapolator, typographers gain the possibility to change existing fonts—or even create new ones—to their needs.
‘Metapolator is extendible through plugins and custom specimens. It contains all the tools and fine control that designers need to finish a font.’

mass deconstruction

I think that makes it already quite clear what Metapolator is. However, to demonstrate what goes into writing a product vision, and to serve as a more fleshed out vision briefing, I will now discuss it sentence by sentence.

‘Metapolator is an open web tool for making many fonts.’
  • There is no standard template for writing a product vision, the structure it needs is as varied as the projects I work with. But then again it has always worked for me to lead off with a statement of identity; to start answering the question ‘what is it we are making?’ And here we have it.
  • open or libre? This was discussed during the session. At the end Simon Egli, Metapolator founder and driving force, wanted to express that we aim beyond just libre (i.e. open source code) and that ‘open’ also applies to the vibe of the tool on the user side.
  • web‑based: this is not just a statement of the technology used, of the fact that it runs in the browser. It is also a solid commitment that it runs on all desktops—mac, win and linux. And it implies that starting to use Metapolator is as easy as clicking/typing the right URL; nothing more required.
  • tool or application? The former fits better with the fact that font design and typography are master crafts (I can just see the tool in the hand of the master).
  • making or designing fonts? I have learned in the last couple of weeks that there is a font design phase where a designer concentrates on shaping eight strategic characters (for latin fonts). This is followed by a production phase where the whole character set is fleshed out, the spacing between all character pairs is set, and then different weights (e.g. thin and bold) are derived, and maybe also narrow and extended variants. This phase is very laborious and often outsourced. ‘Making’ fonts captures both design and production phases.
  • many fonts: this is the heart of the matter. You can see from the previous point that making fonts is up to now a piecemeal activity. Metapolator is going to change that. It is dedicated to either making many different fonts in a row, or a large font family, even a collection of related families. The implication is that in the user interaction of Metapolator the focus is on making many fonts and the user needs for making many fonts take precedence in all design decisions.
‘It supports working in a font design space, instead of one glyph, one face, at a time.’
  • The first sentence said that Metapolator is going to change the world—by introducing a tool for making many fonts, something not seen before; this second one tells us how.
  • supports is not a word one uses lightly in a vision. ‘Supports XYZ’ does not mean it is just technically possible to do XYZ; it means here that this is going to be a world‐class product to do XYZ, which can only be realised with world‐class user interaction to do XYZ.
  • design space is one of these wonderful things that come up in a product vision session. Super‐user Wei Huang coined the phrase when describing working with the current version of Metapolator. It captures very nicely the working in a continuum that Metapolator supports, as contrasted with the traditional piecemeal approach, represented by ‘one glyph, one face, at a time.’ What is great for a vision is that ‘design space’ captures the vibe that working with Metapolator should have, but that it is not explicit on the realisation of it. This means there is room for innovation, through technological R&D and interaction design.
‘With Metapolator, “pro” font designers are able to create and edit fonts and font families much faster, with inherent consistency.’
  • With “pro” font designers we encounter the first user group, starting to answer ‘who is it for?’ “Pro” is in quotes because it is not the earning‑a‐living part that interests us, it is the fact that these people mastered a craft.
  • create and edit balances the two activities; it is not all about creating from scratch.
  • fonts and font families balances making very different fonts with making families; it is not all about the latter.
  • much faster is the first value statement, starting to answer ‘where is the value?’ Metapolator stands for an impressive speed increase in font design and production, by abolishing the piecemeal approach.
  • inherent consistency is the second value statement. Because the work is performed by users in the font design space, where everything is connected and continuous, the conventional user overhead of keeping everything consistent disappears.
‘They gain unique exploration possibilities and the tools to quickly adapt typefaces to different media and domains of use.’
  • exploration possibilities is part feature, part value statement, part field of use and part vibe. All these four are completely different things (e.g. there is inherently zero value in a feature), captured in two words.
  • quickly adapt is a continuation of the ‘much faster’ value statement above, highlighting complementary fields of use for it.
‘With Metapolator, typographers gain the possibility to change existing fonts—or even create new ones—to their needs.’
  • And with typographers we encounter the second user group. These are people who use fonts, with a whole set of typographical skills and expertise implied.
  • possibility to change is the value statement for this user group. This is a huge deal. Normally typographers have neither the skills, nor the time, to modify a font. Metapolator will open up this world to them, with that fast speed and inherent consistency that was mentioned before.
  • create new goes one step further than the previous point. Here we have now a commitment to enable more ambitious typographers (that is what ‘even’ stands for) to create new fonts.
  • to their needs is a context we should be aware of. These typographers will be designing something, anything with text, and that is their main goal. Changing or creating a font is for them a worthwhile way to get it done. But it is only part of their job, not the job. Note that the needs of typographers include applying some very heavy graphical treatments to fonts.
‘Metapolator is extendible through plugins and custom specimens.’
  • extendible through plugins is one realisation of the ‘open’ aspect mentioned in the first sentence. This makes Metapolator a platform and its extendability will have to be taken into account in every step of its design.
  • custom specimens is slightly borderline to mention in a vision; you could say it is just a feature. I included it because it programs the project to properly support working with type specimens.
‘It contains all the tools and fine control that designers need to finish a font.’
  • all the tools: this was the result of me probing during the vision session whether Metapolator is thought to be part of a tool chain, or independent. This means that it must be designed to work stand‑alone.
  • fine control: again the result of probing, this time whether Metapolator includes the finesse to take care of those important details, on a glyph level. Yes, it all needs to be there.
  • that designers need makes it clear by whose standards the tools and control needs to be made: that of the two user groups.

this space has intentionally been left blank

Just as important as what it says in a product vision is what it doesn’t say. What it does not say Metapolator is, Metapolator is explicitly not. Not a vector drawing application, not a type layout program, not a system font manager, not a tablet or smartphone app.

The list goes on and on, and I am sure some users will come up with highly creative fields of use. That is up to them, maybe it works out or they are able to cover their needs with a plugin they write, or have written for them. For the Metapolator team that is charming to hear, but definitely out of scope.

User groups that are not mentioned, i.e. everybody who is not a “pro” font designer or a typographer, are welcome to check out Metapolator; it is free software. If their needs overlap partly with those of the defined user groups, then Metapolator will partly work for them. But the needs of all these users are of no concern to the Metapolator team.

If that sounds harsh, then remember what a product vision is for: it helps a project to focus, to fight off distractions and to take tough design decisions. That part starts now.

designing interaction for creative pros /2

Part two of my LGM 2015 lecture (here is part one). It is a tale of cars. For many years I have had these images in my head and used them in my design practice. Let’s check them out.

freude am fahren

First up is the family car:

a catalog shot of a family car source: netcarshow.com

It stands for general software. It is comfortable, safe and general‐purpose. All you need to use it is a minimum of skills, familiarity and practice—in the case of cars this is covered by qualifying for a driving licence.

In the case of software, we are talking casual and enthusiast use. A good example is web browsers. One can start using them with a minimum of skills and practice. After gaining some experience one can comfortably use a browser on a daily basis. If a pro web browser exists, then it has escaped my radar.

(It would make a very interesting project, a pro web browser. But first a product maker would have to stand up with a solid vision of pro web browsing; its user groups; and some big innovation that is valuable for these users.)

vroooom

When I think of creative pro interfaces, I think of this:

a rally car blasting around a corner on a rally stage in nature source: imgbuddy.com

The rally car. It is still a car, but… different. It is defined by performance. And from that, we can learn a couple of things.

speed, baby

First, creative pros work fast. They ‘wield the knife’ without doubt. A telltale sign of mastery is the speed of execution. I have this in mind all the time when designing for creative pros.

I vividly remember one of the earliest LGMs, when Andy Fitzsimon went on stage and demonstrated combining pixel and vector in one image. The pace was impressive: Andy was performing nearly two operations per second.

Bam bam bam bam. At a tempo of 120 beats per minute; the solid tempo of marching bands and disco. That is the rhythm I aim to support, when designing for creative pros.

command and control

Second, creative pros really know their material, the medium they work with. They can, and need to, work with this material as directly and intimately as possible, in order to fulfil creative or commercial goals. This can all be technology‐assisted, as it is with software, but the technology has to stay out of the way, so that it does not break the bond between master and material.

The material I am talking about is that of film, graphics, music, animation, garments, et cetera. These can be digital, yes. However data and code of the software‐in‐use are not part of a creative pro’s material. Developers are always shocked, angry, then sad to learn this.

Thus Metapolator has been designed for font designers and typographers who know what makes a font and what makes it tick. They know the role of the strokes, curves, points, the black and the white, and of the spacing. They are experienced in shaping these to get results. It is this material that—by design—Metapolator users access, just organised such that they can work ten times faster.

dog eat dog

Third, it’s a competitive world. Creative pros are not just in business. Also in zero‐budget circles there are really fun and/or prestigious projects where exactly those with proven creative performance, and the ability to deliver, get asked.

Tools and software are in constant competition, also in the world of F/LOSS. It is a constant tussle: which ones provide next‐generation workflows with more speed and/or more room for creativity? Only competitive tools make masters competitive.

the point

Now that we got the picture, here is the conflict. The rules—the law and industry mores—that make good family cars may be a bad idea to apply to rally cars. And what makes rally cars competitive, may simply be illegal for family cars.

Every serious software platform has its HIG (human interface guidelines). It is the law, a spiritual guide and a huge source of security for developers. That is, for general software. It is only partly authoritative for software for creative pros. Because truly sticking to the HIG, while done all in good faith, will render creative pro software non‐competitive.

vorsprung durch technik

Rally cars contain custom parts, handmade from high performance materials like aluminium, titanium, carbon, etc. This is expensive and done because nothing off‐the‐shelf is sufficient.

Similarly creative pro software contains custom widgets, handmade at great expense—in design and development. For a decade I have witnessed that it is a force of nature to end up in that situation. Not for the sake of being cool or different, but all in the name of performance.

tough cookie

So, with loose laws and a natural tendency for custom widgets, can you do just what you like when you make creative pro software? Well no. It is tough, you still have to do the right thing. If this situation makes you feel rather lost, without guidance, then reach out and find yourself an interaction designer who really knows this type of material. Make them your compass.

picture show

To illustrate all this, let’s look at some of my designs for Metapolator.

of a glyph—surrounded by two others—all the points that make up its     skeleton are connected by outward radiating lines to big circular handles

Speed, baby! Big handles to select and move individual points on the skeleton of a glyph (i.e. direct control of the material). During a brainstorm session with Metapolator product visionary Simon Egli, he noticed how the points could be connected by rigid sticks to big handles.

I worked out the design with big (fast) handles available for furious working, but out of the way of the glyph, so it can be continuously evaluated (unbroken workflow).

four sliders for mixing fonts, one is reversed and has its thumb aligned     with another slider

This is a custom slider set for freely mixing master fonts—metapolation—to make new fonts. In this case four fonts, but it has been designed to easily scale up to nine or more; a Metapolator strength (vis‑à‐vis the competition).

One of the sliders—‘Alternate’—is in an “illegal” configuration; it is reversed. This is done to implement the rule that the mix of fonts has to always add up to 100%. There is special coupled behaviour between the sliders to ensure that.

The design of this part included a generous amount of exploration and several major revisions. Standard widgets and following the HIG would not deliver the property that every slider setting maps to one unique font mix. Apart from a consistency goal, that is also about maximising input resolution. So I broke some rules and went custom.
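
To make that coupled behaviour concrete, here is a toy sketch of the rule (my own illustration of the principle, not Metapolator code):

    # Toy model of coupled mixing sliders: set one slider, then rescale
    # the others so that the total mix always stays at 100%.
    def move_slider(mix, index, new_value):
        new_value = max(0.0, min(100.0, new_value))
        rest = 100.0 - new_value
        others = sum(v for i, v in enumerate(mix) if i != index)
        return [new_value if i == index
                else (v * rest / others if others else rest / (len(mix) - 1))
                for i, v in enumerate(mix)]

    mix = [25.0, 25.0, 25.0, 25.0]       # four master fonts, equal mix
    mix = move_slider(mix, 0, 40.0)      # -> [40.0, 20.0, 20.0, 20.0]
    assert abs(sum(mix) - 100.0) < 1e-9  # every setting is a valid font mix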

a crossing 2-D axes system coupled to a single axis, with at least 3 fonts on each axis, with a font family and a single font instance placed on them

This is also a metapolation control. In this case a three‐dimensional one involving eight master fonts. Working with that many fonts is really a pro thing; you have to know what you are doing and have the experience to set up, identify and pick the ‘good font’ results.

The long blue arrow is a font family, with nine or so fonts as members. The whole family can be manipulated as one entity (i.e. placed and spanned in this 3D space) as can each member font individually.

glyphs a, b and c set in 3 different fonts, with point selections across them

Final example: complex selections. Across three different fonts and three different glyphs, several points have been selected. Now they can be manipulated at the same time. That is definitely not consumer‑grade.

If that looks easy, I say ‘you’re welcome.’ It takes serious planning ahead in the design to allow this interaction; for the three fonts to appear, editable, at the same time; for deep selections within several glyphs to be possible and manageable—the big handles‑on‐sticks also help here.

vroom, vroom

In short: if there is one thing that I want you to take away from this blog post, then it is that image of the rally car. How different its construction, deployment and handling are. Making software for creative pros means making a product that is definitely not consumer‑grade.

That’s it for part two. Go straight to part three: 50–50, equal opportunities.

design lessons with Daft Punk

I am sure you have noticed the Daft Punk marketing master plan that is taking over all media channels at the moment. And I admit that I am happy to consume—and inhale—anything (semi‐)intelligent that is being written about them.

Yesterday I read this Observer interview with the ‘notoriously shy French duo.’ Afterwards, intuition told me there was something vaguely familiar about what they had said. I checked again and sure enough, plenty of it applies to (interaction) design.

punk rules, OK?

Below are Daft Punk quotes I lifted from the article, followed by what I associate with each. There are also a couple of cameo appearances by hit‑production legend Nile Rodgers.

‘The music that’s being done today has lost its magic and its poetry because it’s rooted in everyday life and is highly technological.’

Wow, not the most hands‑on quote to start with. But I swore that I’d present them in the order they appear in the article. With the mentioned ‘magic and poetry’, I associate fantastic design work. This means sweeping solutions, for which there needs to be at least one designer on the project with a big‐picture view.

Being constantly ‘rooted in everyday life’—e.g. relying on testing (A/B or usability); or working piecemeal, or driven by user requests, or in firefighter mode—shortens the horizons and shrinks the goals. It surely programs the project for mediocrity, i.e. humdrum, incremental solutions.

Every user has to deal every day with software that ‘is highly technological.’ Everybody thinks this sucks. Making software is highly technological when one is staring at code; when thinking about code; when taking prototyping capabilities into account; when technology informs the interaction, verbatim. Designing great interaction means not making any of these mistakes.

‘In early interviews they came across as suspicious and aloof. “It’s because you’re 18 and you feel maybe guilty: why are we chosen to do these things?” says Thomas. “There’s definitely reasons to feel less uncomfortable now. It’s one thing to say you’re going to do it and another to have done it for 20 years.”’

Now that is the voice of experience talking. The first part of it is this early phase; fresh out of school and real (work) life is starting. This suspicion of one’s own talents, entering a company, scene or industry and expecting the folks around you to be like you, see things like you. And then they don’t. Very confusing, who is wrong here?

The second part is having ‘done it for 20 years.’ If that involved a portfolio of successful work; continuous self‐development; the discovery of what a difference ‘being experienced’ makes and getting to know a few peers, then it has become more comfortable to be a designer. Just don’t get too comfortable; make sure every new project you take on challenges and develops you.

‘The only secret to being in control is to have it in the beginning. Retaining control is still hard, but obtaining control is virtually impossible.’

The first level where this holds is getting a design implemented. Quite often developers like to first put in some temporary—and highly technological—interaction while they sort out the non‑UI code. The real design will be implemented later. Then time ticks away, the design lands in a drawer and the ‘temporary’ UI gets shipped.

I do not think this is a malicious trick, but it happens so often that I do not buy it anymore. The only secret to getting interaction design implemented is to do it in the beginning.

The second level is that of the overall collaboration; ‘obtaining control is virtually impossible,’ no matter how big a boost a designer has given the project. So one has to start out with control from the beginning, it has to be endowed by the project leadership. And then one has to work hard to retain it.

‘Guy‑Man, who designed the artwork, says that Thomas is the “hands‑on technician” while he is the “filter”: the man who stands back and says oui or non.’

Filter is the stuff designers are made of. In the case of interaction designers it means filtering out of all the things users say, the things they actually need. It means saying non to many things that are simply technologically possible, but useless, and oui to exactly that what realises the product, addresses users needs and is, yes, technologically possible.

Being the filter does not always make you friends, having to say non to cool‐sounding initiatives that in the bigger scheme of things are incredibly unhelpful. But being a yes‑man makes an ineffective designer, with non‑designed results.

Making software is not a game with unlimited time and resources; user interaction is not one with unlimited screen space and communication bandwidth. A filter is crucial.

‘“The genius is never in the writing, it’s in the rewriting,” says Rodgers. “Whenever they put out records I can hear the amount of work that’s gone into them—those microscopically small decisions that other people won’t even think about. It’s cool, but they massage it so it’s not just cool—it’s amazing.”’

I learned some years ago that it is not only the BIG plans and sweeping solutions that make a master designer. It is also in the details. All the tiny details.

All these ‘microscopically small decisions’ have to be taken in the way that strengthen the overall design, or else it will crumble to dust. This creates tension with all the collaborators, who ‘won’t even think about’ these details. They cannot see the point, the crumbling. Masters do.

‘We wish people could be influenced by our approach as much as our output. It’s about breaking the rules and doing something different rather than taking some arrangements we did 10 years ago that have now become a formula.’

Design is not a formula, not a sauce you pour over software. Design is a process, performed by those who can. A designer cannot tell upfront what the design will be like, but knows where to start, what to tackle and when it is done. That sounds trivial, but for non‑designers these four points work exactly opposite.

Apply the design process to a unique (i.e. non‑copycat) project and you will get an appropriate and unique design. Blindly applying this design to another project is by definition inappropriate.

‘“Computers aren’t really music instruments,” he sniffs. “And the only way to listen to it is on a computer as well. Human creativity is the ultimate interface. It’s much more powerful than the mouse or the touch screen.”’

This quote hits the nail on the head by setting the flow of creativity between humans as the baseline and then noting how computer interfaces are completely humbled by it. It is too easy to forget about this when your everyday world is making software.

The truth about software for designers (of music, graphics and other media) is that not much of it is designed—the interaction I mean, although it may look cool. Being software for a niche market makes it armpit‑of‑usability material: developers talking directly to users, implementing their feature requests in a highly technological way.

To make an end to this sad state of affairs, a design process needs to be introduced that is rooted in a complete—but filtered—understanding of the activity called human creativity.

‘Enjoying the Hollywood analogy, Thomas says Daft Punk were the album’s screenwriters and directors while the guest performers were the actors, but actors who were given licence to write their own lines.’

I am also enjoying that analogy, and the delicate balance that is implied. On the one hand, interaction designers get to create the embodiment of the software ‘out of thin air’ and write it down in some form of specification, the screenplay. Being in the position of seeing how everything is connected, it also falls naturally to them to direct the further realisation, by developers and media designers.

If that sounds very bossy to you, it is balanced by the fact that these developers and media designers already have complete ‘licence to write their own lines.’ For developers, every line of code they write is literally theirs.

The delicate balance depends on developers and media designers being able to contribute to the interaction design process—in both meanings of that phrase. And it depends on all written lines fitting the screenplay.

‘“What I worked on was quite bare bones and everything else grew up around me,” says Nile Rodgers. “They just wanted me to be free to play. That’s the way we used to make records back in the day. It almost felt like we’d moved back in time.”’

This is what design processes are about; to create a space where one is free to play. This in the dead‐serious context of solving the problem. Play is used to get around a wall or two that stand between the designer and the solution for making it work.

It takes a ‘quite bare‐bones’ environment to be free: pencil and paper in the case of interaction design. That may ‘feel like moving back in time’ but it is actually really liberating; it offers a great interface for human creativity. Once you got around those walls and hit upon the solution, every part of the design can grow up around what you played.

And on that note, I’ll finish today’s blog post.

12 things you can do to succeed, enterprise edition

Part three of the mini‐series I am running at the moment on the usual social channels—twitter, g+, linkedin and xing—called positive action ships successful products. There, for every wishful thought that persists in the (mobile) software industry, I supply a complementary positive action.

Today’s offering is enterprise grade; let’s turn water into wine. If you are a product maker, or manage a product‐shipping organisation, then you can initiate at least one of these today:

Better go for it, deploy user research and design to ensure that new is really better.
cf. ‘Better play it safe, because it has not been done before.’
Better go for it, ban meetings; get the makers to collaborate in (pairs of) pairs.
cf. ‘Better play it safe, so it won’t cause all these extra rounds of meetings.’
Better go for it, evangelise the new, listen carefully to any needs, ignore naysayers.
cf. ‘Better play it safe, because the first feedback was rather reserved.’
Better go for it, make it an offer that can’t be refused—if it gets nixed, go underground.
cf. ‘Better play it safe, so we get the OK.’
Better go for it, negotiate until you trust that the engineers can build the design.
cf. ‘Better play it safe, because the engineers say it cannot be done.’
Better go for it, it is faster to build a completely new core product from scratch.
cf. ‘Better play it safe, because that code base is spaghetti.’
Better go for it, and enjoy every minute; save time through structure, research + design.
cf. ‘Better play it safe, so we all can go home at five—and at 14:30 (D) / to the pub (GB) on Friday.’
Better go for it, because the blame will fall on us anyway.
cf. ‘Better play it safe, because the blame will fall on us.’
Better go for it, once the core product blows away the competition, features can be added.
cf. ‘Better play it safe, so we have time for more features.’
Better go for it, use frequent user testing to debug the innovative design.
cf. ‘Better play it safe, to pass the usability test.’
Better go for it, define a new game, on your terms, and ditch them old millstones.
cf. ‘Better play it safe, to pass the regression test.’
Better go for it, model careers are built on delivering remarkable results.
cf. ‘Better play it safe, to not jeopardise my promotion.’

ask not what this blog can do for you…

Now, what else can you do? First of all, you can spread the word; share this blog post. Second, the series continues, so I invite you to connect via twitter, g+, linkedin, or xing, and get a fresh jolt of positive action every workday.

And third, if you are able and willing to take some positive action, then email or call us. We will be happy to help you ship successful products.

ps: you can check out part two if you missed it.

a half‑century of success

This is the final instalment of the mini‐series I ran on the usual social channels—twitter, g+, linkedin and xing—called positive action ships successful products. There, for every wishful thought that persists in the (mobile) software industry, I supplied a complementary positive action.

To complete the round number of fifty, I present the final dozen + two of these for your reference. If you are a product maker, or manage a product‐shipping organisation, then you can initiate at least one of these today:

Make the lead designers of your hard‐ and software work as a pair; make them inseparable.
cf. ‘The hardware specs are fixed, now we can start with the software design.’
Define your focus so tightly, it hurts (a bit); deploy it so you ship, instead of discuss.
cf. ‘We spent ages discussing this, trying to find a solution that pleased everyone.’
Make interaction design the backbone of your product realisation; or compete on low, low price.
cf. ‘We thought we could spend a couple of man‐days on the low‐hanging usability fruit.’
Deploy lightweight design and engineering documentation to keep everyone with the programme.
cf. ‘The source is the ultimate documentation.’
Ban hacks, at least from those who are supposed to shape your product for the long term.
cf. ‘There is no need to go for the gold‐taps solution.’
Set a ‘feature budget’ and set it way below bloat; be frugal, spend it on user value.
cf. ‘It does not hurt to have those features as well.’
Set the goal to be competitive on each platform you support—that starts with your interaction.
cf. ‘One code base; fully cross‐platform.’
Root out boilerplate thinking for any product aspect; your design process is your QA.
cf. ‘You have to pick your battles.’
Set up your designers for big impact on the internals of your software, instead of vice versa.
cf. ‘Once you get familiar with the internal workings of our software, it becomes easy to use.’
Define your target user group(s) so tightly, it hurts; focus on their needs, exclusively.
cf. ‘Our specific target user group is: everyone.’
Introduce this KPI: the more your developers think the UI is ‘on the wrong track,’ the better.
cf. ‘Our developers are very experienced; they make the UI of their modules as they see fit.’
Hire those who are able to take your interaction beyond the HIG, once you achieve compliance.
cf. ‘We religiously adhere to the HIG.’
Regularly analyse workarounds adopted by your users; distill from them additional user needs.
cf. ‘You can do that by [writing, running] a script.’
Make the connection: product–users–tech. Design is the process, the solution and realisation.
cf. ‘What do you mean “it’s all connected”? we just don’t have the time for those bits and pieces.’

ask not what this blog can do for you…

Now, what else can you do? First of all, you can spread the word; share this blog post. Second, I invite you to connect via twitter, g+, linkedin, or xing.

And third, if you are able and willing to take some positive action, then email or call us. We will be happy to help you ship successful products.

ps: you can check out part three if you missed it.

Krita Lime PPA: always fresh versions for Ubuntu users!

A great piece of news for Ubuntu Krita users is coming today! We have just opened a repository with regular builds of Krita git master!

Link: https://launchpad.net/~dimula73/+archive/krita

The main purpose of this PPA is to provide everyone with an always fresh version of Krita, without the need to update the whole system. Now one can get all the latest Krita features without a delay.

At the moment the git master version has at least three features that are absent in Krita 2.7 Beta1 (and cannot be merged there due to the code freeze):

  • New "New Image From Clipboard" dialog with a nice preview widget implemented by our new contributor Matjaž Rous
  • New "pseudo-infinite" canvas feature (read here) for dynamical image resizing
  • New "Overview Docker" which lets you see the whole image at a glance
To install the newest Krita you need to follow a few steps:
  1. Check that you don't have any original calligra or krita packages installed from your distribution or from project-neon (we currently don't check for that automatically; see the quick check after these steps)
  2. Add the PPA to repositories list:
    sudo add-apt-repository ppa:dimula73/krita
  3. Update the cache: 
    sudo apt-get update 
  4. Install Krita: 
    sudo apt-get install krita-testing krita-testing-dbg
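For step 1, one way to look for already-installed packages on a Debian-style system (a suggestion of mine, not part of the original instructions):

    dpkg -l | grep -i -e calligra -e krita

If this prints anything, remove those packages before adding the PPA.
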
Update (not needed anymore): after installing this package you should restart the X server to get the environment variables updated!


Of course, being based on git-master may sometimes result in a bit of instability, so make sure you report any problems so we can fix them! :)

Interim tally of Kickstarter votes

Only one week after we sent out our Kickstarter survey, 581 of the 661 15-euro-and-up backers (including the PayPal backers) have sent in their votes. This is a response rate of a whopping 87.90%! Here’s the current tally:

  Rank  Votes  Share    Stretch goal
  1     113    19.45%   10. Animated file formats export: animated gif, animated png and spritemaps
  2      53     9.12%   8. Rulers and guides: drag out guides from the rulers and generate, save and load common sets of guides. Save guides with the document.
  3      46     7.92%   1. Multiple layer selection improvements
  4      45     7.75%   19. Make it possible to edit brush tips in Krita
  5      38     6.54%   21. Implement a Heads-Up-Display to manipulate the common brush settings: opacity, size, flow and others.
  6      37     6.37%   2. Update the look & feel of the layer docker panel
  7      35     6.02%   22. Fuzzy strokes: make the stroke consistent, but add randomness between strokes.
  8      30     5.16%   5. Improve grids: add a grid docker, add new grid definitions, snap to grid
  9      29     4.99%   6. Manage palettes and color swatches
  10     26     4.48%   18. Stacked brushes: stack two or more brushes together and use them in one stroke
  11     21     3.61%   4. Select presets using keyboard shortcuts
  12     18     3.10%   13. Scale from center pivot: right now, we transform from the corners, not the pivot point.
  13     17     2.93%   9. Composition helps: vector objects that you can place and that help with creating rules of thirds, spiral, golden mean and other compositions.
  14     17     2.93%   7. Implement a Heads-Up-Display for easy manipulation of the view
  15     15     2.58%   20. Select textures on the fly to use in textured brushes
  16      9     1.55%   15. HDR gradients
  17      9     1.55%   11. Add precision to the layer move tool
  18      7     1.20%   17. Gradient map filter
  19      5     0.86%   16. On-canvas gradient previews
  20      5     0.86%   12. Show a tooltip when hovering over a layer with content to show which one you’re going to move.
  21      3     0.52%   3. Improve feedback when using more than one color space in a single image
  22      3     0.52%   14. Add a gradient editor for stop gradients

If you’re entitled to vote and haven’t done so yet, please do! Any vote received on or before July 6, a full month after sending out the survey, will count.

June 10, 2015

Krita 2.9.5 Released

The Kickstarter was a success, but that didn’t keep us from adding new features and fixing bugs! We made quite a bit of progress, including pass-through mode for group layers, inherit alpha for all layer types, better PSD support, and an on-canvas preview of the color being picked. We even added a new brush preset history docker! You can see the full release notes below.

Krita 2.9.5 also fixes a critical bug in 2.9.4.7. Please upgrade if you experience crashes after restarting Krita.

New Features:

  • Add a lightness curve to the per-channel filter (bug 324332)
  • Add a brush preset history docker (bug 322425)
  • Add an all-files option to the file-open dialog
  • Add global light to the layer styles functionality (bug 348178)
  • Allow the user to choose a profile for untagged PNG images (bug 345913, 348014)
  • Add a built-in performance logger
  • Added a default set of paintop preset tags (these are not deletable yet!)
  • Add support for author profiles (default, anonymous, custom) to .kra files
  • Add buttons and actions for layer styles to the Layer docker
  • Add ctrl-f shortcut for re-applying the previously used filter (bug 348119)
  • Warn Intel users that they might have to update their display driver
  • Implement loading/saving of layer styles to PSD files
  • Add support for loading/saving patterns used in layer styles
  • Allow inherit alpha on all types of layers
  • Add a pass-through switch for group layers (bug 347746, 185448)
  • Implement saving of group layers to PSD
  • Add support for WebP (on Linux)
  • Add a shortcut (Ctrl-Shift-N) for edit/paste into New Image (bug 344750)
  • Add on-canvas preview of the current color when picking colors (bug 338128)
  • Add a mypaint-style circle brush outline.
  • Split the cursor configuration into outline selection and cursor selection
  • Add loading and saving of transparency masks to PSD groups

Performance improvements:

  • Remove delay on stroke start when using Krita with a translation

Bug fixes:

  • Fix view rotation menu by adding rotation actions
  • Fix crash when duplicating a global selection mask (bug 348461)
  • Improve the GUI for the advanced color selector settings (wrench icon on Advanced color selector)
  • Fix resetting the number of favorite presets in the popup (bug 344610)
  • Set proper activation flags for the Clear action (bug 34838)
  • Fix several bugs handling multiple documents, views and windows (bug 348341, bug 348162)
  • Fix the limits for document resolution (bug 348339)
  • Fix saving multiple layers with layer styles to .kra files (bug 348178)
  • Fix display of 16 bit/channel RGB images (bug 343765)
  • Fix the P_Graphite_Pencil_grain.gih brush tip file
  • Fix updating the projection when undoing removing a layer (bug 345600)
  • Improve handling of command-line arguments
  • Fix the autosave recovery dialog on Windows
  • Fix creating templates from the current image (bug 348021)
  • Fix layer styles and inherit alpha (bug 347120)
  • Work around crash in the Oxygen widget style when animations are enabled (bug 347367)
  • When loading JPEG files saved by Photoshop, also check the metadata for resolution information (bug 347572)
  • Don’t crash when trying to isolate a transform mask (transform masks cannot be painted on) (bug 347622)
  • Correctly load Burn, Color Burn blending modes from PSD (bug 333454)
  • Allow select-opaque on group layers (bug 347500)
  • Fix clone brush to show the outline even if it’s globally hidden (bug 288194)
  • Fix saving of gradients to layer styles
  • Improve the layout of the sliders in the toolbar
  • Fix loading floating point TIFF files (bug 344334)

Downloads

 

Role change: Now snappier

Happy to announce that I'm changing roles at Canonical, moving down the stack to join the Snappy team. It is in some ways a formalization of my most recent work, which has been more on application lifecycle and containment than higher-level stuff like indicators and other user services. I'll be working on the core snappy team to ensure that snappy works for a wide variety of use cases, from small sensors embedded in your world, to phones, to services running in the cloud. For me Snappy formalizes a lot of trends that we're seeing all over computing today, so I'm excited to get more involved with it.

To kick things off I'll be working on making Snaps easier to build and maintain using the native dependency systems that already exist for most languages. The beautiful part about bundling is that we no longer have to force our dependency system on others; they can choose what works best for them. But we still need to make integrating with it easy.

New adventures bringing new challenges are where I like to roam. I'll still be around though, and might even contribute a patch or two to some of my old haunts.

June 09, 2015

Basic Landscape Exposure Blending with GIMP and G'MIC



Exploring exposure blending entirely in GIMP

Photographer Ian Hex had previously explored the topic of exposure blending with us by using luminosity masks in darktable. For his first video tutorial he’s revisiting the subject entirely in GIMP and G’MIC.

Have a look and let him know what you think in the forum. He’s promised more if he gets a good response from people - so let’s give him some encouragement!

released darktable 1.6.7

We are happy to announce that darktable 1.6.7 has been released.

The release notes and relevant downloads can be found attached to this git tag:
https://github.com/darktable-org/darktable/releases/tag/release-1.6.7
Please only use our provided packages ("darktable-1.6.7.*" tar.xz and dmg), not the auto-created tarballs from github ("Source code", zip and tar.gz). The latter are just git snapshots and will not work! Here are the direct links to tar.xz and dmg:
https://github.com/darktable-org/darktable/releases/download/release-1.6.7/darktable-1.6.7.tar.xz
https://github.com/darktable-org/darktable/releases/download/release-1.6.7/darktable-1.6.7.dmg

this is another point release in the stable 1.6.x series.

sha256sum darktable-1.6.7.tar.xz
a75073b49df0a30cd2686624feeb6210bc083bc37112ae6e045f8523db4c4c98
sha256sum darktable-1.6.7.dmg
6630230049e6d2c4cdfd39585f95fbd1ee439a8dad107f7332aefeb1dd75b831
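
To verify a download against the checksums above, you can feed the published hash straight back into the checksum tool (shown here for the tar.xz; on OS X, shasum -a 256 -c works the same way for the dmg):

$ echo "a75073b49df0a30cd2686624feeb6210bc083bc37112ae6e045f8523db4c4c98  darktable-1.6.7.tar.xz" | sha256sum -c -
darktable-1.6.7.tar.xz: OK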

security

miscellaneous

  • improvements to facebook export
  • interpolation fixups
  • demosaic code cleanups
  • slideshow should handle very small images better
  • improve Olympus lens detection
  • various minor memory leak fixes
  • various other fixes
  • Pentax (K-x) DNG: the old embedded preview left over in the file is now removed
  • modern OSX display profile handling

camera support

  • Nikon D7200 (both 12bit and 14bit compressed NEFs)
  • Nikon Coolpix P340
  • Canon EOS 750D
  • Canon EOS 760D
  • Canon EOS M2
  • Panasonic DMC-CM1
  • Panasonic DMC-GF7 (4:3 only)
  • Olympus XZ-10
  • Olympus SP570UZ
  • Samsung NX500
  • Fuji F600EXR

aspect ratios

  • Panasonic DMC-G5
  • Panasonic DMC-GM5
  • Panasonic FZ200

white balance presets

  • Nikon D7200
  • Nikon Coolpix P340
  • Panasonic DMC-GM1
  • Panasonic DMC-GM5
  • Olympus E-M10 (updated)
  • Olympus E-PL7
  • Olympus XZ-10

noise profiles

  • Canon Powershot G9
  • Sony A350

basecurves

  • Nikon D7200
  • Nikon D7000
  • Nikon D750
  • Nikon D90

translations

  • Catalan
  • German
  • Spanish
  • Swedish

June 08, 2015

Adventure Dental

[Adventure Dental] This sign, in Santa Fe, always makes me do a double-take.

Would you go to a dentist or eye doctor named "Adventure Dental"?

Personally, I prefer that my dental and vision visits are as un-adventurous as possible.

June 06, 2015

Blender at SIGGRAPH 2015

SIGGRAPH 2015 is in downtown Los Angeles, 9-13 August. This is the highlight of the year for everyone who’s into 3D computer graphics. As usual you can find Blender users/developers all over, but especially here:

  • Sunday 3-5 PM: Birds of a Feather (free access)
    – Presentation of last year’s work and upcoming projects by chairman Ton Roosendaal. Includes time for everyone to speak up and share!
    – Viewing of the Cosmos Laundromat open movie (12 minutes)
    – Artists/developers demos and showcase, including several people of the BI crew.
  • Tuesday, Wednesday, Thursday: Tradeshow booth #1111
    Great place to meet, hang out, get demos, or share your feedback with everyone. You can always find plenty of Blender developers and users here.

Meeting Point: Triple 8, near Figueroa hotel.

  • Free Tradeshow tickets… gone
    The coupon code EXHBI9507 is now invalid (as of July 31st)… sorry.


Interesting Usertest and Incoming



A view of someone using the site and contributing

I ran across a neat website the other day for getting actual user feedback on your website: UserTesting. They have a free option called Peek that records a short (~5 min.) screencast of a user visiting the site and narrating their impressions.

Peek Logo

You can imagine this to be quite interesting to someone building a site.

It appears the service asks its testers to answer three specific questions (I am assuming this applies mainly to the free service):

  • What is your first impression of this web page? What is this page for?
  • What is the first thing you would like to do on this page? Please go ahead and try to do that now. Please describe your experience.
  • What stood out to you on this website? What, if anything, frustrated you about this site? Please summarize your thoughts regarding this website.

Here’s the actual video they sent me (can also be found on their website):

I don’t have much to say about the testing. It was very insightful and helpful to hear someone’s view on coming to the site fresh. I’m glad that my focus on simplicity is appreciated!

It was interesting that the navigation drawer wasn’t used, or found, until the very end of the session. It was also interesting to hear the tester’s thoughts while scrolling down the main page (is it so rare these days for content to be longer than a single screen, above the fold?).

Exposure Blended Panorama Coming Soon

The creator of the new processing project PhotoFlow, Andrea Ferrero, is being kind enough to take a break from coding to write a new tutorial for us: “Exposure Blended Panoramas with Hugin and PhotoFlow”!

I’ve been collaborating with him on getting things in order to publish and this looks like it’s going to be a fun tutorial!

Submitting

We’ve been talking back and forth trying to find a good workflow for contributors to be able to provide submissions as easily as possible. At the moment I translate any submissions into Markdown/HTML as needed, from whatever source the author decides to throw at me. This is less than ideal for me, but at least it’s nice and easy for authors, which is more important to me than the manual porting work.

Github Submissions

For those comfortable with Git and Github I have created a neat option to submit posts. You can fork my PIXLS.US repository from here:

https://github.com/patdavid/PIXLSUS

Just follow the instructions on that page, and issue a pull request when you’re done. Simple! :) You may want to communicate with me to let me know the status of the submission, in case you’re still working on it, or it’s ready to be published.
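
For anyone who hasn’t been through the pull-request dance before, the whole round trip looks roughly like this (a sketch; the branch and file names are made up for illustration, and YOURNAME is your Github username):

$ git clone https://github.com/YOURNAME/PIXLSUS.git
$ cd PIXLSUS
$ git checkout -b my-article
# write your post as a Markdown file, then:
$ git add my-article.md
$ git commit -m "Add draft of my article"
$ git push origin my-article
# ...and open a pull request against patdavid/PIXLSUS on Github.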

Any Old Files

Of course, if you want to submit some content, please don’t feel you have to use Github if you’re not comfortable with it. Feel free to write it any way that works best for you (as I said, my native build files are usually simple Markdown). You can also reach out to me and let me know what you may be thinking ahead of time, as I might be able to help out.

June 05, 2015

a half‑century of product fail

This is the final instalment of the mini‐series I ran on the usual social channels—twitter, g+, linkedin and xing—called wishful thinking breeds failed products. It distilled what I have witnessed and heard during 20 years in the (mobile) software industry.

To complete the round number of fifty, I present the final dozen + two wishful thoughts for future reference. I am curious if you recognise some of these:

‘The hardware specs are fixed, now we can start with the software design.’
‘We spent ages discussing this, trying to find a solution that pleased everyone.’
‘We thought we could spend a couple of man‐days on the low‐hanging usability fruit.’
‘The source is the ultimate documentation.’
‘There is no need to go for the gold‐taps solution.’
‘It does not hurt to have those features as well.’
‘One code base; fully cross‐platform.’
‘You have to pick your battles.’
‘Once you get familiar with the internal workings of our software, it becomes easy to use.’
‘Our specific target user group is: everyone.’
‘Our developers are very experienced; they make the UI of their modules as they see fit.’
‘We religiously adhere to the HIG.’
‘You can do that by [writing, running] a script.’
‘What do you mean “it’s all connected”? we just don’t have the time for those bits and pieces.’

ask not what this blog can do for you…

Now, what can you do? First of all, you can spread the word; share this blog post. Second, I invite you to connect via twitter, g+, linkedin, or xing; there is a new series starting, again with a thought every workday.

And third, if you recognise that some of the wishful thinking is practiced at your software project and you can and want to do something about it, then email or call us. We will treat your case in total confidence.

ps: you can check out part three if you missed it.

Time for some Kiki Fanart!

Kiki fanart is always welcome — so here is Banajune‘s take on Kiki!

krt09-kiki-color

June 04, 2015

designing interaction for creative pros /3

Part three of my LGM 2015 lecture (here are parts one and two). It is about equal opportunities in creative‐pro interaction. To see what I mean, let’s make something: a square.

two‐way street

There are two ways for masters to get the job done. The first way is to start somewhere and to keep chipping away at it until it is right:

[Animated GIF: creating a square by starting with a rectangle, putting its bottom-left corner into place, then sizing the top-right one to perfection]

So let’s throw in some material, move and size it (bam, bam, bam)—right, done. That was quick and the result is perfect.

like putty

This is called free working; squeeze it until it feels right. It is always hands‑on and I always move both my hands in a moulding motion when I think of it, to remind me what it feels like.

Although done by feeling, it is still fast and furious. Don’t mistake this for ‘trying out’, ‘fiddling’ or ‘let’s see where we end up’; that is for dilettantes. When masters pick up their tools, it is with great confidence that the result they have in mind will be achieved in a predictable, and short, amount of time.

on the other hand…

The second way for masters to get the job done is to plan a bit and then create a precise, parametric, set‑up:

[Top, bottom, left and right guide lines that mark out the perfect square]

This is called a jig. Now the master only has to ‘cut’ once and a perfect result is achieved:

[Animated GIF: top, bottom, left and right guide lines appear one by one, then the perfect square appears between them]

measure twice, cut once

This is called measured working. It is an analytical approach and involves planning ahead. It delivers precise results, to the limits of the medium. You will find it in many places; everywhere the hands‑on factor is zero, parameters are entered and—bam—the result is achieved in one stroke.

It might be tempting to think that setting up the jig always involves punching in numbers. However, making choices from discrete sets, e.g. picking a color from a set of swatches, is also part of it. Thus it is better to speak in general of entering parameters.

old‐skool

I did not make up all this by myself. I am indebted to this very cool book that goes deep into the matter of free and measured working, as practiced for centuries by masters. Luckily it is back in print:

[The cover of the book The Nature and Art of Workmanship, by David Pye]

Once you are familiar with this duality in how masters work, you can use it to analyse their workflows; for instance while reading this article about Brian Eno working with the band James.

In the sidebar (Eno’s Gear) it says ‘I don’t think he even saves the sounds that he gets. He just knocks them up from scratch every time’ about using one piece of gear, and ‘It’s stuffed full of his own presets’ about another. Reading that, I thought: that has, respectively, the vibe of free and measured working.

I have looped that insight back into my designs of creative‐pro software from then on. That is, giving equal importance to users building a collection of presets and knocking‑it‑up‐from‐scratch for tool set‑ups, configuring the work environment and assets (brush shapes, patterns, gradients, et cetera).

(There are more nuggets of that’s‐how‐masters‐work in the Eno article; see if you can spot them.)

the point

And with that I have arrived at rule numero one of this blog post:

All masters work free and measured; the only thing predictable about it is that it occurs 50–50, with no patterns.

We cannot rely on a given master taking the same route—free or measured—for all the different tasks they perform. It’s a mix, and a different mix for every master. Thus design strategies based on ‘remember if this user tends to do things free or measured’ are flawed.

We cannot rely on a given task being predominantly performed via either route—free or measured—by masters. It’s a mix, a 50–50 mix. Thus design strategies based on ‘analyse the task; is it free or measured?’ are flawed.

same, not same

The same master doing the same task will pick a different route—free or measured—at different times, based on the context they are in. For instance how difficult the overall project is. And for sure their own mood plays a role; are they under stress, are they tired (that night shift meeting that deadline)?

Masters will guesstimate the shortest route to success under the circumstances—and then take it.

dig it

With this 50–50 mix and no patterns, software for creative pros has only one choice:

Equal opportunity: offer every operation that users can perform in—at least—two ways: one free, one measured.

If you now say either ‘man, this will double my software in size’, or ‘yeah, my software already does that’, then my reply is: experience says that once we really start checking, you will see that current creative‐pro software achieves 60–80% equal opportunity.

how low can you go?

The question is not how we prevent this list of operations from ballooning. It is: are there any more innocent, boring, easy-to-overlook operations to go on our list? For instance: setting the document size. Yeah, boring, but often enough key to the creative result. A crop tool is the free way to do that operation.

From the Brian Eno episode above we have seen that it is not enough to filter the operations list by ‘does it change the creative end result?’ There we saw that meta‐operations (setting up tools, configuring the work environment and assets) are also fully in scope.

picture show

To illustrate all this, let’s look at some of my designs for Metapolator.

[Animated GIF: the parameters panel listing type parameters on both master and glyph level; for each parameter, values, modifications and effective values are listed. A popup is used to add a math operator (+) to a parameter (tension)]

This is measured central: the parameters panel. Literally, parameters are entered here and—bam—applied. With the popup action shown, the system is taken to the next level: expressions of change (e.g. new value = A × old + B) can be built, preferably for wide‐ranging selections.

[The curve of the bowl stroke of the b glyph being edited with some big handles]

Most on‑canvas interaction is by nature of the free variety. The hands‑on factor is simply up for grabs. In Metapolator this interaction complements the parameter panel shown above to achieve equal opportunity.

[A specimen showing text generated out of all letter-pair combinations of the word ‘adhesion’]

Specimens are a huge factor in the Metapolator design. It is the place to evaluate if the typefaces are right. That makes it also the logical place to squeeze it until it is right: free working.

All on‑canvas interaction is performed directly in the specimens for this reason. If that looks natural and normal to you, I say ‘you’re welcome.’ This is completely novel in the field of font design software.

[Four sliders for mixing fonts; above each slider blue markers, below each a number equivalent to its setting]

Here are these fellows again, the slider set for freely mixing master fonts to make new fonts. These new fonts are shown by the blue markers, so that users can feel the clustering and spread of these new fonts—clearly a component of free working.

The numbers you see are all editable, also quickly in a row. This supports measured working. That this number input is straightforward and gives predictable, repeatable results was a big factor for me in choosing the algorithm of these sliders over alternatives.

boom, boom

In short: software for creative pros has to offer every operation that users can perform in two ways: one free—squeeze it until it feels right—one measured—involving planning ahead, entering parameters and ‘cutting’ once.

That’s it for part three. Stay tuned for part four: how to be good.

June 03, 2015

We’ve done it!

We ended with €30,520 on Kickstarter and €3,108 through PayPal — making for a grand total of €33,628, and that means LOD, Animation and nine stretch goals. We’re so happy. It’s really an amazing result. So, thanks and hugs to all our supporters! And we promise to make Krita better and better — and better!

We’re already working on the surveys, and if you backed through PayPal and didn’t get a survey by next week, please mail us. For PayPal, we have to do a bit of manual work! That’s all for now; we’re a bit tired after the most intense 30 days of the year :-)

June 02, 2015

Piñon cones!

[Baby piñon cones] I've been having fun wandering the yard looking at piñon cones. We went all last summer without seeing cones on any of our trees, which seemed very mysterious ... though the book I found on piñon pines said they follow a three-year cycle. This year, nearly all of our trees have little yellow-green cones developing.

[piñon spikes with no cones] A few of the trees look like most of our piñons last year: long spikes but no cones developing on any of them. I don't know if it's a difference in the weather this year, or that three-year cycle I read about in the book. I also see on the web that there's a 2-7 year interval between good piñon crops, so clearly there are other factors.

It's going to be fun to see them develop, and to monitor them over the next several years. Maybe we'll actually get some piñon nuts eventually (or piñon jays to steal the nuts). I don't know if baby cones now means nuts later this summer, or not until next summer. Time to check that book out of the library again ...

Fedora Design Team Update (Two for One!)

Fedora Design Team Logo

I have been very occupied in recent weeks with piggies of various shapes, sizes, and missions in life [1], so I missed posting the last design team meeting update. This is going to be a quick two-for-one, with mostly links and no summary at all. I’ve been trying hard to run the meetings so the auto-generated summaries are more usable, but I am always happy for tips on doing this even better from meetbot pros (like you? :) )


Fedora Design Team Meeting 19 May 2015

Fedora Design Team Meeting 2 June 2015

See you next time?

Our meetings are every 2 weeks; we send reminders to the design-team mailing list and you can also find out if there is a meeting by checking out the design team category on FedoCal.


[1] Expect some explanation in a few weeks, or look for me or Dan Walsh at the Red Hat Summit later this month. :)

Twenty-four hours to go…

Kickstarter months are much longer than ordinary months. At least, so it seems to us! It’s also a really exciting time. But we’re nearing the finish line now.

The current score is €2,675 donated through PayPal and €28,463 pledged on Kickstarter! That’s a total of €31,138. That’s seven-and-a-half stretch goals! Two, however, are already claimed by the choose-your-stretch-goal award.

Big thanks to everyone who has joined to help make Krita better and better!

In any case, time for a last sprint! This time tomorrow morning, the campaign is over!


May 30, 2015

Google Photos - Can I get out?

Google Photos

Google Photos came out a couple of days ago and well, it looks great.

But it raises the question: what happens with my photos once I hand them over? Should I want to move elsewhere, what are my options?

Question 1: Does it take good care of my photos?

Good news: if you choose to back up originals (the non-free version), everything you put in will come back out unmodified. I tested this with a couple of different file types: plain JPEGs, RAW files and movies.

Once uploaded, you can download each file one-by-one through the action buttons on the top-right of your screen:

Photo actions

Downloaded photos have matching checksums, so that’s positive. It does what it promises.

Update: not quite, see below

Question 2: Can I get my photos out?

As mentioned before there’s the download button. This gives you one photo at a time, which isn’t much of an option if you have a rather large library.

You can make a selection and download them as a zip file:

Bulk download

The only downside is that it doesn’t work: once the selection is large enough, it silently fails.

There is another option, slightly more hidden:

Show in Google Drive

You can enable a magic “Google Photos” folder in the settings menu, which will then show up in Google Drive.

Combined with the desktop app, it allows you to sync back your collection to your machine.

I once again did my comparison test. See if you can spot the problem.

Original file:

$ ls -al _MG_1379.CR2 
-rwxr-xr-x@ 1 ruben  staff  16800206 Oct 10  2012 _MG_1379.CR2*
$ shasum -a 256 _MG_1379.CR2 
fbfb86dac6d24c6b25d931628d24b779f1bb95f9f93c99c5f8c95a8cd100e458  _MG_1379.CR2

File synced from Google Drive:

$ ls -al _MG_1379.CR2 
-rw-------  1 ruben  staff  1989894 May 30 18:38 _MG_1379.CR2
$ shasum -a 256 _MG_1379.CR2 
0769b7e68a092421c5b8176a9c098d4aa326dfae939518ad23d3d62d78d8979a  _MG_1379.CR2

My 16 MB RAW file has been compressed into something under 2 MB. That’s… bad.
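
If you want to run this comparison over a whole library instead of a single file, a small shell loop does it (a sketch; it assumes the originals live in ./originals and the Drive-synced copies in ./gphotos, with identical file names):

$ for f in originals/*; do
>   orig=$(shasum -a 256 "$f" | cut -d' ' -f1)
>   sync=$(shasum -a 256 "gphotos/$(basename "$f")" | cut -d' ' -f1)
>   [ "$orig" = "$sync" ] || echo "MISMATCH: $(basename "$f")"
> done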

Question 3: What about metadata?

Despite all the machine learning and computer vision technology, you’ll still want to label your events manually. There’s no way Google will know that “Trip to Thailand” should actually be labeled “Honeymoon”.

But once you do all that work, can you export the metadata?

As it stands, there doesn’t seem to be any way to do so. No API in sight (for now?).

Update: It’s supported in Google Takeout. But that’s still a manual (and painful) task. I’d love to be able to do continuous backups through an API.

Recommendations

The apps, the syncing, the sharing: it all works really, really well. But for now it seems to be a one-way story. If you use Google Photos, I highly recommend you keep a copy of your photos elsewhere. You might want them back one day.

What I’d really like to see:

  • A good API that allows access to all metadata. After all, it is my own data.
  • An explanation of why my RAW files were compressed. That’s exactly what you don’t want with RAW files.

Keeping an eye on it.


Comments | More on rocketeer.be | @rubenv on Twitter

May 29, 2015

Why aren't you using github?

Is a question we, Krita developers, get asked a lot. As in, many times a week. Some people are confused enough to think that github is somehow the "official" place to put git repositories -- more official than projects.kde.org, phabricator.kde.org, git.gnome.org or wherever else. Github, after all, is so much more convenient: you only need a github account, or you can log in with your social media account. It's so much more social, it's so cosy, and no worries about licensing either! So refreshing and modern.

So much better than, say, SourceForge ever was! Evil SourceForge, having failed to make a business out of hosting commercial software development projects is now descending to wrapping existing free software Windows installers in malware-distributing, ad-laden installer wrappers.

The thing is, though: Github might be the cool place to hack on code these days, the favourite place to host your projects -- but that is exactly what SourceForge was, too, back in the day. And Github's business model is exactly what SourceForge's was. And if that isn't a warning against giving your first-born children into the hands of a big, faceless, profit-oriented, venture-capital-backed company, then I don't know what is!

And yes, I have heard the arguments. Github is so familiar, so convenient, you can always remove your project (until Github decides to resurrect it, of course), and it's git, so you're not losing your code revision history! But what about the other artefacts: wiki, documents, bugs, tasks? Maybe you can export them now, I haven't checked, but what will you import them into?

I've spent over ten years of my life on Krita. I care about Krita. I don't want to run that sort of risk. One thing I've learned in the course of a mis-spent professional life is that you always should keep the core of your business in your own hands. You shouldn't outsource that!

So, one big reason for not moving Krita's development to github is that I simply do not trust them.

That's a negative reason, but there are also positive reasons. And they all have to do with KDE.

I know that a lot of people like to bitch about KDE -- they like to bitch about the layout of the forum, the performance of the repo browser, the size of the libraries, the releases of new versions of the Plasma Desktop, about fifteen-year-old conflicts with the FSF (which somehow prove to them that KDE isn't trustworthy...). The fact is that especially in the Linux world, a bunch of people decided ages ago they didn't like KDE, it wasn't their tribe, and they apparently find it enjoyable to kick like a mule every time we do something.

Well, shucks to them.

Then there are people for whom the free software world is a strange place. You don't see something like Corel Painter being hosted together with a bunch of other software on a bigger entity's website. So it's confusing, even strange, to many people to see that Krita shares a bug tracker, a forum, a mailing list platform and a git repository platform with a bunch of other projects that they aren't interested in.

Well, I see that as a learning moment.

And not as a hint that we should separate out and... start using github? Which would also mean sharing infra with a bunch of other projects, but without any sense of community?

Because that is what make KDE valuable for Krita: the community. KDE is a big community of people who are making free software for end users. All kinds of free software, a wild variety. But KDE as a community is extremely open. Anyone can get a KDE identity, and it doesn't take a lot of effort to actually get commit access to all the source code, to all projects. Once in, you can work on everything.

All the pieces needed to develop software are here: websites, forums, wikis, bug trackers, repo hosting, mailing lists, continuous integration, file hosting, todo management, calendaring, collaborative editing. The system admin team does an incredible job keeping it all up and running, and the best thing is: we own it. We, the community, own our platforms and our data. We cannot be forced by a venture capitalist to monetize our projects by adding malware installers. We own our stuff, which means we can trust our stuff.

And we can improve our platform: try to improve a closed-source, company-owned platform like github! So suggestions for improvement are welcome: we're now looking into phabricator, which is a very nice platform giving a lot of the advantages of github (but with some weird limitations: it very clearly wasn't made for hosting hundreds of git repos and hundreds of projects!), and we're looking into question-and-answer websites. Recently, the continuous integration system got improved a whole bunch. All awesome developments!

But moving development to github? Bad idea.

Interview with David Revoy


Could you tell us something about yourself?

I’m a 33-year-old French CG artist. I worked for many industries: traditional painting, illustration, concept art, teaching. Maybe you’ve already come across some of my artwork while browsing the web, for example my work on open movies (Sintel, Tears of Steel, Cosmos Laundromat) or on various board games (Philip Jose Farmer’s ‘The maker of universes’, Lutinfernal, BobbySitter) or book series (Fedeylin, Club of Magic Horse) and artworks like Alice in Wonderland or Yin Yang of World Hunger. Something I think specific about me is that I rarely accept ready-made ideas; I work to build my own opinions. This process leads me to reject many things accepted as normal by my contemporaries: TV, proprietary software, politics, religion… I despair when I hear someone saying “I do this or this because everyone does it”. I like independence, cats and deep blue color.

Do you paint professionally, as a hobby artist, or both?

I’m a happy artist doing both. Nowadays I work mainly on my own web comic, Pepper&Carrot. An open web comic done with Krita and supported by the readers. Managing everything on this project is hard and challenging, but extremely rewarding on a personal level. Pepper&Carrot is the project of my dreams.

What genre(s) do you work in?

I’ve worked in many genres, but currently I’m sticking to a homemade fantasy world for a general audience.

Whose work inspires you most — who are your role models as an artist?

I do not really have a role model, but I’m deeply impressed by artists able to melt the limits between industries, as Yoshitaka Amano did between concept art, illustration and painting.

How did you get to try digital painting for the first time?

My first real digital-painting contact was with Deluxe Paint II on MS-DOS in 1992. As a kid in the nineties, I was very lucky to have a computer at home. Fortunately, my parents and siblings were afraid of the home computer and I had it all to myself. For the younger generation reading this, just imagine: no internet, Windows 3.1, VGA graphics (640x480px, 256 colors).

What makes you choose digital over traditional painting?

I left the school system and my parents’ home at 18 years old. I was too much of a rebel to follow any type of studies and eager to start my own life far from any influence. I first worked as a street portraitist in Avignon. Outside the tourist season I started to do traditional painting. What I remember is the stock, the physical size of it: over 100 canvases take up a lot of room in a small apartment. I also had long drying times for commissions in oil, and when something wasn’t accepted by a client, I had to start over…

I discovered modern digital painting thanks to my first internet connection around 2000 and the first forums about it. I was amazed: brilliant colors, rich gradients, a lot of fantasy artworks. Before 2000, you had to pay for a book or go to exhibitions to see new artworks. And suddenly many artists were on the internet, and you could see thousands of artworks daily. Forums were starting to open everywhere and CG artists shared tips, tutorials and work-in-progress threads. The internet of CG artists was new, full of hope and full of humanity…

I bought a tablet to start to paint digitally during this period. I didn’t know many things about software, so my first years of digital painting were made with Photoshop Elements (bundled with the tablet). With digital painting, I could experiment with many themes I could never have sold on canvas. Then I met online publishers interested in my digital art and started to work more and more as a digital painter with an official Photoshop licence, Corel Painter, etcetera. In 2003 I ended my career as a traditional painter when a client decided to buy my whole stock of canvas.

How did you find out about Krita?

I first heard about Krita in forum news, around 2007. Krita was a Linux-only program at the time and I was still a Windows user then (I moved to using GNU/Linux full-time in 2009). I remember I spent time trying to install it and didn’t succeed. I had a dual-boot with Linux Mint 4.0 and I was already enthusiastic about open-source technologies, especially Blender.

My first contact with drawing in Krita was in 2009, when I was preparing my work as art director on the Sintel project and studied all the open source painting applications on Linux (Gimp, the Gimp-painter fork, Mypaint, Qaquarelle, Gogh, Krita, Drawpile, and even the web-based paint-chat ones). I really wanted to use only open source on GNU/Linux for the concept art. This crazy idea was a big turn in my career, and more so when I decided to stick to it after the Sintel project.

I spent my first years of 100% GNU/Linux using a mix of Gimp-painter 2.6, Mypaint and Alchemy. I published many tutorials and open DVDs about it, Chaos&Evolutions and Blend&Paint, to document all the new tips I found. But Gimp-painter 2.6 was getting harder and harder to install because all GNU/Linux distributions were pushing Gimp 2.8 as the default, and the two versions couldn’t live side by side. I wasn’t happy with Gimp 2.8: it was impossible to paint with it when they released it, and the Gimp-painter features I liked were not merged into the official release. Mypaint, on the other hand, was painfully transitioning to newer technologies, and the main developer left the project… I remember I felt stuck for a while, asking myself if my rebel move to GNU/Linux only was worth it. Nothing was really evolving positively for digital painting on GNU/Linux at this time.

Then I decided to start following Krita actively and invest as much time as I could in it. Krita wasn’t popular at all back then: 2.2/2.3 wasn’t ready for production, and in the first years that I used it I started out by accepting the various regressions. I adapted, reported bugs, helped other artists build it, showed off the features of new releases, communicated about it and, most important, kept painting with it. It was a good choice. I was convinced by three factors:

  1. the project vision, clearly set up to be a digital-painting application.
  2. the massive energy, passion and time put into it by Boudewijn Rempt, Dmitry Kazakov, Lukáš Tvrdý, Sven Langkamp and many other developers.
  3. the friendly community.

What was your first impression?

It was in 2009, and it was impossible to paint smoothly on a 1000x1000px canvas. Krita already had a lot of features: CMYK, rich brush engines, a single-window interface, selections, a transform tool, etcetera… but most of those were half working or broken when you wanted to make real use of them. The project was missing users, beta-testers. I’m proud to have reported over 200 bugs to the Krita bug tracker since then. Nowadays, I’m sort of part of the Krita team; I even made my first official commit last week.

What do you love about Krita?

This will sound geeky, but probably my favourite feature is the command-line exporter.

~$ krita in.kra --export-filename out.png

This feature is central to speeding up my workflow; I included this command in bash scripts to batch-transform Krita files into low-res JPGs, hi-res PNGs, and so on. It allows me to keep only a single source file in my work folder; all derived versions (internet version, publisher version) are auto-generated when the .kra source file changes. This way I’m never afraid of having to export everything again when I make a single change to a painting source file, or when one of the 16 languages of Pepper&Carrot gets an update. I just paint, save, and everything else (generation, watermarking, ftp upload and so on) is automatised.

Check the Source files of Pepper&Carrot if you are curious to see what automatised export output looks like. I wish the command-line export could do a bit more, for example adding the possibility to export top-level groups to multiple files.
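
For the curious, a stripped-down version of such a batch script could look like this. It is only a sketch, not David’s actual script: the folder layout is made up, and the low-res web copy assumes ImageMagick’s convert is installed.

#!/bin/sh
# Re-generate all derived versions from the .kra source files.
for src in sources/*.kra; do
    name=$(basename "$src" .kra)
    # hi-res PNG via Krita's command-line exporter
    krita "$src" --export-filename "hires/$name.png"
    # low-res JPG for the web, derived from the hi-res PNG
    convert "hires/$name.png" -resize 25% -quality 85 "web/$name.jpg"
done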

What do you think needs improvement in Krita? Is there anything that really annoys you?

Stability needs improvement.

I invite all Krita users who want to help make Krita more stable to report their bugs, and not to expect that someone else will do it for them or that the developers will see the problem by themselves.

But there is one big issue in this process: the bug-report website is not user-friendly at all and not visual, and its features (formatting, inserting pictures or videos) are very limited. If the Krita project wants to keep trusting the userbase alone to do volunteer beta-testing at a professional level, I think the project will need to make the life of the beta-testers easier.

It reminds me of how the Mypaint project was also affected by this with its old bug-tracker. When the project moved the bug tracker to Github, the number of new issues reported just went insane. Much discussion happens on it now; user avatars, formatting with titles/bold/italics and inserted pictures make it way more friendly and human. Look at this type of bug report, with image and all: it’s much better adapted to how artists, the general audience or visually driven people might want to report a bug. And I’m also pretty sure it helps developers to better see and solve the issues.

What sets Krita apart from the other tools that you use?

Krita (and digital-painting or digital-sculpting apps in general) is really a different thing from other software. Krita really needs to be realtime between me and the computer. Painting is a realtime experience, and when I paint I can really feel when Krita doesn’t follow the rhythm. I think that’s why everyone is so happy to see the Krita team working on the performance topic in the kickstarter campaign.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

It would be the latest episode of Pepper&Carrot. As an artist constantly evolving and changing, the latest piece is probably the one that says the most about where I am right now. Older artworks like the portrait of Charles Darwin or Lecture tell different stories, closer to where I was in 2012.

What techniques and brushes did you use in it?

I used my brush kit on it, and tried to paint directly what I had in mind using almost no extra layers. I painted it flat, as I would do for a production concept-art speed painting. Then I refined the level of detail on top and constrained myself not to smooth the result too much.

Where can people see more of your work?

Probably on my portfolio www.davidrevoy.com.

Anything else you’d like to share?

I invite you to network with me on twitter, google+, or deviantArt if you want to chat about Krita or follow my new artwork, tutorials and resources. I also started a Youtube channel with video tutorials about Krita. Do not hesitate to comment and share your tips or suggestions in the comments; I read them all and often reply. I’m also often connected to the IRC #krita channel on freenode, using the nickname ‘deevad’. See you there!