August 30, 2016

Back from Krita Sprint 2016

Last week, I spent 4 days at the Krita Sprint in Deventer, where several contributors gathered to discuss the current hot topics, draw and hack together.

You can read a global report of the event on krita.org news.

On my side, besides meeting old and new friends, and discussing animation, brushes and vector stuff, I made three commits:
  • replace some duplicate icons by aliases in qrc files
  • update the default workspaces
  • add a new “Eraser Switch Opacity” feature (this one is on a separate branch for now)

I also filed new tasks on phabricator for two feature requests to improve some color and animation workflow:

https://phabricator.kde.org/T3542

https://phabricator.kde.org/T3543

Once again, I feel it’s been a great and productive meeting for everyone. A lot of cool things are ready for the next Krita version, which is exciting! Many thanks to KDE e.V. for the travel support, and to the Krita Foundation for hosting the event and providing accommodation and food.

August 29, 2016

Happy Porting!

Last year, I wrote about how library authors should pretty darn well never ever make their users spend time on "porting". Porting is always a waste of time. No matter how important the library author thinks his newly fashionable way of doing stuff is, it is never ever as important as the time porting takes away from the application author's real mission: the work on their applications. I care foremost about my users; I expect a library author to care about their users, i.e., people like me.

So, today I was surprised by Goodbye, Q_FOREACH by Marc Mutz. (Well known for his quixotic crusade to de-Qt Qt.)

Well, fuck.

Marc, none, not a single one of all of the reasons you want to deprecate Q_FOREACH is a reason I care even a little bit about. It's going to be deprecated? Well, that's a decision, and a dumb one. It doesn't work on std containers, QVarLengthArray or C arrays? I don't use it on those. It adds 100 bytes of text size? Piffle. It makes it hard to reason about the loop for you? I don't care.

What I do care about are the 1559 places where we use Q_FOREACH in Krita. Porting this will take weeks.

Marc, I hope that you will have a patch ready for us on phabricator soon: you can add it to this project and keep iterating until you've fixed all the bugs.

Happy porting, Marc!

Come into the real world and learn how well this let's-deprecate-and-let-the-poor-schmuck-port-their-code attitude works out.

August 26, 2016

More map file conversions: ESRI Shapefiles and GeoJSON

I recently wrote about Translating track files between mapping formats like GPX, KML, KMZ and UTM. But there's one common mapping format that keeps coming up that's hard to handle using free software, and tricky to translate to other formats: ESRI shapefiles.

ArcGIS shapefiles are crazy. Typically they come as an archive that includes many different files, with the same base name but different extensions: filename.sbn, filename.shx, filename.cpg, filename.sbx, filename.dbf, filename.shp, filename.prj, and so forth. Which of these are important and which aren't?

To be honest, I don't know. I found this description in my searches: "A shape file map consists of the geometry (.shp), the spatial index (.shx), the attribute table (.dbf) and the projection metadata file (.prj)." Poking around, I found that most of the interesting metadata (trail name, description, type, access restrictions and so on) was in the .dbf file.

You can convert the whole mess into other formats using the ogr2ogr program. On Debian it's part of the gdal-bin package. Pass it the .shp filename, and it will look in the same directory for files with the same basename and other shapefile-related extensions. For instance, to convert to KML:

 ogr2ogr -f KML output.kml input.shp

Unfortunately, most of the metadata -- comments on trail conditions and access restrictions that were in the .dbf file -- didn't make it into the KML.

GPX was even worse. ogr2ogr knows how to convert directly to GPX, but that printed a lot of errors like "Field of name 'foo' is not supported in GPX schema. Use GPX_USE_EXTENSIONS creation option to allow use of the <extensions> element." So I tried ogr2ogr -f "GPX" -dsco GPX_USE_EXTENSIONS=YES output.gpx input.shp but that just led to more errors. It did produce a GPX file, but it had almost no useful data in it, far less than the KML did. I got a better GPX file by using ogr2ogr to convert to KML, then using gpsbabel to convert that KML to GPX.
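
For reference, here is that two-step fallback spelled out, with placeholder filenames:

 ogr2ogr -f KML tracks.kml input.shp
 gpsbabel -i kml -f tracks.kml -o gpx -F tracks.gpx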

Use GeoJSON instead to preserve the metadata

But there is a better way: GeoJSON.

ogr2ogr -f "GeoJSON" -t_srs crs:84 output.geojson input.shp

That preserved most, maybe all, of the metadata from the .dbf file and gave me a nicely formatted file. The only problem was that I didn't have any programs that could read GeoJSON ...

[PyTopo showing metadata from GeoJSON converted from a shapefile]

But JSON is a nice straightforward format, easy to read and easy to parse, and it took surprisingly little work to add GeoJSON parsing to PyTopo. Now, at least, I have a way to view the maps converted from shapefiles, click on a trail and see the metadata from the original shapefile.
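
If you just want to peek at that metadata without writing any code, and you happen to have jq installed, something like this prints the attributes of the first feature:

 jq '.features[0].properties' output.geojson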

August 25, 2016

Summer Talks, PurpleEgg

I recently gave talks at Flock in Krakow and GUADEC in Karlsruhe:

  • Flock: What’s Fedora’s Alternative to vi httpd.conf (Video; Slides: PDF, ODP)
  • GUADEC: Reworking the desktop distribution (Video; Slides: PDF, ODP)

The topics were different but related: the Flock talk was about how to make things better for a developer using Fedora Workstation as their development workstation, while the GUADEC talk was about the work we are doing to move Fedora to a model where the OS is immutable and separate from applications. A shared idea of the two talks is that your workstation is not your development environment. Installing development tools, language runtimes, and header files as part of your base operating system implies that every project you are developing wants the same development environment, and that simply is not the case.

At both talks, I demo’ed a small project I’ve been working on with the codename PurpleEgg (I didn’t have that codename yet at Flock – the talk instead talks about “NewTerm” and “fedenv”.) PurpleEgg is about easily creating containerized environments dedicated to a project, and about integrating those projects into the desktop user interface in a natural, slick way.

The command line client to PurpleEgg is called pegg:

[otaylor@localhost ~]$ pegg create django mydjangosite
[otaylor@localhost ~]$ cd ~/Projects/mydjangosite
[otaylor@localhost mydjangosite]$  pegg shell
[[mydjangosite]]$ python manage.py runserver
August 24, 2016 - 19:11:36
Django version 1.9.8, using settings 'mydjangosite.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

The “pegg create” step did the following:

  • Created a directory ~/Projects/mydjangosite
  • Created a file pegg.yaml with the following contents:
    base: fedora:24
    packages:
    - python3-virtualenv
    - python3-django
  • Created a Docker image that is the Fedora 24 base image plus the specified packages
  • Created a venv/ directory in the specified directory and initialized a virtual environment there
  • Ran ‘django-admin startproject’ to create the standard Django project

“pegg shell” did the following:

  • Checked to see if the Docker image needed updating
  • Ran a bash prompt inside the Docker image with a customized prompt
  • Activated the virtual environment

The end result is that, without changing the configuration of the host machine at all, in a few simple commands we got to a place where we can work on a Django project just as it is documented upstream.
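
To give a sense of what pegg is automating, here is a rough hand-rolled equivalent using Docker directly (the image tag and paths are purely illustrative):

# roughly what “pegg create” builds:
cat > Dockerfile <<'EOF'
FROM fedora:24
RUN dnf install -y python3-virtualenv python3-django
EOF
docker build -t mydjangosite-env .
# roughly what “pegg shell” gives you: a shell in that image with the project mounted
docker run -it -v ~/Projects/mydjangosite:/work -w /work mydjangosite-env bash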

But with the PurpleEgg application installed, you get more: you get search results in the GNOME Activities Overview for your projects, and when you activate a search result, you see a window like:

PurpleEgg-screenshot

We have a terminal interface specialized for our project:

  • We already have the pegg environment activated
  • New tabs also open within that environment
  • The prompt is uncluttered, with relevant information moved to the header bar
  • If the project is checked into Git, the header bar also tracks the Git branch

There’s a fair bit more that could be done: a GUI for creating and importing projects as in GNOME Builder, GUI integration for Vagrant and Docker, configuring frequently used commands in pegg.yaml, etc.

At the most basic, the idea is that server-side development is terminal-centric and also somewhat specialized – different languages and frameworks have different ways of doing things. PurpleEgg embraces working like that, but adds just enough conventions so that we can make things better for the developer – just because the developer wants a terminal doesn’t mean that all we can give them is a big pile of terminals.

PurpleEgg codedump is here. Not warrantied to be fit for any purpose.


August 24, 2016

Getting S3 Statistics using S3stat

I’ve been using Amazon S3 as a CDN for the LVFS metadata for a few weeks now. It’s been working really well and we’ve shifted a huge number of files in that time already. One thing that made me very anxious was the bill that I was going to get sent by Amazon, as it’s kinda hard to work out the total when you’re serving cough millions of small files rather than a few large files to a few people. I also needed to keep track of which files were being downloaded for various reasons and the Amazon tools make this needlessly tricky.

I signed up for the free trial of S3stat and so far I’ve been pleasantly surprised. It seems to do a really good job of graphing the spend per day and also allowing me to drill down into any areas that need attention, e.g. looking at the list of 404 codes various people are causing. It was fairly easy to set up, although it did take a couple of days to start processing logs (which is all explained in the setup). Amazon really should be providing something similar.

Screenshot from 2016-08-24 11-29-51

For people providing less than 200,000 hits per day it’s only $10, which seems pretty reasonable. For my use case (bazillions of small files) it rises to a little-harder-to-justify $50/month.

I can’t justify the $50/month for the LVFS, but luckily for me they have a Cheap Bastard Plan (their words, not mine!) which swaps a bit of advertising for a free unlimited license. Sounds like a fair swap, and means it’s available for a lot of projects where $600/yr is better spent elsewhere.

Devo Firmware Updating

Does anybody have a Devo RC transmitter I can borrow for a few weeks? I need model 6, 6S, 7E, 8, 8S, 10, 12, 12S, F7 or F12E — it doesn’t actually have to work, I just need the firmware upload feature for testing various things. Please reshare/repost if you’re in any UK RC groups that could help. Thanks!

August 18, 2016

Updating Firmware on 8Bitdo Game Controllers

I’ve spent a few days adding support for upgrading the firmware of the various wireless 8Bitdo controllers into fwupd. In my opinion, the 8Bitdo hardware is very well made and reasonably priced, and also really good retro fun.

Although they use a custom file format for firmware, and also use a custom flashing protocol (seriously hardware people, just use DFU!), it was quite straightforward to integrate into fwupd. I’ve created a few things to make this all work:

  • a small libebitdo library in fwupd
  • a small ebitdo-tool binary that talks to the device and can flash a vendor supplied .dat file
  • an ebitdo fwupd provider that uses libebitdo to flash the device
  • a firmware repo that contains all the extra metadata for the LVFS

I guess I need to thank the guys at 8Bitdo; after asking a huge number of questions they open sourced their OS-X and Windows flashing tools, and also allowed me to distribute the firmware binary on the LVFS. Doing both of those things made it easy to support the hardware.

Screenshot from 2016-08-18 10-36-56

The result of all this is that you can now do fwupd update when the game-pad is plugged in using the USB cable (not just connected via bluetooth) and the firmware will be updated to the latest version. Updates will show in GNOME Software, and the world is one step closer to being awesome.
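
If you want to try it from the command line, the usual fwupd client flow should apply once the firmware is published on the LVFS (a minimal sketch, assuming the fwupdmgr client is installed):

fwupdmgr refresh      # fetch the latest firmware metadata from the LVFS
fwupdmgr get-devices  # check that the controller shows up over USB
fwupdmgr update       # flash the new firmware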

August 17, 2016

Making New Map Tracks with Google Earth

A few days ago I wrote about track files in maps, specifically Translating track files between mapping formats. I promised to follow up with information on how to create new tracks.

For instance, I have some scans of old maps from the 60s and 70s showing the trails in the local neighborhood. There's no newer version. (In many cases, the trails have disappeared from lack of use -- no one knows where they're supposed to be even though they're legally trails where you're allowed to walk.) I wanted a way to turn trails from the old map into GPX tracks.

My first thought was to trace the old PDF map. A lot of web searching found a grand total of one page that talks about that: How to convert image of map into vector format?. It involves using GIMP to make an image containing just black lines on a white background, saving as uncompressed TIFF, then using a series of commands in GRASS. I made a start on that, but it was looking like it might be a big job that way. Since a lot of the old trails are still visible as faint traces in satellite photos, I decided to investigate tracing satellite photos in a map editor first, before trying the GRASS method.

But finding a working open source map editor turns out to be basically impossible. (Opportunity alert: it actually wouldn't be that hard to add that to PyTopo. Some day I'll try that, but now I was trying to solve a problem and hoping not to get sidetracked.)

The only open source map editor I've found is called Viking, and it's terrible. The user interface is complicated and poorly documented, and I could input only two or three trail segments before it crashed and I had to restart. Saving often, I did build up part of the trail network that way, but it was so slow and tedious restoring between crashes that I gave up.

OpenStreetMap has several editors available, and some of them are quite good, but they're (quite understandably) oriented toward defining roads that you're going to upload to the OpenStreetMap world map. I do that for real trails that I've walked myself, but it doesn't seem appropriate for historical paths between houses, some of which are now fenced off and few of which I've actually tried walking yet.

Editing a track in Google Earth

In the end, the only reasonable map editor I found was Google Earth -- free as in beer, not speech. It's actually quite a good track editor once I figured out how to use it -- the documentation is sketchy and no one who writes about it tells you the important parts, which were, for me:

Click on "My Places" in the sidebar before starting, assuming you'll want to keep these tracks around.

Right-click on My Places and choose Add->Folder if you're going to be creating more than one path. That way you can have a single KML file (Google Earth creates KML/KMZ, not GPX) with all your tracks together.

Move and zoom the map to where you can see the starting point for your path.

Click the "Add Path" button in the toolbar. This brings up a dialog where you can name the path and choose a color that will stand out against the map. Do not hit Return after typing the name -- that will immediately dismiss the dialog and take you out of path editing mode, leaving you with an empty named object in your sidebar. If you forget, like I kept doing, you'll have to right-click it and choose Properties to get back into editing mode.

Iconify, shade or do whatever your window manager allows to get that large, intrusive dialog out of the way of the map you're trying to edit. Shade worked well for me in Openbox.

Click on the starting point for your path. If you forgot to move the map so that this point is visible, you're out of luck: there's no way I've found to move the map at this point. (You might expect something like dragging with the middle mouse button, but you'd be wrong.) Do not in any circumstances be tempted to drag with the left button to move the map: this will draw lots of path points.

If you added points you don't want -- for instance, if you dragged on the map trying to move it -- Ctrl-Z doesn't undo, and there's no Undo in the menus, but Delete removes previous points. Whew.

Once you've started adding points, you can move the map using the arrow keys on your keyboard. And you can always zoom with the mousewheel.

When you finish one path, click OK in its properties dialog to end it.

Save periodically: click on the folder you created in My Places and choose Save Place As... Google Earth is a lot less crashy than Viking, but I have seen crashes.

When you're done for the day, be sure to File->Save->Save My Places. Google Earth apparently doesn't do this automatically; I was forever being confused why it didn't remember things I had done, and why every time I started it it would give me syntax errors on My Places saying it was about to correct the problem, then the next time I'd get the exact same error. Save My Places finally fixed that, so I guess it's something we're expected to do now and then in Google Earth.

Once I'd learned those tricks, the map-making went fairly quickly. I had intended only to trace a few trails then stop for the night, but when I realized I was more than halfway through I decided to push through, and ended up with a nice set of KML tracks which I converted to GPX and loaded onto my phone. Now I'm ready to explore.

August 16, 2016

Design Team Fedora Activity Day (FAD) Event Report

Fedora Design Team Logo

design team fad attendees portrait

From left to right: Mo Duffy, Marie Nordin, Masha Leonova, Chris Roberts, Radhika Kolathumani, Sirko Kemter (photo credit: Sirko Kemter)

Two weekends ago now, we had a 2-day Fedora Activity Day (heh, a 2-day day) for the Fedora Design Team. We had three main goals for this FAD, although one of them we didn’t cover (:-() :

  • Hold a one-day badges hackfest – the full event report is available for this event – we have wanted to do an outreach activity for some time so this was a great start.
  • Work out design team logistics – some of our members have changed location causing some meeting time issues despite a few different attempts to work around them. We had a few other issues to tackle too (list to come later in this post.) We were able to work through all points and come up with solutions except for one (we ran out of time.)
  • Usability test / brainstorm on the Design Team Hub on Fedora Hubs – so the plan was that the Design Team Hub would be nearly ready for the Flock demo the next week, but this wasn’t exactly the case so we couldn’t test it. With all of the last-minute prep for the workshop event, we didn’t have any time to have much discussion on hubs, either. We did, however, discuss some related hub needs in going through our own workflow in our team logistics discussion, so we did hit on this briefly.

So I’m going to cover the topics discussed aside from the workshop (which already has a full event report), but first I want to talk a little bit about the logistics of planning a FAD and how that worked out first since I totally nerded out on that aspect and learned a lot I want to share. Then, I’ll talk about our design team discussion, the conclusions we reached, and the loose ends we need to tie up still.

Logistics

I had already planned an earlier Design Team FAD for January 2015, so I wasn’t totally new to the process. There were definitely challenges though.

Budget

First, we requested funding from the Fedora Council in late March. We found out 6 weeks later (early May, a little less than 3 months before the event) that we had funding approval, although the details about how that would work weren’t solidified until less than 4 weeks before the event.

Happily, I assumed it’d be approved, filed a request to use the Red Hat Westford facility for the event. There were two types of tickets I had to file for this – a GWS Special Event Request and a GWS Meeting Support Request. The special event request was the first one – I filed that on June 1 (2 months ahead) and it was approved June 21 (took about 3 weeks.) Then, on 7/25 the week before the event, I filed the meeting support request to have the room arranged in classroom style as well as open up the wall between the two medium-sized conference rooms so we had one big room for the community event. I also set up a meeting with the A/V support guy, Malcolm, to get a quick run through of how to get that working. It was good I went ahead and filed the initial request since it took 3 weeks to go through.

The reason it took a while to work out the details on the budget was because we scheduled the event for right before Flock, which meant coordinating / sharing budgets. We did this both to save money and also to make sure we could discuss design-team related Flock stuff before heading to Flock. While this saved some money ultimately, IMHO the complications weren’t worth it:

  • We had to wait for the Flock talk proposals to be reviewed and processed before we knew which FAD attendees would also be funded for Flock, which delayed things.
  • Since things were delayed from that, we ended up missing on some great flight pricing, which meant Ryan Lerch wasn’t able to come 🙁
  • To be able to afford the attendees we had with less than 4 weeks to go, we had to do this weird flight nesting trick jzb figured out. Basically, we booked home<=>BOS round trip tickets, then BOS<=>KRK round trip tickets. This meant Sirko had to fly to Boston after Flock before he could head home to PNH, but it saved a *ton* of money.
fad budget spreadsheet screenshot

behold, our budget

Another complication: we maxed out my corporate card limit before everything was booked. 🙂 I now have a credit increase, so hopefully next event this won’t happen!

The biggest positive budget-wise for this event was the venue cost – free. 🙂 Red Hat Westford kindly hosted us.

I filed the expense reports for the event this past week, and although the entire event was under budget, we had some unanticipated costs as well as a small overage in food budget:

  • Our original food budget was $660. We spent $685.28. We were $25.28 over. (Pretty good IMHO. I used an online pizza calculator to figure out budget for the community event and was overly generous in how much pizza people would consume. 🙂 )
  • We spent $185.83 in unanticipated costs. This included tolls (it costs $3.50 to leave Logan Airport), parking fees, gas, and hotel taxes ($90 in hotel taxes!)

Lessons Learned:

  • Sharing budget with other events slows your timeline down – proceed with caution!
  • Co-location with another event is a better way to share costs logistically.
  • Pizza calculators are a good tool for figuring out food budget. 🙂
  • Budget in a tank of gas if you’ve got a rental.
  • Figure out what tolls you’ll encounter. Oh and PAY CASH, in the US EzPass with a rental car is a ripoff.
  • Ask the hotel for price estimates including taxes/fees.

Transportation

I rented a minivan to get folks between Westford and the airport as well as between the hotel and the office. I carpool with my husband to work, so I picked it up near the Red Hat Westford office and set up the booking so I was able to leave it at Logan Airport after the last airport run.

Our chariot. I cropped him out of the portrait. Sorry, Toyota Sienna! It has nice pickup. I still am never buying a minivan ever, even if I have more kids. Never minivan, never!

With international flights, folks coming in on different nights, and the fact that I actually live much closer to the airport than to the hotel up in Westford (they’re an hour apart), I was really worn down by the time the FAD started: I had 3 nights in a row leading up to the FAD where I wasn’t getting home until midnight at the earliest, and I had logged many hours driving, particularly in brutal Boston rush hour traffic. For dropoffs, it was not as bad, as everybody left on the same day and there were only 2 airport trips then. Still – not getting home before my kids went to bed and my lack of sleep were a definite strain on my family.

So we had a free venue, but at a cost. For future FAD event planners, I would recommend either trying to get flights coming in on the same day as much as possible and/or sharing the load of airport pickups. Even better, would be to hold the event closer to the airport, but this wasn’t an option for us because of the cost that would entail and the fact we have such a geographically-distributed team.

The transportation situation – those time estimates aren’t rush hour yet!

One thing that went very well that is common sense but bears repeating anyway – if you’re picking folks up from the airport, get their phone #’s ahead of time. Having folks phone numbers made pickup logistics waaaaay easier. If you have international numbers, look up how to dial them ahead of time. 🙂

Lessons Learned:

  • Try hard to cluster flights when possible to make for less pickups if the distance between airport / venue is great.
  • If possible, share responsibility for driving with someone to spread the load.
  • Closer to the airport logistically means spending less time in a car and less road trips, leaving more time for hacking.
  • Don’t burn yourself out before the event even starts. 🙂
  • Collect the phone numbers of everyone you’re picking up, or provide them some way of contacting you just in case you can’t find each other.

We’re dispersed… (original list of attendees’ locations or origin)

Food

This one went pretty smoothly. Westford has a lot of restaurants; actually, we have a lot more restaurants in Westford with vegetarian options than we did less than 2 years ago at the last Design Team FAD.

For the community event, the invite mentioned that we’d be providing pizzas. We had some special dietary requests from that, so I looked up pizza places that could accommodate them, would deliver, and had good ratings. There were two that met the criteria, so I went with the one that had the best ratings.

Since the Fedora design team FAD participants were leading / teaching the session, I went over the menu with them the day before the community event, took their orders for non-pizza sandwiches/salads, and called the order in right then and there. (I worried placing the order too far in advance would mean it’d get lost in the shuffle. Lesson learned from the 2015 FAD where Panera forgot our order!) Delivery was a must, because of the ease of not having to go and pick it up.

For snacks, we stopped by a local supermarket either before or after lunch on the first day and grabbed whatever appealed to us. Total bill: $30, and we had tons of drinks and yummy snacks (including fresh blueberries) that tided us over the whole weekend and were gone by the end.

We were pretty casual with other meals. Folks at the hotel had breakfast at the hotel, which meant less receipts to track for me. We just drove to places close by for lunch and dinner, and being a local + vegetarian meant we had options for everybody. I agonized way too much about lunch and dinner last FAD (although there were less options then.) Keeping it casual worked well this time; one night we tried to have dinner at a local Indian place and found out they had recently been evicted! (Luckily, there was a good Indian place right down the road.)

Lessons Learned:

  • For large orders, call in the day before and not so far in advance that the restaurant forgets your order.
  • Supermarkets are a cheap way to get a snack supply. Making it a group run ensures everyone has something they can enjoy.
  • Having a local with dietary restrictions can help make sure food options are available for everyone.

Okay, enough for logistics nerdery. Let’s move on to the meat here!

Design Team Planning

We spent most of the first day on Fedora Design team planning with a bit of logistics work for the workshop the following day. First, we started by opening up an Inkscape session up on the projector and calling out the stuff we wanted to work on. It ended up like this:

Screenshot of FAD brainstorming session from Inkscape

But let’s break it down because I suspect you had to be there to get this. Our high-level list of things to discuss broke down like this:

Discussion Topics

  • Newcomers
    – how can we better welcome newcomers to the team?
  • Pagure migration
    – Fedora Trac is going to be sunset in favor of Pagure. How will we manage this transition?
  • Meeting times
    – we’ve been struggling to find a meeting time that works for everyone because we are so dispersed. What to do?
  • Status of our ticket queue
    – namely, our ticket system doesn’t have enough tickets for newbies to take!
  • Badges
    – conversely, we have SO MANY badge tickets needing artwork. How to manage?
  • Distro-related design
    – we need to create release artwork every release, but there’s no tickets for it so we end up forgetting about it. What to do?
  • Commops Thread
    – this point refers to Justin’s design-team list post about ambassadors working with the design team – how can we better work with ambassadors to get nice swag out without compromising the Fedora brand?

Let’s dive into each one.

Newcomers

This is the only topic I don’t think we fully explored. We did have some ideas here though:

  • Fedora Hubs will definitely help provide a single landing page for newcomers to see what we’re working on in one place to get a feel for the projects we have going on – right now our work is scattered. Having a badge mission for joining the design team should make for a better onboarding experience – we need to work out what badges would be on that path though. One of the pain points we talked about was how incoming newbies go straight to design team members instead of looking at the ticket queue, which makes the process more manual and thus slower. We’re hoping Hubs can make it more self-service.
  • We had the idea to have something like whatcanidoforfedora.org, but specifically for the design team. One of the things we talked about is having it serve up tickets tagged with a ‘newbie’ tag from both the design-team and badges ticket systems, and have the tickets displayed by category. (E.g., are you interested in UX? Here’s a UX ticket.) The tricky part – our data wouldn’t be static as whatcanidoforfedora.org’s is – we wouldn’t want to present people with a ticket that was already assigned, for example. We’d only want to present tickets that were open and unassigned. Chris did quite a bit of investigation into this and seems to think it might be possible to modify asknot-ng to support this.
  • A Fedora Hubs widget that integrated with team-specific asknot instances was a natural idea that came out of this.
  • We do regular ticket triage during meetings. We decided as part of that effort, we should tag tickets with a difficulty level so it’s easier to find tickets for newbies, and maybe even try to have regular contributors avoid the easy ones to leave them open for newbies. We had some discussion about ticket difficulty level scales that we didn’t get to finish – at one point we were thinking:
    • Easy (1 point) (e.g., a simple text-replacement badge.)
    • Moderate (3 points) (e.g., a fresh badge concept with new illustration work.)
    • Difficult / Complex (10 points) (e.g., a minor UX project or a full badge series of 4-5 badges with original artwork.)

    Or something like this, and have a required number of points. This is a discussion we really need to finish.

  • Membership aspects we talked about – what level of work do we want to require for team membership? Once a member, how much work do we want to require (if any) to stay “current?” How long should a membership be inactive before we retire it? (Not to take anything away from someone – but it’s handy to have a list of active members and a handle on how many active folks there are to try to delegate tasks and plan things like this FAD or meetups at Flock.) No answers, but a lot of hard questions. This came up naturally thinking about membership from the beginning to the end.
  • We talked about potentially clearing inactive accounts out of the design-team group and doing this regularly. (By inactive, we mean FAS account has not been logged into from any Fedora service for ~1 year.)
  • Have a formal mentor process, so as folks sign up to join the team, they are assigned a mentor, similar to the ambassador process. Right now, we’re a bit all over the place. It’d be nice for incoming folks to have one person to contact (and this has worked well in the past, e.g., Mo mentoring interns, and Marie mentoring new badgers.)

Pagure migration

We talked about what features we really needed to be able to migrate:

  • The ability to export the data, since we use our trac tickets for design asset storage. We found out this is being worked on, so this concern is somewhat allayed.
  • The ability to generate reports for ticket review in meetings. (We rely on the custom reports Chris and Paul Frields created for us at the last FAD.) We talked through this and decided we wanted a few things:
    • We’d like to be able to do an “anti-tag” in pagure. So we’d want to view a list of tickets that did not have the “triage” tag on them, so we could go through them and triage them, and add a ‘triage’ tag as we completed triage. That would help us keep track of what new tickets needed to be assessed and which had already been evaluated.
    • We’d like some time-based automation of tag application, but don’t know how that would work. For example, right now if a reporter hasn’t responded for 4 weeks, we classify that ticket as “stalled.” So we’d want tickets where the reporter hasn’t responded in 4 weeks to be marked as “stalled.” Similarly, tickets that haven’t had activity for 2 weeks or more are considered “aging”, so we’d like an “aging” tag applied to them. So on and so forth.
    • We need attachment support for tickets – we discovered this was being worked on too. Currently pagure supports PNG image attachments but we have a wider range of asset types we need to attach – PDFs, Scribus SLAs, SVGs, etc. We tested these out in pagure and they didn’t work.

We agreed we need to follow up with pingou on our needs and our ideas here to see if any of these RFEs (or other solutions) could be worked out in Pagure. We were pretty excited that work was already happening on some of the items we thought would help meet our needs in being able to migrate over.

We don’t have enough tickets! (AKA we are too awesome)

We tend to grab tickets and finish them (or at least hold on to them) pretty quickly on the design team these days. This makes it harder for newbies to find things to work on to meet our membership requirement. We talked about a couple of things here, in addition to related topics already covered in the newbie discussion summary:

  • We need to be more strict about removing assignees from tickets with inactivity. If we’ve pinged the ticket owner twice (which should happen in at least a 4 week period of inactivity from the assignee) and had no response, we should unapologetically just reopen up the ticket for others to take. No hard feelings! Would be even better if we could automate this….
  • We should fill out the ticket queue with our regular release tasks. Which leads to another topic…

Distro-related design (Release Artwork)

Our meetings are very ticket-driven, so we don’t end up covering release artwork during them. Which leads to a scramble… we’ve been getting it done, but it’d be nice for it to involve less stress!

Ideally, we’d like some kind of solution that would automatically create tickets in our system for each work item per release once a new release cycle begins… but we don’t want to create a new system for trac since we’ll be migrating to pagure anyway. So we’ll create these tickets manually now, and hope to automate this once we’ve migrated to pagure.

We also reviewed our release deliverables and talked through each. A to-do item that came up here: We should talk to Jan Kurik and have him remove the splash tasks (we don’t create those splash screens anymore) and add social media banner tasks (we’ve started getting requests for these.) We should also drop CD, DVD, and DVD for multi, and DVD for workstation (transcribing this now I wonder if it’s right.) We also should talk to bproffitt about which social media Fedora users the most and what kind of banners we should create for those accounts for each release. So in summary: we need to drop some unnecessary items from the release schedule that we don’t create anymore, and we should do more research about social media banners and have them added to the schedule.

Another thing I forgot when I initially posted this – we need some kind of entropy / inspiration to keep our default wallpapers going. For the past few releases, we’ve gotten a lot of positive feedback and very few complaints, but we need more inspiration. An idea we came up with was to have a design-team internal ‘theme scheme’ where we go through the letters of the alphabet and draw some inspiration from an innovator related to that letter. We haven’t picked one for F25 yet and need to soon!

Finally, we talked about wallpapers. We’d like for the Fedora supplemental wallpapers to be installed by default – they tend to be popular but many users also don’t know they are there. We thought a good solution might be to propose an internship (maybe Outreachy, maybe GSoC?) to revive an old desktop team idea of wallpaper channels, and we could configure the Fedora supplementals to be part of the channel by default and maybe Nuancier could serve them up.

Badges

We never seem to have time to talk through the badges tickets during our meetings, and there are an awful lot of them. We talked about starting to hold a monthly badge meeting to see if this will address it, with the same kind of ticket triage approach we use for the main design team meetings. Overall, Marie and Maria have been doing a great job mentoring baby badgers!

Commops Thread

We also covered Justin’s design-team list post about ambassadors working with the design team, particularly about swag as that tends to be a hot-button issue. For reasons inexplicable to me except perhaps that I am a spaz, I stopped taking notes in Inkscape and started using the whiteboard on this one:

photo of whiteboard (contents described below)

Swag discussion whiteboard (with wifi password scrubbed 🙂 )

We had a few issues we were looking to address here:

  • Sometimes swag is produced too cheaply and doesn’t come out correctly. For example, recently Fedora DVDs were produced with sleeves where Fedora blue came out… black. (For visuals of some good examples compared to bad examples with these sorts of mistakes, check this out.)
  • Sometimes ambassadors don’t understand which types of files to send to printers – they grab a small size bitmap off of the wiki without asking for the print-ready version and things come out pixelated or distorted.
  • Sometimes files are used that don’t have a layer for die cutting – which results in sticker sheets with no cuts that you have to manually cut out with scissors (a waste!)
  • Sometimes files are sent to the printer with no bleeds – and the printer ends up going into the file and manipulating it, sometimes with disastrous results. If a design team member had been involved, they would have known to set the bleeds before sending to the printer.
  • Generally, printers sometimes have no clue, and without a designer working with them they make guesses that are oftentimes wrong and result in poor output.
  • Different regions have different price points and quality per type of item. For example, DVD production in Cambodia is very, very expensive – but print and embroidery items are high-quality and cheap.

Overall, we had concerns about money getting wasted on swag when – with a little coordination – we could produce higher-quality products and save money.

We brainstormed some ideas that we thought might help:

  • Swag quality oversight – Goods produced too cheaply hurt our brand. Could we come up with an approved vendor list, so we have some assurances of a base level of quality? This can be an open process, so we can add additional vendors at any time, but we’ll need some samples of work before they can be approved, and keep logs of our experience with them.
  • Swag design oversight – Ambassadors enjoy their autonomy. We recognize that’s important, but at a certain point sometimes overenthusiastic folks without design knowledge can end up spending a lot of money on items that don’t reflect our brand too well. We thought about setting some kind of cap – if you’re spending more than say $100 on swag, you need design team signoff – a designer will work with you to produce print-ready files and talk to the vendor to make sure everything comes out with a base quality level.
  • Control regional differences – Could we suggest one base swag producer per ambassador region, and indicate what types of products we use them for by default? Per product, we should have a base quality level requirement – e.g., DVDs cannot be burnt – they must be pressed.
Okay, I hope this is a fair summary of the discussion. I feel like we could have an entire FAD that focused just on swag. I think we had a lot of ideas here, and it could use more discussion too.

Meeting Times

We talked about meeting times. There is no way to get a meeting time that works for everybody, so we decided to split into North America / EMEA / LATAM, and APAC regions. Sirko, Ryan Lerch, and Yogi will lead the APAC time (as of yet to be determined.) And the North America / LATAM / EMEA time will be the traditional design team time – Thursdays at 10 AM ET. Each region will meet on a rotating basis, so one week it’ll be region #1, the next region #2. Each region will meet at least 2x a month then.

How do we stay coordinated? We came up with a cool idea – the first item of each meeting will be to review the meetbot logs from the other region’s last meeting. That way, we’ll be able to keep up with what the other region is doing, and any questions/concerns we have, they’ll see when they review our minutes the next week. We haven’t had a chance to test this out yet, but I’m curious to see how it works in practice!

Fun

Chris’ flight left on Sunday morning, but everybody else had flights over to Poland which left in the evening, so before we went to the airport, we spent some time exploring Boston. First we went to the Isabella Stewart Gardner Museum, as it was a rainy day. (We’d wanted to do a walking tour.) We had lunch at Boloco, a cool Boston burrito chain, then the sun decided to come out, so we found a parking spot by Long Wharf and I gave everybody a walking tour of Quincy Market and the North End. Then we headed to the airport and said our goodbyes. 🙂

From left to right: Mo, Masha, Marie, Radhika

What’s Next?

There are a lot of little action items embedded here. We covered a lot of ground, but we have a lot more work to do! OK, it’s taken me two weeks to get to this point and I don’t want this blog post delayed anymore, so I’m just going for it and posting now. 🙂 Enjoy!

    August 14, 2016

    Translating track files between mapping formats

    I use map tracks quite a bit. On my Android phone, I use OsmAnd, an excellent open-source mapping tool that can download map data generated from free OpenStreetMap, then display the maps offline, so I can use them in places where there's no cellphone signal (like nearly any hiking trail). At my computer, I never found a decent open-source mapping program, so I wrote my own, PyTopo, which downloads tiles from OpenStreetMap.

    In OsmAnd, I record tracks from all my hikes, upload the GPX files, and view them in PyTopo. But it's nice to go the other way, too, and take tracks or waypoints from other people or from the web and view them in my own mapping programs, or use them to find them when hiking.

    Translating between KML, KMZ and GPX

    Both OsmAnd and PyTopo can show Garmin track files in the GPX format. PyTopo can also show KML and KMZ files, Google's more complicated mapping format, but OsmAnd can't. A lot of track files are distributed in Google formats, and I find I have to translate them fairly often -- for instance, lists of trails or lists of waypoints on a new hike I plan to do may be distributed as KML or KMZ.

    The command-line gpsbabel program does a fine job translating KML to GPX. But I find its syntax hard to remember, so I wrote a shell alias:

    kml2gpx () {
            gpsbabel -i kml -f $1 -o gpx -F $1:t:r.gpx
    }
    
    so I can just type kml2gpx file.kml and it will create a file.gpx for me.

    More often, people distribute KMZ files, because they're smaller. They're just gzipped KML files, so the shell alias is only a little bit longer:

    kmz2gpx () {
            kmlfile=/tmp/$1:t:r.kml 
            gunzip -c $1 > $kmlfile
            gpsbabel -i kml -f $kmlfile -o gpx -F $kmlfile:t:r.gpx
    }
    

    Of course, if you ever have a need to go from GPX to KML, you can reverse the gpsbabel arguments appropriately; and if you need KMZ, run gzip afterward.
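
    For instance, a sketch of the reverse direction following the same pattern (untested, zsh-style modifiers as above):

    gpx2kmz () {
            kmlfile=/tmp/$1:t:r.kml
            gpsbabel -i gpx -f $1 -o kml -F $kmlfile
            gzip -c $kmlfile > $1:t:r.kmz
    }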

    UTM coordinates

    A couple of people I know use a different format, called UTM, which stands for Universal Transverse Mercator, for waypoints, and there are some secret lists of interesting local features passed around in that format.

    It's a strange system. Instead of using latitude and longitude like most world mapping coordinate systems, UTM breaks the world into 60 longitudinal zones. UTM coordinates don't usually specify their zone (at least, none of the ones I've been given ever have), so if someone gives you a UTM coordinate, you need to know what zone you're in before you can translate it to a latitude and longitude. Then a pair of UTM coordinates specifies easting and northing, which tell you where you are inside the zone. Wikipedia has a map of UTM zones.

    Note that UTM isn't a file format: it's just a way of specifying two (really three, if you count the zone) coordinates. So if you're given a list of UTM coordinate pairs, gpsbabel doesn't have a ready-made way to translate them into a GPX file. Fortunately, it allows a "universal CSV" (comma separated values) format, where the first line specifies which field goes where. So you can define a UTM UniCSV format that looks like this:

    name,utm_z,utm_e,utm_n,comment
    Trailhead,13,0395145,3966291,Trailhead on Buckman Rd
    Sierra Club TH,13,0396210,3966597,Alternate trailhead in the arroyo
    
    then translate it like this:
    gpsbabel -i unicsv -f filename.csv -o gpx -F filename.gpx
    
    I (and all the UTM coordinates I've had to deal with) are in zone 13, so that's what I used for that example and I hardwired that into my alias, but if you're near a zone boundary, you'll need to figure out which zone to use for each coordinate.

    I also know someone who tends to send me single UTM coordinate pairs, because that's what she has her Garmin configured to show her. For instance, "We'll be using the trailhead at 0395145 3966291". This happened often enough, and I got tired of looking up the UTM UniCSV format every time, that I made another shell function just for that.

    utm2gpx () {
            unicsv=`mktemp /tmp/point-XXXXX.csv` 
            gpxfile=$unicsv:r.gpx 
            echo "name,utm_z,utm_e,utm_n,comment" >> $unicsv
            printf "Point,13,%s,%s,point" $1 $2 >> $unicsv
            gpsbabel -i unicsv -f $unicsv -o gpx -F $gpxfile
            echo Created $gpxfile
    }
    
    So I can say utm2gpx 0395145 3966291, pasting the two coordinates from her email, and get a nice GPX file that I can push to my phone.

    What if all you have is a printed map, or a scan of an old map from the pre-digital days? That's part 2, which I'll post in a few days.

    August 11, 2016

    LVFS has a new CDN

    Now that we’re hitting cough Cough COUGH[1] million users a month, the LVFS is getting slower and slower. It’s really just a flask app that’s handling the admin panel and then apache is serving a set of small files to a lot of people. As switching to a HA server is taking longer than I hoped[2], I’m in the process of switching to using S3 as a CDN to take the load off. I’ve pushed a commit that changes the default in the fwupd.conf file. If you want to help test this, you can do a substitution of secure-lvfs.rhcloud.com to s3.amazonaws.com/lvfsbucket in /etc/fwupd.conf, although the old CDN will be running for a long time indeed for compatibility.
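
    One way to make that substitution, if you want it (sed keeps a backup copy in /etc/fwupd.conf.bak):

    sudo sed -i.bak 's|secure-lvfs.rhcloud.com|s3.amazonaws.com/lvfsbucket|' /etc/fwupd.conf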

    1. Various vendors have sworn me to secrecy
    2. I can’t believe GPGME and python-gpg is the best we have…

    Flatpak cross-compilation support

    A couple of weeks ago, I hinted at a presentation that I wanted to do during this year's GUADEC, as a Lightning talk.

    Unfortunately, I didn't get a chance to finish the work that I set out to do, encountering a couple of bugs that set me back. Hopefully this will get resolved post-GUADEC, so you can expect some announcements later on in the year.

    At least one of the tasks I set out to do worked out, and was promptly obsoleted by a nicer solution. Let's dive in.

    How to compile for a different architecture

    There are four possible solutions to compile programs for a different architecture:

    • Native compilation: get a machine of that architecture, install your development packages, and compile. This is nice when you have fast machines with plenty of RAM to compile on, usually developer boards, not so good when you target low-power devices.
    • Cross-compilation: install a version of GCC and friends that runs on your machine's architecture, but produces binaries for your target one. This is usually fast, but you won't be able to run the binaries created, so might end up with some data created from a different set of options, and won't be able to run the generated test suite.
    • Virtual Machine: you'd run a virtual machine for the target architecture, install an OS, and build everything. This is slower than cross-compilation, but avoids the problems you'd see in cross-compilation.
    The final option is one that's used more and more, mixing the last 2 solutions: the QEmu user-space emulator.

    Using the QEMU user-space emulator

    If you want to run just the one command, you'd do something like:

    qemu-arm-static myarmbinary

    Easy enough, but hardly something you want to try when compiling a whole application, with library dependencies. This is where binfmt support in Linux comes into play. Register the ELF format for your target with that user-space emulator, and you can run myarmbinary without any commands before it.
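
    If your distribution's qemu-user-static packaging has already registered the handlers, you can check what's there and then run the binary directly (a rough sketch; the paths assume binfmt_misc is mounted in the usual place):

    ls /proc/sys/fs/binfmt_misc/            # look for a qemu-arm entry
    cat /proc/sys/fs/binfmt_misc/qemu-arm   # shows the interpreter path and the ELF magic
    ./myarmbinary                           # now runs transparently through the emulator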

    One thing to note, though, is that this won't work as easily if the qemu user-space emulator and the target executable are built as dynamic executables: QEmu will need to find the libraries for your architecture, usually x86-64, to launch itself, and the emulated binary will also need to find its libraries.

    To solve that first problem, there are QEmu static binaries available in a number of distributions (Fedora support is coming). For the second one, the easiest would be if we didn't have to mix native and target libraries on the filesystem, in a chroot, or container for example. Hmm, container you say.

    Running QEmu user-space emulator in a container

    We have our statically compiled QEmu, a filesystem with our target binaries, and we've switched the root filesystem. Well, you try to run anything, and you get a bunch of errors. The problem is that there is a single binfmt configuration for the kernel, whether it's the normal OS, or inside a container or chroot.

    The Flatpak hack

    This commit for Flatpak works-around the problem. The binary for the emulator needs to have the right path, so it can be found within the chroot'ed environment, and it will need to be copied there so it is accessible too, which is what this patch will do for you.

    Follow the instructions in the commit, and test it out with this Flatpak script for GNU Hello.

    $ TARGET=arm ./build.sh
    [...]
    $ ls org.gnu.hello.arm.xdgapp
    918k org.gnu.hello.arm.xdgapp

    Ready to install on your device!

    The proper way

    The above solution was built before it looked like the "proper way" was going to find its way into the upstream kernel. This should hopefully land in the upcoming 4.8 kernel.

    Instead of launching a separate binary for each non-native invocation, this patchset allows the kernel to keep the binary opened, so it doesn't need to be copied to the container.

    In short

    With the work being done on Fedora's static QEmu user-space emulators, and the kernel feature that will land, we should be able to have a nice tickbox in Builder to build for any of the targets supported by QEmu.

    Get cross-compiling!

    Adding suggestions to AppData files

    An oft-requested feature is to show suggestions for other apps to install. This is useful if the apps are part of a larger suite of applications, or if the apps are in some way complementary to each other. A good example might be that we want to recommend libreoffice-writer when the user is looking at the details of (or perhaps has just installed) libreoffice-calc.

    At the moment we haven’t got any UI using this kind of data, as, simply put, there isn’t much data to use. Using the ODRS I can kinda correlate things that the same people look at (i.e. user A got review for B and C, so B+C are possibly related) but it’s not as good as actual upstream information.

    Those familiar with my history will be unsurprised: AppData to the rescue! By adding lines like this in the foo.appdata.xml file you can provide some information to the software center:

    <suggests>
    <id>libreoffice-draw.desktop</id>
    <id>libreoffice-calc.desktop</id>
    </suggests>

    You don’t have to specify the parent app (e.g. libreoffice-writer.desktop in this case), and <id> is the only tag that’s accepted. If an <id> isn’t found in the AppStream metadata then it’s just ignored, so it’s quite safe to add things that might not be in stable distros.

    If enough upstreams do this then we can look at what UI makes sense. If you make use of this feature, please let me know and I can make sure we discuss the use-case in the design discussions.

    August 10, 2016

    Double Rainbow, with Hummingbirds

    A couple of days ago we had a spectacular afternoon double rainbow. I was out planting grama grass seeds, hoping to take advantage of a rainy week, but I cut the planting short to run up and get my camera.

    [Double rainbow]

    [Hummingbirds and rainbow] And then after shooting rainbow shots with the fisheye lens, it occurred to me that I could switch to the zoom and take some hummingbird shots with the rainbow in the background. How often do you get a chance to do that? (Not to mention a great excuse not to go back to planting grass seeds.)

    (Actually, here, it isn't all that uncommon since we get a lot of afternoon rainbows. But it's the first time I thought of trying it.)

    Focus is always chancy when you're standing next to the feeder, waiting for birds to fly by and shooting whatever you can. Next time maybe I'll have time to set up a tripod and remote shutter release. But I was pretty happy with what I got.

    Photos: Double rainbow, with hummingbirds.

    August 09, 2016

    compressing dynamic range with exposure fusion

    modern sensors capture an astonishing dynamic range, for instance some sony sensors, or canons with magic lantern's dual iso feature.

    this is in a range where the image has to be processed carefully to display it in pleasing ways on a monitor, let alone the limited dynamic range of print media.

    example images

    use graduated density filter to brighten foreground

    [original | graduated density filter]

    using the graduated density iop works well in this case since the horizon here is more or less straight, so we can easily mask it out with a simple gradient in the graduated density module. now what if the objects can't be masked out so easily?

    more complex example

    this image needed to be substantially underexposed in order not to clip the interesting highlight detail in the clouds.

    original image, then extreme settings in the shadows and highlights iop (heavy fringing despite bilateral filter used for smoothing). also note how the shadow detail is still very dark. third one is tone mapped (drago) and fourth is default darktable processing with +6ev exposure.

    original

    shadows/highlights

    tonemap

    +6ev

    tone mapping also flattens a lot of detail, which is why this version already has some local contrast enhancement applied to it. this can quickly result in unnatural results. similar applies to colour saturation (for reasons of good taste, no link to examples at this point..).

    the last image in the set is just a regular default base curve pushed by six stops using the exposure module. the green colours of the grass look much more natural than in any of the other approaches taken so far (including graduated density filters; these need some fiddling with colour saturation..). unfortunately we lose a lot of detail in the highlights (to say the least).

    this can be observed for most images, here is another example (original, then pushed +6ev):

    original

    +6ev

    exposure fusion

    this is precisely the motivation behind the great paper entitled Exposure Fusion: what if we develop the image a couple of times, each time exposing for a different feature (highlights, mid-tones, shadows), and then merge the results where they look best?

    this has been available in software for a while through enfuse, even with a gui called EnfuseGUI. now we have this feature in darktable, too.

    find the new fusion combo box in the darktable base curve module:

    gui

    options are to merge the image with itself two or three times. each extra copy of the image will be boosted by an additional three stops (+3ev and +6ev), then the base curve will be applied to it and the laplacian pyramids of the resulting images will be merged.
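    for a rough feeling of what such a fusion does, the same idea is available outside darktable as well: the sketch below (an approximation, not darktable's base curve code) fakes the +3ev and +6ev variants from a single input and merges them with opencv's implementation of mertens' exposure fusion:

    # sketch only: exposure fusion via opencv's MergeMertens, not darktable's implementation.
    # the +3ev/+6ev copies are faked here by scaling one 8-bit input image.
    import cv2
    import numpy as np

    img = cv2.imread("input.jpg").astype(np.float32) / 255.0   # hypothetical file name
    exposures = []
    for ev in (0, 3, 6):
        boosted = np.clip(img * (2.0 ** ev), 0.0, 1.0)          # push by ev stops and clip
        exposures.append((boosted * 255).astype(np.uint8))

    fusion = cv2.createMergeMertens()                           # laplacian-pyramid based fusion
    result = fusion.process(exposures)                          # float result, roughly in [0, 1]
    cv2.imwrite("fused.jpg", np.clip(result * 255, 0, 255).astype(np.uint8))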

    results

    this is a list of input images and the corresponding result of exposure fusion:

    0ev,+3ev,+6ev:

    original

    0ev,+3ev,+6ev

    0ev,+3ev:

    original

    0ev,+3ev

    0ev,+3ev,+6ev:

    original

    0ev,+3ev,+6ev

    0ev,+3ev,+6ev:

    original

    fusion

    0ev,+3ev:

    original

    fusion

    conclusion

    image from beginning:

    fusion

    note that the feature is currently merged to git master, but unreleased.

    links

    Blog backlog, Post 4, Headset fixes for Dell machines

    At the bottom of the release notes for GNOME 3.20, you might have seen the line:
    If you plug in an audio device (such as a headset, headphones or microphone) and it cannot be identified, you will now be asked what kind of device it is. This addresses an issue that prevented headsets and microphones being used on many Dell computers.
    Before I start explaining what this does, as a picture is worth a thousand words:


    This selection dialogue is one you will get on some laptops and desktop machines when the hardware is not able to detect whether the plugged in device is headphones, a microphone, or a combination of both, probably because it doesn't have an impedance detection circuit to figure that out.

    This functionality was integrated into Unity's gnome-settings-daemon version a couple of years ago, written by David Henningsson.

    The code that existed for this functionality was completely independent, not using any of the facilities available in the media-keys plugin for volume keys, and it could probably have been split out as an external binary with very little effort.

    After a bit of to and fro, most of the sound backend functionality was merged into libgnome-volume-control, leaving just 2 entry points, one to signal that something was plugged into the jack, and another to select which type of device was plugged in, in response to the user selection. This means that the functionality should be easily implementable in other desktop environments that use libgnome-volume-control to interact with PulseAudio.

    Many thanks to David Henningsson for the original code, and his help integrating the functionality into GNOME, Bednet for providing hardware to test and maintain this functionality, and Allan, Florian and Rui for working on the UI notification part of the functionality, and wiring it all up after I abandoned them to go on holidays ;)

    August 07, 2016

    SIGGRAPH 2016 report

    Anaheim, 23 – 28 July 2016

    This year marked the 25th anniversary of my SIGGRAPH membership (I have been a proud member since ’91)! It was also my 18th visit in a row to the annual convention (since ’99). We didn’t have a booth at the trade show this year though – expenses are so high! Since 2002 we have exhibited 7 times; we skipped years more often, but since 2011 we were there every year. The positive side of not exhibiting was that I finally had time and energy to have meetings and participate in other events.

    Friday 22 – Saturday 23: Toronto


    But first: an unexpected last minute change in the planning. Originally I was going to Anaheim to also meet with the owners of Tangent Animation about their (near 100% Blender) feature film studio. Instead they suggested it would be much more practical to rebook my flight and have a day stopover in Toronto to see the studio and have more time to meet.

    I spent two half days with them, and I was really blown away by the work they do there. I saw the opening 10 minutes of their current feature film (“Run Ozzy Run”). The film is nearly finished, currently being processed for grading and sound. The character designs are adorable, the story is engaging and funny, and they pulled off surprisingly good animation and visuals – especially knowing it’s still a low budget project made with all the constraints associated with that. And they used Blender! It is very impressive how they managed to make quite massive scenes work. They hired a good team of technical artists and developers to support them. Their Cycles coder is a former Mental-Ray engineer, who will become a frequent contributor to Cycles.

    I also had a sneak peek at the excellent concept art of the new feature that’s in development – more budget, and even more ambitious. For that project they offered to invest substantially in Blender; we spent the 2nd day outlining a deal. In short:

    • Tangent will sponsor two developers to work in Blender Institute on 2.8 targets (defined by us)
    • Tangent will sponsor one Cycles developer, either to work in Blender Institute or in Toronto.
    • All of these are full-time and decently paid positions, for at least 1 year. They can be effective in September.

    Sunday 24: SIGGRAPH Anaheim

    2 PM: Blender Birds of a Feather, community meeting

    As usual we started the meeting by giving everyone a short moment to say who they are and what they do with Blender (or want to see happen). This took 25+ minutes! There were visitors from Boeing, BMW, Pixar, Autodesk, Microsoft, etc.

    The rest of the time I did my usual presentation (talk about who we are, what we did last year, and the plans for next year).

    You can download the pdf of the slides here.

    3:30 PM : Blender Birds of a Feather, Spotlight event

    Theory Animation’s David Andrade offered to organise this ‘open stage’ event, giving artists or developers 5 minutes of time to show the work they did with Blender. It was great to see this organised so well! There was a huge line-up, lasting 90 minutes even. Some highlights from my memory:

    • Theory Animation showed work they did for the famous TV show “Silicon Valley”. The hilarious “Pipey” animation is theirs.
    • Sean Kennedy is doing a lot of Blender vfx for tv series. Amazing work (can’t share here, sorry), and he gave a warm plea for more development attention for the Compositor in Blender.
    • Director Stephen Norrington (Blade, League of Extraordinary Gentlemen) is using Blender! He showed vfx work he did for a stop motion / puppet short film.
    • JT Nelson showed results of Martin Felke’s Blender Fracture Branch. Example.
    • Nimble Collective premiered their first “Animal Facts” short, The Chicken.

    Afterwards we went for drinks and food to one of the many bar/restaurants close by. (Well close, on the map it looked like 2 blocks, but in Anaheim these blocks were half a mile! Made the beer taste even better though :)

    Monday 25: the SIGGRAPH Animation Festival, Jury Award!

    Selfie with badge + ribbon

    Aside from all these interesting encounters you can have in LA (I met with people from Paramount Animation), the absolute highlight of Monday was picking up the Jury prize for Cosmos Laundromat. Still vividly remembering struggling with the basics of CG 25 years ago, I never thought I’d be cheered on and applauded by 1000+ people in the SIGGRAPH Electronic Theater!

    Clearly the award is not just mine, it’s for director Mathieu Auvray and writer Esther Wouda, the team of artists and developers who worked on the film, and most of all for everyone who contributed to Blender and to Blender Cloud in one way or another.

    But wait… the surprises weren’t over for that day. I sneaked away from the festival screening and went to AMD’s launch party. I was pleasantly surprised to watch corporate VP Roy Taylor spend quite some time talking about Blender, ending with “We love Blender, we love the Blender Community!” AMD is very serious about focusing on 3D creators online, to serve the creative CG communities, of which Blender users are now one of the biggest. If only AMD could win back the hearts of Blender artists…

    Theory Animation guys!

    After the event I met with Roy Taylor; he confirmed the support they already give to Blender developer Mike Erwin (to upgrade OpenGL). Roy said AMD is committed to helping us in many more ways, so I asked for one more full-time Cycles coder. Deal! Support for one year of a full-time developer on Cycles to finish the ‘OpenCL split kernel’ project is being processed now. I’ll be busy hiring people in the coming period!

    Later in the evening I met with several Blender artists. I handed the award over to them to show my appreciation. Big fun :)

    Tuesday 26 – Wednesday 27, SIGGRAPH tradeshow and meetings

    Not having a booth was a blessing (at least for once!). I could freely move around and plan the days with meetings and time to attend the activities outside of the trade show as well. Here’s a summary of activities and highlights:

    • Tradeshow impression
      This year’s show seemed a bit smaller than last year, but on both days it felt crowded in most places; attendance was very good. The best highlights are still the presentations by artists showing their work at the larger booths such as Nvidia or Foundry. The visit was also worth it for having an original Vive experience. Google’s Tango was there, but the marketing team failed to impress demoing it – 3d scanning the booth failed completely every time (don’t put tv screens on walls if you want to scan!).
    • Pixar USD launch lunch
      Pixar presented the official launch of the Universal Scene Description format, a set of formats with a software library to manage your entire pipeline. The USD design is very inviting for Blender to align well with – we already share some of the core design decisions, but USD is quite a bit more advanced. It will be interesting to see whether USD will be used for pipeline IO (file exchange) among applications as well.
    • Autodesk meeting
      Autodesk has appointed a director of open source strategy; he couldn’t attend but connected me with Marc Stevens and Chris Vienneau, executives in the M&E department. They also brought in Arnold’s creator Marcos Fajardo.
      Marcos expressed their interest in having Arnold support for Blender. We discussed the (legal, licensing) technicalities of this a bit more, but as long as they stick to data transport between the programs (like PRman and VRay do now using Blender’s render API) there’s no issue. With Marc and Chris I had a lengthy discussion about Autodesk’s (lack of) commitment to open source and openly accessible production pipelines. They said that Autodesk is changing their strategy though, and they will show this by actively sharing sources or participating in open projects as well. I invited them to publish the FBX spec doc (it needs to get blessings from the board, but they’ll try) and to work with Pixar on getting the character module for USD fleshed out (make it work for Maya + Max, under an open license). The latter suggestion was met with quite some enthusiasm. It would make the whole FBX issue mostly go away.
    • Nvidia
      It was very cool to meet with Ross Cunniff, Technology Lead at NVIDIA. He is nice, down-to-earth and practical. With his connections it’ll be easier to get a regular seed of GTX cards to developers. I’ve asked for a handful of 1080s right away! Nvidia will also actively work on getting Blender Cycles files into the official benchmarking suites.
    • Massive Software
      David Andrade (Theory Animation) set up a meeting with me and industry legend Stephen Regelous, founder of Massive Software and the genius behind the epic Lord of the Rings battle scenes. Stephen said that at Massive user meetings there’s an increasing demand for Blender support. He explained to me how they do it; basically everything’s low poly and usually gets rendered in one pass! The Massive API has a hook into the render engine to generate the geometry on the fly, to prevent huge file or caching bottlenecks. In order to get this working for Blender Cycles, a similar hook would have to be written. They currently don’t have the engineers to do this, but they’d be happy to support someone for it.
    • Khronos
      I further attended the WebGL meeting (with demos by the Blend4web team) and the Khronos party. It was big fun, with a lot of Blender users and fans there! The Khronos initiative remains incredibly important – they are keeping graphics standards open (like OpenGL, glTF) and making innovation available for everyone (WebGL and Vulkan).

    Friday 29, San Francisco and Bay Area

    Wednesday evening and Thursday I took my time driving the touristic route north to San Francisco. I wanted to meet some friends there (loyal Blender supporter David Jeske, director/layout artist Colin Levy, CG industry consultants Jon and Kathleen Peddie, Google engineer Keir Mierle) and visit two business contacts.

    • Nimble Collective
      Located in a lovely office in Mountain View (it looks like it’s always sunny and pleasant there!), this startup is also heavily investing in Blender and using it for a couple of short film projects. I’ll leave it to them to release the info on the films :) but it’s going to be amazingly good! I also had a demo of their platform, which is like a ‘virtual’ animation production workstation that you can use in a browser. The Blender demo on their platform felt very responsive, including fast Cycles renders.
      The visit ended with participating in their “weekly”. Just like the Blender Institute weekly! An encouraging and enthusiastic gathering to celebrate results and the work that’s been done.
    • Netflix
      The technical department at Netflix contacted us a while ago; they were looking for high quality HDR content to do streaming and other tests. We then sent them the OpenEXR files of Cosmos Laundromat, which are unclipped, high resolution colour. Netflix took them to a specialist HDR grading company and showed me the result – M I N D blowing! Really awesome to see how the dynamics of Cycles renders (like the hard morning light) work on a screen that allows a dynamic ‘more than white’ display. Cosmos Laundromat is now on Netflix, as one of the first HDR films.
      We then discussed how Netflix could do more with our work. Obviously they’re happy to share the graded HDR film, but they’re especially interested in getting more content – especially in 4k. A proposal for sponsoring our work is being evaluated internally now.

    Sunday 31 July, Back home

    I was gone for 9 days, with 24 hours spent in airplanes. But it was worth it :) Jetlag usually kicks in then; it took a week to resolve. In the coming weeks there’s a lot of work waiting, especially setting up all the projects around Blender 2.8. A new design/planning doc on 2.8 is the first priority.

    Please feel invited to discuss the topics in our channels and talk to me in person in IRC about Blender 2.8 and Cycles development work. Or send me a mail with feedback. That’s ton at blender.org, as usual.

    Ton Roosendaal
    August 7, 2016

    August 06, 2016

    Adding a Back button in Python Webkit-GTK

    I have a little browser script in Python, called quickbrowse, based on Python-Webkit-GTK. I use it for things like quickly calling up an anonymous window with full javascript and cookies, for when I hit a page that doesn't work with Firefox and privacy blocking; and as a quick solution for calling up HTML conversions of doc and pdf email attachments.

    Python-webkit comes with a simple browser as an example -- on Debian it's installed in /usr/share/doc/python-webkit/examples/browser.py. But it's very minimal, and lacks important basic features like command-line arguments. One of those basic features I've been meaning to add is Back and Forward buttons.

    Should be easy, right? Of course webkit has a go_back() method, so I just have to add a button and call that, right? Ha. It turned out to be a lot more difficult than I expected, and although I found a fair number of pages asking about it, I didn't find many working examples. So here's how to do it.

    Add a toolbar button

    In the WebToolbar class (derived from gtk.Toolbar): In __init__(), after initializing the parent class and before creating the location text entry (assuming you want your buttons left of the location bar), create the two buttons:

            backButton = gtk.ToolButton(gtk.STOCK_GO_BACK)
            backButton.connect("clicked", self.back_cb)
            self.insert(backButton, -1)
            backButton.show()
    
            forwardButton = gtk.ToolButton(gtk.STOCK_GO_FORWARD)
            forwardButton.connect("clicked", self.forward_cb)
            self.insert(forwardButton, -1)
            forwardButton.show()
    

    Now create those callbacks you just referenced:

        def back_cb(self, w):
            self.emit("go-back-requested")

        def forward_cb(self, w):
            self.emit("go-forward-requested")
    

    That's right, you can't just call go_back on the web view, because GtkToolbar doesn't know anything about the window containing it. All it can do is pass signals up the chain.

    But wait -- it can't even pass signals unless you define them. There's a __gsignals__ object defined at the beginning of the class that needs all its signals spelled out. In this case, what you need is

           "go-back-requested": (gobject.SIGNAL_RUN_FIRST,
                                  gobject.TYPE_NONE, ()),
           "go-forward-requested": (gobject.SIGNAL_RUN_FIRST,
                                  gobject.TYPE_NONE, ()),
    
    Now these signals will bubble up to the window containing the toolbar.

    Handle the signals in the containing window

    So now you have to handle those signals in the window. In WebBrowserWindow (derived from gtk.Window), in __init__ after creating the toolbar:

            toolbar.connect("go-back-requested", self.go_back_requested_cb,
                            self.content_tabs)
            toolbar.connect("go-forward-requested", self.go_forward_requested_cb,
                            self.content_tabs)
    

    And then of course you have to define those callbacks:

    def go_back_requested_cb (self, widget, content_pane):
        # Oops! What goes here?
    def go_forward_requested_cb (self, widget, content_pane):
        # Oops! What goes here?
    

    But whoops! What do we put there? It turns out that WebBrowserWindow has no better idea than WebToolbar did of where its content is or how to tell it to go back or forward. What it does have is a ContentPane (derived from gtk.Notebook), which is basically just a container with no exposed methods that have anything to do with web browsing.

    Get the BrowserView for the current tab

    Fortunately we can fix that. In ContentPane, you can get the current page (meaning the current browser tab, in this case); and each page has a child, which turns out to be a BrowserView. So you can add this function to ContentPane to help other classes get the current BrowserView:

        def current_view(self):
            return self.get_nth_page(self.get_current_page()).get_child()
    

    And now, using that, we can define those callbacks in WebBrowserWindow:

    def go_back_requested_cb (self, widget, content_pane):
        content_pane.current_view().go_back()
    def go_forward_requested_cb (self, widget, content_pane):
        content_pane.current_view().go_forward()
    

    Whew! That's a lot of steps for something I thought was going to be just adding two buttons and two callbacks.

    August 03, 2016

    The Fedora Design Team’s Inkscape/Badges Workshop!

    Fedora Design Team Logo

    This past weekend, the Fedora Design Team held an Inkscape and Fedora Badges workshop at Red Hat’s office in Westford, Massachusetts. (You can see our public announcement here.)

    Badges Workshop

    Why did the Fedora Design Team hold this event?

    At our January 2015 FAD, one of the major themes of things we wanted to do as a team was outreach, to both help teach Fedora and the FLOSS creative tools set as a platform for would-be future designers, as well as to bring more designers into our team. We planned to do a badges workshop at some future point to try to achieve that goal, and this workshop (which was part of a longer Design FAD event I’ll detail in another post) was it. We collectively feel that designing artwork for badges is a great “gateway contribution” for Fedora contributors because:

    • The badges artwork standards and process is extremely well-documented.
    • The artwork for a badge is a small, atomic unit of contribution that does not take up too much of a contributor’s time to create.
    • Badges individually touch on varying areas of the Fedora project, so by making a single badge you could learn (in a rather gentle way) how a particular aspect of how the Fedora project works (as a first step towards learning more about Fedora.)
    • The process of creating badge artwork and submitting it from start to finish is achievable during a one-day event, and being able to walk away from such an event having submitted your first open source contribution is pretty motivating!

    This is the first event of this kind the Fedora Design team has held, and perhaps the first of its kind for any Fedora group. We aimed for a general, local community audience rather than attaching this event to a larger technology-focused conference or release party. We explicitly wanted to bring folks not currently affiliated with Fedora or even the open source community into our world with this event.

    Preparing for the event

    Photo of event handouts

    There was a lot we had to do in order to prepare for this event. Here’s a rough breakdown:

    Marketing (AKA getting people to show up!)

    We wanted to outreach to folks in the general area of Red Hat’s Westford Office. Originally, we had wanted to have the event located closer to Boston and partner with a university, but for various reasons we needed to have this event in the summer – a poor time for recruiting university students. Red Hat Westford graciously offered us space for free, but without something like a university community, we weren’t sure how to go about advertising the event to get people to sign up.

    Here’s what we ended up doing:

    • We created an event page on EventBrite (free to use for free events.) That gave us a bit of marketing exposure – we got 2 signups from EventBrite referrals. The site also helped us with event logistics (see the next section for more on that.)
    • We advertised the event on Red Hat’s Westford employee list – Red Hat has local office mailing lists for each office, so we advertised the event on there asking area employees to spread the word about the event to friends and family. We got many referrals this way.
    • We advertised the event on a public Westford community Facebook page – I don’t know about other areas, but in the Boston area, many of the individual towns have public town bulletin boards set up as Facebook groups, and event listings are allowed and even encouraged on many of these sites. I was able to get access to one of the more popular Westford groups and posted about our event there – first about a month out, then a reminder the week before. We received a number of referrals this way as well.

    Photo of the event

    Logistics

    We had to formally reserve the space, and also figure out how many people were coming so we knew how much and what kinds of food to order – among many other little logistical things. Here’s how we tackled that:

    • Booking the space – I filed a ticket with Red Hat’s Global Workplace Services group to book the space. We decided to open up 30 slots for the workshop, which required booking two conference rooms on the first floor of the office (generally considered the space we offer for public events) and also requesting those rooms be set up classroom-style with a partition opened up between them to make one large classroom. The GWS team was easy to work with and a huge help in making things run smoothly.
    • Managing headcount – As mentioned earlier, we set up an EventBrite page for the event, which allowed us to set up the 30 slots and allow people to sign up to reserve a slot in the class. This was extremely helpful in the days leading up to the event, because it provided me a final head count for ordering food and also a way to communicate with attendees before the event (as registration requires providing an email address.) We had a last-minute cancellation of two slots, and we were able to push out information to the three channels we’d marketed the event to and get those slots filled the day before the event so we had a full house day of.
    • Ordering food – I called the day before the event to order the food. We went with a local Italian place that did delivery and ordered pizzas and soda for the guests and sandwiches / salads for the instructors (I gathered instructor orders right before making the call.) We had a couple of attendees who had special dietary needs, so I made sure to order from a place that could accommodate.
    • Session video recording – During the event, we used BlueJeans to wirelessly project our slides to the projectors. Consequently, recordings were also taken of the sessions. On my to-do list is to edit those down to just the useful bits and post them, sending the link to attendees.
    • Surveying attendees – After the event, Event Brite helpfully allowed us to send out a survey (via Survey Monkey) to the attendees to see how it went.
    • Making slides available – Several attendees asked for us to send out the slides we used (I just sent them out this afternoon, and have provided them here as well!)
    • Getting permission – I knew we were going to be writing up an event report like this, so I did get the permission/consent of everyone in the room before taking pictures and hitting record on the BlueJeans session.
    • Parking / Access – I realized too late that we probably should have provided parking information up front to attendees, but luckily it was pretty straightforward and we had plenty of spots up front. Radhika helpfully stood by the front entrance as attendees arrived to allow them in the front door and escort them to the classroom.
    • Audio/Video training – Red Hat somewhat recently got a new A/V system I wasn’t familiar with, and there are specific things you need to know about getting the two projectors in the two rooms in sync when the partition is open, so I was lucky to book a meeting with one of Red Hat’s extremely helpful media folks to meet with me the day before and teach me how to run the A/V system.


    Inkscape / Badges Prep Work

    We also needed to prepare for the sessions themselves, of course:

    • Working out an agenda – We talked about the agenda for the event on our mailing list as well as during team meetings, but the rough agenda was basically to offer an Inkscape install fest followed by a basic Inkscape class (mizmo), run through an Inkscape tutorial (gnokii), and then do a badges workshop (riecatnor & mleonova.) We’ll talk about how well this worked later in this post. 🙂
    • Prepare slides / talking points – riecatnor, mleonova, and myself prepped some slides for our sessions; gnokii prepared a tutorial.
    • Prepare handouts – You can see in one of the photos above that we provided attendees with handouts. There were two keyboard shortcut printouts – one for basic / most frequently used ones, the other a more extended / full list we found provided by Michael van der Nest. We also provided a help sheet on how to install Inkscape. We printed them the morning of and distributed them at each seat in the classroom.
    • Prepare badges – riecatnor and mleonova very carefully combed through open badge requests in need of artwork and put together a list of those most appropriate for newbies, filling in ideas for artwork concepts and tips/hints for the would-be badgers who’d pick up the tickets at the event. They also provided the list of ticket numbers for these badges on the whiteboard at the event.

    Marie explaining the anatomy of a badge

    The Agenda / Materials

    Here’s a rough outline of our agenda, with planned and actual times:

    Here are the materials we used:

    As mentioned elsewhere in this post, we did record the sessions, but I’ve got to go through the recordings to see how usable they are and edit them down if they are. I’ll do another post if that’s the case with links to the videos.

    How did the event go?

    Unfortunately, despite our best efforts (and a massive amount of prep work), I don’t think any of us would qualify the event as a home run. We ran into a number of challenges, some of our own (um, mine, actually!) making, some out of our control. That being said, thus far our survey results have been very positive – basically attendees agreed with our self-analysis and felt it was a good-to-very-good, useful event that could have been even better with a few tweaks.

    graph showing attendees rated the presentation good-to-excellent

    The Good

    • Generally attendees enjoyed the sessions and found them useful. As you can see in the chart above, of 8 survey respondents, 2 thought it was excellent, 3 thought it was very good, and 3 thought it was good. I’ll talk more about the survey results later on, but enjoy this respondent’s quote: “I’m an Adobe person and I’ve never used other design softwares, so I’m happy I learned about a free open source software that will help me become more of an asset when I finish college and begin looking for a career.”
    • The event was sold out – interest in what we had to say and teach was high! We had all 30 slots filled over a week before the event; when we had 2 last-minute dropouts, we were able to quickly re-fill those slots. I don’t know if every single person who signed up attended, but we weren’t left with any extra seats in the room at the peak of attendance.
    • The A/V system worked well. We had a couple of mysterious drops from BlueJeans that led to some furious reconnecting to continue the presentation, but overall, our A/V setup worked well.
    • The food was good. There was something to eat for everyone, and it all arrived on time. For close to 40 people, it cost $190. This included 11 pizzas (9 large, 2 medium gluten free), 4 salads, 2 sandwiches, and 5 2-liter bottles of soda. (Roughly $5.30/person.) Maybe a silly point to make, but food is important too, especially since the event ran right through lunch (10 AM – 3 PM.)
    • We didn’t frighten newbies away (at least, not right away.) About half of the attendees came with Inkscape preinstalled, half didn’t. We divided them into different halves of the room. The non-preinstallers (who we classified as “newbies,”) stayed until a little past lunch, which I consider a victory – they were able to follow at least the first long session, stayed for food, and completed most of gnokii’s tutorial.
    • Inkscape worked great, even cross-platform. Inkscape worked like a champ – there were no catastrophic crashes and generally people seemed to enjoy using it. We had everyone installed by about 20 minutes into the first session – one OS X laptop had some issues due to some settings in the OS X control panel relating to XQuartz, but we were able to solve them. Everyone left the event with a working copy of Inkscape on their system! I would guesstimate we had about 1/3 OS X, 1/3 Windows, and 1/3 Linux machines (the latter RH employees + family mostly. 🙂 )
    • No hardware issues. We instructed attendees to bring their own hardware and all did, with the exception of one attendee who contacted me ahead of time – I was able to arrange to provide a loaner laptop for her. Some folks forgot to bring a computer mouse and I had enough available to lend.

    survey results about event length - too long

    The Bad

    • We ran too long. We originally planned the workshop to last from 10 AM to 2 PM. We actually ran until about 4 PM, although around 1:30 we had officially extended the end time to 3 PM with everyone in the room’s consent. This is almost entirely my fault; I covered the Inkscape Bootcamp slides too slowly. We had a range of skill levels in the room, and while I was able to keep the newbies on board during my session, the more advanced folks were bored until gnokii ran his (much more advanced) tutorial. The survey results also provided evidence for this, as folks felt the event ran too long and some respondents felt it moved too slow, others too fast.
    • We covered too much material. Going hand-in-hand with running too long, we also tried to do too much. We tried to provide instruction for everyone from the absolute beginner, to Adobe convert, to more experienced attendee, and lost folks along the way as the pacing and level of detail needed for each different audience is too different to pull off successfully in one event. In our post-event session, the Fedora Design Team members running the event agreed we should cut a lot of the basic Inkscape instruction and instead focus on badges as the conduit for more (perhaps one-on-one lab session style) Inkscape instruction to better focus the event.
    • We lost people after lunch. We lost about half of our attendees not long after lunch. I believe this is for a number of reasons, not the least of which we covered so much material to start, they simply needed to go decompress (one survey respondent: “I ended up having to leave before the badges part because my brain hurt from the button tutorial. Maybe don’t do quite so many things next time?”) Another interesting thing to note is the half of the room that was less experienced (they didn’t come with Inkscape pre-installed and along the way tended to need more instructor help,) is the half that pretty much cleared out, while the more experienced half of the room was still full by the official end of the event. This helps support the notion that the newbies were overwhelmed and the more experienced folks hungry for more information.
    • FAS account creation was painful. We should have given the Fedora admins a heads up that we’d be signing 30 folks up for FAS accounts all at the same time – we didn’t, oops! Luckily we got in touch via IRC, so folks were finally able to sign up for accounts without being blocked due to getting flagged as potential spammers. The general workflow for FAS account signup (as we all know) is really clunky and definitely made things more difficult than it needed to be.
    • We should have been more clear about the agenda / had slides available. This one came up multiple times on the survey – folks wanted a local copy of the slides / agenda at the event so when they got lost they could try to help themselves. We were surprised by how unwilling folks seemed to be to ask for help, despite our attempts to set a laid back, audience-participation heavy environment. In chatting with some of the attendees over lunch and after the event, both newbie and experienced folks expressed a desire to avoid ‘slowing everybody else down’ by asking a question and wanting to try to ‘figure it out myself first.’
    • No OSD keypress guides. We forgot to run an app that showed our keypresses while we demoed stuff, which would have made our instructions easier to follow. One of the survey respondents pointed this one out.
    • We didn’t have name badges Another survey comment – we weren’t wearing name badges and our names weren’t written anywhere, so some folks forgot our names and didn’t know how to call for us.
    • We weren’t super-organized on assisting folks around the room. We should have set a game plan before starting and assigned some of the other staff to stand in particular corners of the room and kind of assign them that area to help people one-on-one. This would have helped because as just mentioned, people were reluctant to ask for help. Pacing behind them as they worked and taking note of their screens when they seemed stuck and offering help worked well.

    Workshop participants working on their projects

    Survey results so far

    Thus far we’ve had 8 respondents out of the 30 attendees, which is actually not an awful response rate. Here’s a quick rundown of the results:

    1. How likely is it that you would recommend the event to a friend or colleague? 2 detractors, 3 passives, 2 promoters; net promoter score 0 (eek)
    2. Overall, how would you rate the event? Excellent (2), Very Good (3), Good (3), Fair (0), Poor (0)
    3. What did you like about the event? This was a freeform text field. Some responses:
      • “I think the individuals running the event did a great job catering to the inexperience of some of the audience members. The guy that ran the button making lab was incredibly knowledgeable and he helped me learn a lot of new tools in a software I’ve never used before that I may not have found on my own.”
      • “The first Inkscape walk through of short cut keys and their use. Presenter was confident, well prepared and easy to follow. Everyone was very helpful later as we tried “Evil Computer” mods with assistance from knowledgeable artists.”
      • “I enjoyed learning about Inkscape. Once I understood all the basic commands it made it very easy to render cool-looking logos.”
      • “It was a good learning experience. It taught me some things about graphics that I did not know.”
    4. What did you dislike about the event? This was a freeform text field. Some responses:
      • “I wish there was more of an agenda that went out. I tried installing Inkscape at my home before going, but I ran into some issues so I went to the office early to get help. Then I found out that the first hour of the workshop was actually designed to help people instal it. It also went much later than originally indicated and although it didn’t bother me, many people left at the time it was supposed to end, therefore not being able to see how to be an open source contributor.”
      • “The button explanation was very fast and confusing. I’m hoping the video helps because I can pause it and looking away for a moment won’t mean I miss something important.”
      • “Hard to follow directions, too fast paced”
      • “The pace was sometimes too slow.”
      • “While the pace felt good, it can be hard to follow what specific keypresses/mouse movements produced an effect on the projector. When it’s time to do it yourself, you may have forgotten or just get confused. A handout outlining the steps for each assignment would have been helpful.”
    5. How organized was the event? Extremely organized (0), Very organized (5), Somewhat organized (3), Not so organized (0), Not at all organized (0)
    6. How friendly was the staff? Extremely friendly (4), Very friendly (4), Somewhat friendly (0), Not so friendly (0), Not at all friendly (0)
    7. How helpful was the staff? Extremely helpful (2), Very helpful (3), Somewhat helpful (3), not so helpful (0), not at all helpful (0).
    8. How much of the information you were hoping to get from this event did you walk away with? All of the information (4), most of the information (2), some of the information (2), a little of the information (0), none of the information (0)
    9. Was the event length too long, too short, or about right? Much too long (0), somewhat too long (3), slightly too long (3), about right (2), slightly too short (0), somewhat too short (0), much too short (0).
    10. Freeform Feedback: Some example things people wrote:
      • “I’m an Adobe person and I’ve never used other design softwares, so I’m happy I learned about a free open source software that will help me become more of an asset when I finish college and begin looking for a career.”
      • “Overall fantastic event. I hope I’m able to find out if another workshop like this is ever held because I’d definitely go.”
      • “If you are willing to make the slides available and focus on tool flow it would help as I am still looking for how BADGE is obtained and distributed.”

    mleonova showing off our badges

    Looking forward!

    Despite some of the hiccups, it is clear attendees got a lot out of the event and enjoyed it. There are a lot of recommendations / suggestions documented in this post for improving the next event, should one of us decide to run another one.

    In general, in our post-event discussion we agreed that future events should have a tighter experience level pre-requisite; for example, absolute beginners tended to like the Inkscape bootcamp material, so maybe have a separate Inkscape bootcamp event for them. The more experienced users enjoyed gnokii’s project-style, fast-paced tutorial and the badges workshop, so having an event that included just that material and had a pre-requisite (perhaps you must be able to install Inkscape on your own and be at least a little comfortable using it) would probably work well.

    Setting a time limit of 3-4 hours and sticking to it, with check-ins, would be ideal. I think an event like this with this many attendees needs 2-3 people minimum running it to work smoothly. If there were 2-3 Fedorans co-located and comfortable with the material, it could be run fairly cheaply; if the facility is free, you could do it for around $200 if you provide food.

    Anyway I hope this event summary is useful, and helps folks run events like this in the future! A big thanks to the Fedora Council for funding the Fedora Design Team FAD and this event!

    July 31, 2016

    New Stellarium User Guide is available

    Dear all,

    while we were working on new features for the 0.15 release, we have also thoroughly reworked the Stellarium User Guide (SUG). This should now include all changes introduced since the 0.12 series and be up-to-date with the 0.15 series. It includes many details about landscape creation, skyculture creation, telescope control, putting your deep-sky photos among the stars, how to start scripting, creation of 3D sceneries for Stellarium, and much more.

    The SUG is now almost 300 pages and available for download as hyperlinked PDF from stellarium.org. It is also packed in the Windows install package download, so you don't need a separate download.

    The online user guide on the wiki will no longer be updated, and may even go away if we do not hear a major outcry from you.

    Clear skies for observing, and now you have something to read for the cloudy nights as well ;-)

    Kind regards,
    Georg

    Stellarium 0.15.0

    In memory of our team member Barry Gerdes.

    Version 0.15.0 is based on Qt5.6. Starting with this version, some graphics cards have been blacklisted by Qt and are automatically forced to use ANGLE on Windows.
    We introduce a major internal change with the StelProperty system.
    This allows simpler access to internal variables and therefore more ways of operation.
    Most notably this version introduces an alternative control option via RemoteControl, a new webserver interface plugin.
    We also introduce another milestone towards providing better astronomical accuracy for historical applications: experimental support of getting planetary positions from JPL DE430 and DE431 ephemerides. This feature is however not fully tested yet.
    The major changes:
    - Added StelProperty system
    - Added new plugin for exhibitions and planetariums - Remote Control
    - Added new skycultures: Macedonian, Ojibwe, Dakota/Lakota/Nakota,
    Kamilaroi/Euahlayi
    - Updated code of plugins
    - Added Bookmarks tool and updated AstroCalc tool
    - Added new functions for Scripting Engine and new scripts
    - Added Miller Cylindrical Projection
    - Added updates and improvements in DSO and star catalogues (including initial support of The Washington Double Star Catalog)
    - Azimuth lines (also targeting geographic locations) in ArchaeoLines plugin
    - Many fixes and improvements...

    In addition, we prepared a new user guide.

    A huge thanks to our community whose contributions help to make Stellarium better!

    Full list of changes:
    - Added getting planetary positions from JPL DE430 and DE431 ephemerides (SoCiS2015 project)
    - Added RemoteControl and preliminary RemoteSync plugins (SoCiS2015 project)
    - Added StelProperty system (SoCiS2015 project)
    - Added immediate saving of settings for plugins (Angle Measure, Archeo Lines, Compass Marks)
    - Added Belarusian translation for landscapes and sky cultures (LP: #1520303)
    - Added Bengali description for landscapes and sky cultures (LP: #1548627)
    - Added new skycultures: Macedonian, Ojibwe, Dakota/Lakota/Nakota, Kamilaroi/Euahlayi
    - Added support for the Off-Axis Guider feature in Oculars plugin (LP: #1354427)
    - Added support for a permanent rotation angle for CCD in Oculars plugin
    - Added type of mount for telescopes in Oculars plugin
    - Added improvements for displaying data in decimal format
    - Added possibility of drawing permanent orbits of the planets (disables hiding of orbits for planets when they are out of the field of view). (LP: #1509674)
    - Added tentative support for screens with 4K resolution for Windows packages (LP: #1372781)
    - Enabled support for side-by-side assembly technology for Windows packages (LP: #1400045)
    - Added CLI options --angle-d3d9, --angle-d3d11, --angle-warp for fine-tuning ANGLE flavour selection on Windows.
    - Added improvements in Stellarium's installer on Windows
    - Added improvements in Telescope Control plugin
    - Added feature to build dependency graphs of various characteristics of exoplanets (Exoplanets plugin)
    - Added support of the proper names for exoplanets and their host stars (Exoplanets plugin)
    - Added improvement for Search Tool
    - Added improvement for scripting engine
    - Added Bayer designations for some stars in Scorpius (LP: #1518437)
    - Added updates and improvements in Stellarium DSO Catalog
    - Added initial support of subset of The Washington Double Star Catalog (LP: #1537449)
    - Added Prime Vertical and Colures lines
    - Added new functions for Scripting Engine
    - Added new DSO textures
    - Finished migration from Phonon to QtMultimedia (LP: #1260108)
    - Added scripting function to block tracking or centering for special installations.
    - Added visualization of ephemerides
    - Added config option for animation speed of pointers (gui/pointer_animation_speed = 1.0)
    - Added implementation of semi-transparent mask in the Oculars plugin (LP: #1511393)
    - Added hiding the halo when inner planet between Sun and observer (or moon between planet and observer) (LP: #1533647)
    - Added a tool to fill in custom settings for the position of the Great Red Spot on Jupiter
    - Added Bookmarks tool (LP: #1106779)
    - Added new scripts: Best objects in the New General Catalog, The Jack Bennett Catalog, Binosky: Deep Sky Objects for Binoculars, Herschel 400 Tour, Binocular Highlights, 20 Fun Naked-Eye Double Stars, List of largest known stars
    - Added Circumpolar Circles (LP: #1590785)
    - Added Miller Cylindrical Projection
    - Allow viewport offset change in scripts.
    - Allow centering zenith or pole via scripting (LP: #1068529)
    - Allow freezing/unfreezing average atmospheric brightness (e.g. for balanced-brightness image export scripts.)
    - Allow saving of output.txt to another file so that it can be read by other programs on Windows while Stellarium is still open.
    - Allow min/max values and wraparound settings for AngleSpinBox
    - Allow configurable speed and script speed buttons
    - Allow storing and retrieval of screen location for StelDialogs (LP: #1249251)
    - Allow polygonal horizons with many negative values (LP: #1554639)
    - Allow altitude-dependent twinkling for stars (LP: #1594065)
    - Allow display of sun's halo if sun is just outside viewport (LP: #1294498)
    - Reconfigure viewDialog GUI to put constellation switches to skylore tab.
    - Limit location coordinate spinboxes to useful coordinates
    - Apply Fluctuations in the Moon's Mean Longitude in DeltaT calculations (Source: Spencer Jones, H., 'The Rotation of the Earth, and the Secular Accelerations of the Sun, Moon and Planets', MNRAS, 99 (1939), 541-558 [http://adsabs.harvard.edu/abs/1939MNRAS..99..541S])
    - Applying device pixel ratio to the pixmap, so that it displays correctly on Macs.
    - Added improvements for Paste and Search feature (Search Tool)
    - Added ecliptical coordinates info for objects in scripting engine
    - Added exit pupil calculation in the Oculars plugin (LP: #1500225)
    - Added support MSVC2015
    - Added automatic reloading catalogs after updating for some plugins
    - Added a tour of Messier Objects
    - Added fix to circumvent text rendering bug (CLI option: -t)
    - Introduce env variable STEL_OPTS to allow preconfiguring default CLI options.
    - Added option to hide the background under buttons on the bottom toolbar (LP: #1204639)
    - Added check of on-screen position for orbits of satellites (LP: #1510530)
    - Added new option to change the behaviour of displaying DSO labels on the screen (LP: #1600283)
    - Star catalogues have been updated from 'XHIP: An Extended Hipparcos Compilation' data.
    - Fixed validation of day in Date and Time dialog (LP: #1206284)
    - Fixed display of sidereal time (mod24), show apparent sidereal time only if nutation is used.
    - Fixed issue of saving some setting from the View window (LP: #1509639)
    - Fixed issue for reset of number of satellite orbit segments (LP: #1510592)
    - Fixed bug in download of stars catalogs in debug mode (LP: #1514542)
    - Fixed issue with smooth blending/fading in ArchaeoLines plugin
    - Fixed loading scenes for Scenery 3D plugin (LP: #1533069)
    - Fixed connection troubles in Telescope Control Plugin on Windows (LP: #1530372)
    - Fixed wrong altitude of culmination in Observability plugin (LP: #1531561)
    - Fixed the meteor radiants movements when time is switched manually (LP: #1535950)
    - Fixed misbehaving zoom out to initial view position (LP: #1537446)
    - Fixed format for declination in AstroCalc
    - Fixed value of ecliptic obliquity and ecliptic coordinates of date (LP: #1520792)
    - Fixed zoom/art brightness handling (LP: #1520783)
    - Fixed perspective mode with offset viewport in scenery3d (LP: #1509728)
    - Fixed drawing reticle for telescope (LP: #1526348)
    - Fixed wrong altitudes for some locations (LP: #1530759)
    - Fixed window location having offscreen frame when leaving fullscreen (LP: #1471954)
    - Fixed core.moveToAltAzi(90,XX) issue (LP: #1068529)
    - Fixed some skyculture links
    - Fixed issue of sidereal time: sidereal time is no longer displayed negative in the Western timezones.
    - Fixed online search tool for MPC website
    - Fixed translation of Egyptian planet names (LP: #1548008)
    - Fixed bug about wrong rise/set times in Observability for years far in the past
    - Fixed issue for resets flip buttons in Oculars plugin (LP: #1511389)
    - Fixed proper detection of GLSL ES version on Raspberry Pi with VC4 driver (and maybe other devices).
    - Fixed odd DateTimeDialog behavior during daylight saving change
    - Fixed key handling issue on Mac OS X in Scenery3D (LP: #1566805)
    - Fixed omission in documentation (LP: #1574583, #1575059)
    - Fixed a loss of focus in the sky when you click on the button (LP: #1578773)
    - Fixed issue of getting location from network.
    - Fixed bug in visualization of opposition/conjunction longitude
    - Fixed crash of Navigational Stars plugin (LP: #1598375)
    - Fixed satellites mutual occultation (LP: #1389765)
    - Fixed NaN in landscape brightness computation (LP: #1597129)
    - Fixed oversized corona (LP: #1599513)
    - Fixed displaying common names of DSO after changes filters of catalogs (LP: #1600283)
    - Ensure Large File Support for DE431 also for ARM boards.
    - Changed behaviour for drawing of the planet orbits (LP: #1509673)
    - Make moon halo visible again even when below -45 degrees (LP: #1586796)
    - Reduce planet brightness in daylight (LP: #1503248)
    - Updated AstroCalc tool
    - Updated icons for View dialog
    - Updated ssystem.ini (LP: #1509693, #1509692)
    - Updated names of stars (LP: #1550642)
    - Updated the search rules in the search dialog (LP: #1593965)
    - Avoid false display of tiny eclipse factor (rounding error).
    - Avoid issues around GLdouble in GLES2/ARM boards.
    - Reduce brightness of stars for ocular and CCD views
    - Hide displaying markers for meteor radiants during daylight
    - Cosmetic updates in Equation Of Time plugin
    - Enabled permanent visualization of position angles for galaxies
    - Updated bookmarks in Solar System Editor plugin
    - Updated default config options
    - Updated scripts
    - Updated shortcuts for scripts
    - Updated Norwegian skyculture descriptions
    - Updated connection behaviour for autodiscovery location through network (FreeGeoIP)
    - Updated and optimized GUI
    - Updated Navigational Stars plugin
    - Implementation of quick turning to different directions (examples: CdC, HNSKY)
    - Important optimizations of planet position computation
    - Refactoring coloring markers of the DSO
    - Refactoring of the generating parts of the infrastructure (LP: #1571391)
    - Refactoring Telescope Control plugin
    - Removed info about Moon phases (avoid inconsistency for strings).
    - Removed rotation of movement by convergence angle correction in Scenery 3D plugin.

    July 28, 2016

    E-Interiores: Next-generation interior design with Blender

    By: Dalai Felinto, Blender Developer

    Meet e-interiores. This Brazilian interior design e-commerce startup transformed their creation process into an entire new fashion. This tale will show you how Blender made this possible, and how far we got.

    We developed a new platform based on a semi-vanilla Blender, Fluid Designer, and our own pipelines. Thanks to the results accomplished, e-interiores was able to consolidate a partnership with the giant Tok&Stok, providing a complete design of a room in 72 hours.

    A long time ago in a galaxy far far away

    During its initial years, e-interiores focused on delivering top-notch projects, with state of the art 3d rendering. Back then, this would involve a pantheon of software, namely: AutoCAD, SketchUp, VRay, Photoshop.

    All those mainstream tools were responsible for producing technical drawings, 3D studies, final renderings, and the presentation boards. Although nothing could be said against the final quality of their deliverables, the overall process was “artisanal” at best and extremely time consuming.

    Would it be possible to handle those steps inside a single tool? How much time could be saved from handling the non-essential tasks to the computer itself?

    New times require new tools

    The benefits of automation in a pipeline are known and easily measured. But how much thought does a studio give to customization? How much can a studio gain from a custom-tailored tool?

    It was clear that we had to minimize the time spent on the preparation, rendering and presentation. This would leave the creators free to dedicate their time and sweat over what really matters: which furniture to use and how to arrange it, which colors and materials to employ, the interior design itself.

    A fresh start

    The development paradigm was as such:

    • Vanilla Blender: The underneath software should stay as close to its consumer version as possible
    • Addon: The core of the project would be to create a Python script to control the end to end user experience
    • Low entry barrier: the users should not have to be skilled in any previous 3D software, especially not in Blender

    The development started by cleaning up the Blender Interface completely. I wanted the user to be unaware of the software being used underneath. We took a few hints from Fluid Designer (the theme is literally their startup file), but we focused on making the interface tied to the specifics of e-interiores working steps.

    You have the tools to create the unchanging elements of the space – walls, floor, … – the render points of view, the dynamic elements of the project, and the library. Besides that, there is a whole different set of tools dedicated to creating the final boards, adding annotations, measurements, …

    A little bit about coding

    Although I wanted to keep Blender as close to its pristine release condition as possible, there were some changes in Blender that were necessary. They mostly orbited around the Font objects functionality which we use extensively in the boards preparations.

    The simplest solution in this case was to make the required modifications myself, and contribute them back to Blender. The following contributions are all part of the official Blender code, helping not only our project, but anyone that requires a more robust all-around text editing functionality:

    With this out of the way, we have a total of 18,443 lines of code for the core system, 1,458 for model conversion and 2,407 for the database. All of this amounts to over 22 thousand lines of Python scripting.

    Infrastructure barebones

    The first tools we drafted are what we call the skeleton. We have parametric walls, doors, windows. We can make floor and ceilings. We can adjust their measurements later. We can play with their style and materials.

    Objects library

    We have over 12,000 3D models made available to us by Tok&Stok. The challenge was to batch convert them into a format Cycles could use. The files were originally in Collada, modelled and textured for realtime usage. We then ditched the lightmaps, removed the support meshes, and assigned hand-made Cycles materials based on the object category.

    Part of this was only possible thanks to the support of Blender developer and Collada functionality maintainer Gaia Clary. Many thanks!

    More dynamic elements

    Curtains, mirrors, marble, blindex… there are a few components of a project that are custom-made and adjusted on a case-by-case basis.

    Boards

    This is where the system shines. The moment an object is on the scene we can automatically generate the lighting layout, the descriptive memorial, and the product list.

    The boards are the final deliverable to the clients. This is where the perspectives, the project lists, the blueprints all come together. The following animation illustrates the few steps involved in creating a board with all the used products, with their info gathered from our database.

    Miscellaneous results

    Finally, you can see a sample of the generated results of the initial projects done with this platform. Thanks to Blender’s scripting possibilities and customization, we put together an end-to-end experience for our designers and architects.

    July 27, 2016

    A Chiaroscuro Portrait


    A Chiaroscuro Portrait

    Following the Old Masters

    Introduction (Concept/Theory)

    The term Chiaroscuro is derived from the Italian chiaro meaning ‘clear, bright’ and oscuro meaning ‘dark, obscure’. In art the term has come to refer to the use of bold contrasts between light and shadow, particularly across an entire composition, where they are a prominent feature of the work.

    This interplay of shadow and light is particularly important in allowing the viewer to extrapolate volume from a flat image. The use of a single light source helps to accentuate the perception of volume as well as adding drama and dynamics to the scene.

    Historically the use of chiaroscuro can often be associated with the works of old masters such as Rembrandt and Caravaggio. The use of such extreme lighting immediately evokes a sense of shape and volume, while focusing the attention of the viewer.

    Self Portrait with Gorget by Rembrandt
    Girl with a Pearl Earring by Johannes Vermeer

    The aim of this tutorial will be to emulate the lighting characteristics of chiaroscuro in producing a portrait to evoke the feeling of an old master painting.

    Equipment

    In examining chiaroscuro portraiture, it becomes apparent that a strong characteristic of the images is the use of a single light source on the scene. So this tutorial will focus on using a single source to illuminate the portrait.

    Getting the keylight off the camera is essential. The closer the keylight is to the axis of the camera the larger the reduction in shadows. This is counter to the intention of this workflow. Shadows are an essential component in producing this look, and on-camera lighting simply will not work.

    The reason to choose a softbox versus the myriad of other light modifiers available is simple: control. Umbrellas can soften the light, but due to their open nature have a tendency to spill light everywhere while doing so. A softbox allows the light to be softened while also retaining a higher level of spill control.

    Light spill can still occur with a softbox, so the best option is to bring the light in as close as possible to the subject. Due to the inverse square nature of light attenuation, this will help to drop the background very dark (or black) when exposing properly for the subject.

    Inverse Square Light Fall Off

    Left
    For example, in the sample images above, a 20 inch softbox was initially located about 18 inches away from the subject (first). The rear wall was approximately 48 inches away from the subject or just over twice the distance from the softbox. Thus, on a proper exposure for the subject, the background would be around 3 stops lower in light. This is seen as the background in the first image has dropped to a dark gray.

    Middle
    When the light distance to the subject is doubled and the light distance to the rear wall stays the same, the ratio is not as extreme between them. The light distance from the subject is now 36 inches, while the light distance to the rear wall is still 48 inches. When properly exposing for the subject, the rear wall is now only about 1 stop lower in light.

    Right
    In the final example, the distance from the light to both the subject and the rear wall are very close. As such, a proper exposure for the subject almost brings the wall to a middle exposure.

    What this example provides is a good visual guide for how to position the subject and light relative to the surroundings to create the desired look. To accentuate the ratio between dark and light in the image it would be best to move the light as close to the subject as possible.
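    The stop values above follow directly from the inverse square law: the light ratio between two distances is the square of the distance ratio, and each stop is a factor of two. Here is a minimal Python sketch of that arithmetic, using the approximate distances from the example (treat the output as illustrative, not as a metering tool):

        import math

        def stops_darker(d_subject, d_background):
            # Falloff (in stops) between two distances from the same small light,
            # using the inverse square law. One stop is a factor of 2 in light.
            intensity_ratio = (d_background / d_subject) ** 2
            return math.log2(intensity_ratio)

        # Softbox ~18 in from the subject, rear wall ~48 in away
        print(round(stops_darker(18, 48), 1))   # ~2.8 -> background roughly 3 stops darker

        # Softbox moved back to ~36 in, wall still ~48 in away
        print(round(stops_darker(36, 48), 1))   # ~0.8 -> background only about 1 stop darker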

    If there is nothing to reflect light on the shadow side of the subject, then the shadows would fall to very dark or black. Usually, there are at least walls and ceilings in a space that will reflect some light, and the amount falling on the shadow side can be attenuated by either moving the subject nearer to a wall on that side, or using a bounce/reflector as desired.

    Shooting

    Planning

    The setup for the shot would be to push the key light in very close to the model, while still allowing some bounce to slightly fill the shadows.

    Mairi Light Setup

    As noted previously, having the key light close to the model allows the rest of the scene to become much darker. The softbox is arranged so that its face is almost completely vertical and the bottom edge is just above the model’s eyes. This was done to feather the lower edge of the light falloff along the front of the model.

    There are 2 main adjustments that can be made to fine-tune the image result with this setup.

    The first is the key light distance/orientation to the subject. This will dictate the proper exposure for the subject. For this image the intention is to push the key light in as close as possible without being in frame. There is also the option of angling the key light relative to the subject. In the diagram above, the softbox is actually angled away from the subject. The intention here was to feather the edge of the light in order to control spill onto the rest of the model (putting more emphasis on her face).

    The second adjustment, once the key light is in a good location, is the distance from the key light and subject together, to the surrounding walls (or a reflector if one is being used). Moving both subject and keylight closer to the side wall will increase the amount of reflected light being bounced into the shadows.

    Mood Board

    If possible, it can be extremely helpful to both the model and photographer to have a Mood Board available. This is usually just a collection or collage of images that help to convey the desired feeling or desired result from the session. For help in directing the model, the images do not necessarily need the same lighting setup. The intention is to help the model understand what your vision is for the pose and facial expressions.

    The Shoot

    The lighting is set up and the model understands what type of look is desired, so all that’s left is to shoot the image!

    Mairi Contact Sheet

    In the end, I favored the last image in the sequence for a combination of the model’s head position/body language and the slight smile she has.

    Postprocessing

    Having chosen the final image from the contact sheet, it’s now time to proceed with developing the image and retouching as needed.

    If you’d like to follow along you can download the raw .ORF file:

    Mairi_Troisieme.ORF (13MB)

    This file is licensed (Creative Commons, By-Attribution, Non-Commercial, Share-Alike), and is the same image that I shared with everyone on the forums for a PlayRaw processing practice. You can see how other folks approached processing this image in the topic on discuss. If you decide to try this out for yourself, come share your results with us!

    Raw Development

    There are various Free raw processing tools available and for this tutorial I will be using the wonderful darktable.

    darktable logo

    Base Curve

    Not surprisingly the initial image loaded without any modifications is a bit dark and rather flat looking. By default darktable should have recognized that the file is from Olympus, and attempted to apply a sane base curve to the linear raw data. If it doesn’t you can choose the preset “olympus like alternate”.

    I found that the preset tended to crush the darkest tones a bit too much, and instead opted for a simple curve with a single point as seen here:

    darktable base curve

    Resist the temptation to try and adjust overall exposure and contrast with the base curve. These parameters will be adjusted shortly in the appropriate modules. The base curve is only intended to transform the linear raw rgb to something that looks good on your output device. The base curve will affect how the contrasts, colors, and saturation all relate in the final output. For the purposes of this tutorial, it is enough to simply choose a preset.

    The next series of steps focus on adjusting various exposure parameters for the image. Conceptually they start with the most broad adjustment, exposure, then to slightly more targeted adjustments such as contrast, brightness, and saturation, then finish with targeted tonal adjustments in tone curves.

    darktable manual: base curve

    Exposure

    Once the base curve is set, the next module to adjust would be the overall exposure of the image (and the black point). This is done in the “exposure” module (below the base curve).

    darktable exposure

    The important area to watch while adjusting the exposure for the image is the histogram. The image was exposed a little dark, so increase the exposure overall for the image. In the histogram, avoid clipping any channels (don’t let them get pushed outside the range). In this case, the desire is to provide a nice mid-level brightness to the model’s face. The exposure can be raised until the channels begin to clip on the far right of the histogram, then brought back down a bit to leave some headroom.

    The darkest areas of the histogram on the left are clipped a bit, so raising the black level brings the detail back in the darkest shadows. When in doubt try to let the histogram guide you with data from the image. Particularly around the highest and lowest values (avoid clipping if possible).

    An easy way to think of the exposure module is that it allows the entire image exposure to be shifted along with compressing/expanding the overall range by modifying the black point.

    darktable manual: exposure

    Contrast Brightness Saturation

    Where the Exposure module shifts the overall image values from a global perspective, modules such as the “contrast brightness saturation” allow finer tuning of the image within the range of the exposure.

    To emphasize the model’s face, while also strengthening the interplay of shadow and light in the image, drop the brightness down to taste. I brought the brightness levels down quite a bit (-0.31) to push almost all of the image below medium brightness.

    darktable contrast brightness saturation

    Overall this helps to emphasize the model’s face over the rest of the image. While the rest of the image is comprised of various dark/neutral tones, the model’s face is not. Pushing the saturation down as well will remove much of the color from the scene and face. This is done to bring the skin tones back down to something slightly more natural looking, while also muting some of those tones.

    darktable contrast brightness saturation

    The skin now looks a bit more natural but muted. The background tones have become more neutral as well. A very slight bump in contrast to taste finishes out this module.

    darktable manual: contrast brightness saturation

    Tone Curve

    A final modification to the exposure of the image is through a tone curve adjustment. This gives us the ability to make some slight changes to particular tonal ranges. In this case pushing the darker tones down a bit more while boosting the upper mid and high tones.

    darktable tone curve

    This is actually a type of contrast increase, but controlled to specific tones based on the curve. The darkest darks (bottom of the curve) get pushed a little bit darker, which will include most of the sweater, background, and shadow side of the model’s face. The very slight rolling boost to the lighter tones primarily helps to allow the face to brighten up against the background even more.

    The changes are very slight and to taste. The tone curve is very sensitive to changes, and often only very small modifications are required to achieve a given result.

    darktable manual: tone curve

    Sharpen

    By default the sharpen module will apply a small amount of sharpening to the image. The module uses unsharp mask for sharpening, so the radius parameter is the blur radius of the unsharp mask. I wanted to lightly sharpen very fine details, so I set the radius to ~1, with an amount around 0.9 and no threshold. This produced results that are very hard to distinguish from the default settings, but it appears to sharpen smaller structures just slightly more.

    darktable exposure

    I personally include a final sharpening step as a side effect of using wavelet decompose for skin retouching later in the process with GIMP. As such I am not usually as concerned about sharpening here as much. If I were, there are better modules for adjusting sharpening from wavelets using the equalizer module.

    darktable manual: sharpen

    Denoise (profiled)

    The darktable team and its users have profiled many different cameras at various ISOs to build statistical models of noise versus brightness across the three color channels. Using these profiles, darktable can do a better job of efficiently denoising images. In the case of my camera (Olympus OM-D E-M5), there was a profile already captured for ISO200.

    darktable denoise profiled

    In this case, the chroma noise wasn’t too bad, and a very slight reduction in luma noise would be sufficient for the image. As such, I used a non-local means with a large patch size (to retain sharpness) and a low strength. This was all applied uniformly against the HSV lightness option.

    darktable manual: denoise - profiled

    Export

    Finally! The image tones and exposure are in a desirable state, so export the results to a new file. I tend to use either TIF or PNG at 16 bit. This is in case I want to work in a full 16 bit workflow with the latest GIMP, or may want to in the future.

    GIMP

    When there are still some pixel-level modifications that need to be done to the image, the go-to software is GIMP.

    • Skin retouching
    • spot healing/touchups
    • Background rebuild
    GIMP - GNU Image Manipulation Program <3

    Skin Retouching with Wavelet Decompose

    This step is not always needed, but who doesn’t want their skin to look a little nicer if possible?

    The ability to modify an image based on detail scales isolated on their own layers is a very powerful tool. The approach is similar to frequency separation, but has the advantage of providing multiple frequencies, at progressively larger and larger detail scales, to modify simultaneously. This offers a large range of flexibility and an easier workflow than frequency separation (you can work on any detail scale simply by switching to a different layer).

    I used to use the wonderful Wavelet Decompose plugin from marcor on the GIMP plugin registry. I have since switched to using the same result from G’MIC once David Tschumperlé added it in for me. It can be found in G’MIC under:

    Details → Split details [wavelets]

    Running Split details [wavelets] against the image to produce 5 wavelet scales and a residual layer yields (cropped):

    Wavelet scales example decompose

    The plugin (or script) will produce 5 layers of isolated details plus a residual layer of low-frequency color information, seen here in ascending size of detail scales. On the finest scales (1 & 2) the details will be hard to discern, as they are quite fine.

    To help visualize what the different scale levels look like, here is a view of the same levels above, normalized:

    Wavelet scales normalized

    The normalized view shows clearly the various types of detail scales on each layer.

    There are various types of changes that can be made to the final image from these details scales. In this image, we are going to focus on evening out the skin tones overall. The scales with the biggest impact on even skin tones for this image are 4 and 5.

    A good workflow when smoothing overall skin tones and using wavelet scales is to work on smoothing from the largest detail scales and working down to finer scales. Usually, a nice amount of pleasing tonal smoothing can be accomplished in the first couple of coarse detail scales.

    Skin Retouching Zones

    Different portions of a face will often require different levels of smoothing. Below is a rough map of facial contours to consider when retouching. Not all faces will require the exact same regions, but it is a good starting point to consider when approaching a new image.

    Skin retouching by zones

    The selections are made with the Free Select Tool with the “Feather edges” option on and set to roughly 30px.

    Smoothing

    A good starting point to consider is the forehead on the largest detail scale (5). The basic workflow is to select a region of interest and a layer of detail, then to suppress the features on that detail level. The method of suppressing features is a matter of personal taste but is usually done across the entire selection using a blur filter of some sort.

    A good first choice would be to use a gaussian blur (or Selective Gaussian Blur) to smooth the selection. A better choice, if G’MIC is installed, is to use a bilateral blur for its edge-preserving properties. The rest of these examples will use the bilateral blur for smoothing.

    Considering the forehead region:

    Skin retouching wavelet scales forehead

    The first image is the original. The second image is after running a bilateral blur (in G’MIC: Smooth [bilateral]), with the default parameter values:

    • Spatial variance: 10
    • Value variance: 7
    • Iterations: 2

    These values were chosen from experience using this filter for the same purpose across many, many images. The results of running a single blur on the largest wavelet scale are immediately obvious. The unevenness of the skin and tones overall is smoothed in a pleasing way, while still retaining the finer details that allow the eye to see a realistic skin texture.

    The last image is the result of working on the next detail scale layer down (Wavelet scale 4), with much softer blur parameters:

    • Spatial variance: 5
    • Value variance: 2
    • Iterations: 1

    This pass does a good job of finishing off the skin tones globally. The overall impression of the skin is much smoother than the original, but crucial fine details are all left intact (wrinkles, pores) to keep it looking realistic.

    This same process is repeated for each of the facial regions described. In some cases the result of running the first bilateral blur on the largest scale level is enough to even out the tones (the cheeks and upper lip, for example). The chin got the same treatment as the forehead. The process is entirely subjective, and the parameters will vary from person to person. Experimentation is encouraged here.

    More importantly, the key word to consider while working on skin tones is moderation. It is also important to check your results zoomed out, as this will give you an impression of the image as seen when scaled to something more web-sized. A good rule of thumb might be:

    “If it looks good to you, go back and reduce the effect more”.

    The original vs. results after wavelet smoothing:

    Mairi Face: wavelet smoothed (compare with the original)

    When the work is finished on the wavelet scales, a new layer from all of the visible layers can be created to continue touching up spot areas that may need it.

    Layer → New from Visible

    Spot Touchups

    The use of wavelets is good for large-scale selection-area smoothing, but a different set of tools is required for spot touchups where needed. For example, there is a stray hair that runs across the model’s forehead that can be removed using the Heal tool.

    For best results when using the Heal tool, use a hard edged brush. Soft edges can sometimes lead to a slight smearing in the feathered edge of a brush that is undesirable. Due to the nature of the heal algorithm sampling, it is also advisable to avoid trying to heal across hard/contrasty edges.

    This is also a good tool to use for small blemishes that might have been tedious to repair across all of the wavelet scales from the previous section. This is also a good time to repair hot-spots, fly-away hairs, or other small details.

    Sweater Enhancement

    The model is wearing a nicely textured sweater, but the details and texture are slightly muted. A small increase in contrast and local details will help to enhance the textures and tones. One method of enhancing local details is to use the Unsharp Mask filter with a high radius and low amount (HiRaLoAm is an acronym some might use for this).

    Create a duplicate of the “Spot Healing” layer that was worked on in the previous step, and apply an Unsharp Mask to the layer using HiRaLoAm values.

    For example, a good starting point for parameters might be:

    • Radius: 200
    • Amount: 0.25

    With these parameters the sharpen function will instead tend to increase local contrast more, providing more “presence” or “pop” to the sweater texture.
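    If you script your GIMP work, the same HiRaLoAm step can be done from the Python-Fu console. This is only a sketch: it assumes GIMP 2.8's Python-Fu procedures, that the image being retouched is the first open image, and that the "Spot Healing" layer is the active layer (the new layer name is just an example):

        from gimpfu import gimp, pdb   # available automatically in the Python-Fu console

        image = gimp.image_list()[0]                       # the image being retouched
        base  = pdb.gimp_image_get_active_layer(image)     # the "Spot Healing" layer

        # Duplicate the layer so the effect stays non-destructive.
        copy = pdb.gimp_layer_copy(base, False)
        pdb.gimp_image_insert_layer(image, copy, None, -1)
        pdb.gimp_item_set_name(copy, "Sweater HiRaLoAm")

        # High radius, low amount: behaves like a local-contrast boost.
        pdb.plug_in_unsharp_mask(image, copy, 200, 0.25, 0)
        gimp.displays_flush()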

    Background Rebuild

    The background of the image is a little too uniformly dark and could benefit from some lightening and variation. A nice lighter background gradient will enhance the subject a little.

    Normally this could be obtained through the use of a second strobe (probably gridded or with a snoot) firing at the background. In our case we will have to fake the same result through some masking.

    First, a crop is chosen to focus the composition a little more strongly on the subject. I placed the center of the model’s face along the right-side golden section vertical and tried to place things near the center of the frame:

    Mairi cropped

    This slightly centered crop emulates the type of crop that might be expected from a classical painting (further strengthening the overall theme of the portrait).

    Subject Isolation

    There are a few different methods to approach the background modification. The method I describe here is simply one of them.

    The image at this point is duplicated, and the duplicate has its levels raised to brighten it up considerably. In this way, a simple layer mask can control where the brightening occurs in the image.

    Mairi isolation
    Mairi isolation layers

    This is what will give our background a gradient of light. To get our subject back to dark will require masking the subject on a layer mask again. A quick way to get a mask to work from is to add a layer mask to the “Over” layer, letting the background show through, but turning the subject opaque.

    Add a layer mask to the “Over” layer as a “Grayscale copy of layer”, and check the “Invert mask” option:

    Mairi isolation add layer mask

    With an initial mask in place, a quick use of the tool:

    Colors → Threshold

    will allow you to modify the mask to define the shoulder of the model as a good transition. The mask will be quite narrow. Adjust the threshold until the lighter background is speckle-free and there is a good definition of the edge of the sweater against the background.

    Mairi threshold
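    For those who prefer the Python-Fu console, the same isolation steps can be sketched with pdb calls (assuming GIMP 2.8 procedure names; the levels and threshold values are only starting points to tweak on your own image):

        from gimpfu import gimp, pdb, ADD_COPY_MASK

        image = gimp.image_list()[0]
        base  = pdb.gimp_image_get_active_layer(image)

        # Brightened duplicate that will become the background gradient.
        over = pdb.gimp_layer_copy(base, False)
        pdb.gimp_image_insert_layer(image, over, None, -1)
        pdb.gimp_item_set_name(over, "Over")
        pdb.gimp_levels(over, 0, 0, 180, 1.0, 0, 255)    # raise the levels, to taste

        # Grayscale-copy mask, inverted, then thresholded to isolate the subject.
        mask = pdb.gimp_layer_create_mask(over, ADD_COPY_MASK)
        pdb.gimp_layer_add_mask(over, mask)
        pdb.gimp_invert(mask)
        pdb.gimp_threshold(mask, 100, 255)               # adjust until the sweater edge is clean
        gimp.displays_flush()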

    Once the initial mask is in place it can be cleaned up further by making the subject entirely opaque (white on the mask), and the background fully transparent (black on the mask). This can be done with paint tools easily. For not much work a decent mask and result can be had:

    Mairi isolation final

    This provides a nice contrast, with the background lighter behind the darker portions of the model and the opposite behind the subject’s lighter face.

    Lighten Face Highlights

    Speaking of the subject’s face, there’s a nice, simple method for applying a small accent to the highlighted portions of the model’s face in order to draw more attention to her.

    Duplicate the lightened layer that was used to create the background gradient, move it to the top of the layer stack, and remove the layer mask from it.

    Mairi Lighten Face Layers

    Set the layer mode of the copied layer to "Lighten only".

    As before, add a new layer mask to it, “Grayscale copy of layer”, but don’t check the “Invert mask” option. This time use the Levels tool:

    Colors → Levels

    to raise the blacks of the mask up to about mid-way or more. This will isolate the lightening mask to the brightest tones in the image, which happen to correspond to the model’s face. You should see your adjustments modify the mask on-canvas in real time. When you are happy with the highlights, apply.

    Mairi Lighten Highlights

    Last Sharpening Pass + Grain

    Finally, I like to apply a last pass of sharpening to the image, and to overlay some grain from a grain field I keep on hand to help add some structure to the image as well as to mask any gradient issues from rebuilding the background. For this particular image the grain step isn’t really needed, as there’s already sufficient luma noise to provide its own structure.

    Usually, I will use the smallest of the wavelet scales from the prior steps and sometimes the next largest scale as well (Wavelet scale 1 & 2). I’ll leave Wavelet scale 1 at 100% opacity, and scale 2 usually around 50% opacity (to taste, of course).

    Mairi Final

    Minor touchups that could still be done might include darkening the chair in the bottom right corner, darkening the gradient in the bottom left corner, and possibly adding a slight white overlay to the eyes to subtly give them a small pop.

    As it stands now I think the image is a decent representation of a chiaroscuro portrait that mimics the style of a classical composition and interplay between light and shadows across the subject.

    July 25, 2016

    I hate deals

    One of my favourite tech-writers, Paul Miller from The Verge, has articulated something I've always felt, but have never been able to express well: I hate deals.

    From Why I'm a Prime Day Grinch: I hate deals by Paul Miller:

    Deals aren't about you. They're about improving profits for the store, and the businesses who distribute products through that store. Amazon's Prime Day isn't about giving back to the community. It's about unloading stale inventory and making a killing.

    But what about when you decide you really do want / need something, and it just happens to be on sale? Well, lucky you. I guess I've grown too bitter and skeptical. I just assume automatically that if something's on sale AND I want to buy it, I must've messed up in my decision making process somewhere along the way.

    I also hate parties and fun.

    July 24, 2016

    Preparation to release of version 0.15.0

    Greetings all!

    We plan to release Stellarium 0.15.0 at the end of next week (31 July).

    This is another major release, which has many changes in the code and a few new sky cultures. If you can assist with translation to any of the 136 languages which Stellarium supports, please go to Launchpad Translations and help us out: https://translations.launchpad.net/stellarium

    Thank you!

    July 19, 2016

    GUADEC Flatpak contest

    I will be presenting a lightning talk during this year's GUADEC, and running a contest related to what I will be presenting.

    Contest

    To enter the contest, you will need to create a Flatpak for a piece of software that hasn't been flatpak'ed up to now (application, runtime or extension), hosted in a public repository.

    You will have to send me an email about the location of that repository.

    I will choose a winner amongst the participants, on the eve of the lightning talks, depending on, but not limited to, the difficulty of packaging, the popularity of the software packaged and its redistributability potential.

    You can find plenty of examples (and a list of already packaged applications and runtimes) on this Wiki page.

    Prize

    A piece of hardware that you can use to replicate my presentation (or to replicate my attempts at a presentation, depending ;). You will need to be present during my presentation at GUADEC to claim your prize.

    Good luck to one and all!

    July 18, 2016

    Breeze everywhere

    The first half of this year, I had the chance to work on icons and design for two big free-software projects.

    First, I’ve been hired to work on Mageia. I had to refresh the look for Mageia 6, which mostly meant making new icons for the Mageia Control Center and all the internal tools.

    mageia-MCC

    I proposed to replace the oxygen-like icons with some breeze-like icons.
    This way it integrates much better with modern desktops, and of course it looks especially good with Plasma.

    mageia-MCC01

    The result is around 1/3 of the icons directly imported from breeze, 1/3 modified versions and 1/3 created from scratch. I tried to follow the breeze guidelines as much as possible, but had to adapt some rules to the context.

    mageia-MCC02

    I also made a wallpaper to go with it, which will be in the extra wallpapers package and so not used by default:

    Mageia-Default-1920x1200
    available in different sizes on this link.

    And another funny wallpaper for people that are both mageia users and Pepper & Carrot fans:

    Extra-Background-01-PepperAndCarrot-1080
    available in different sizes on this link
    (but I’m not sure yet if this one will be packaged at all…)

    Note that we still have some visual issues with the applets.
    It seems to be a problem with how gtkcreate_pixbuf is used. But more importantly, those applets don’t even react to clicks in Plasma (while this seems to work fine in all other desktops).
    No one seems to have an easy fix or workaround yet, so if someone has an idea to help…

    Soon after I finished my work on Mageia, I was hired to work on fusiondirectory.
    I had to create a new theme for the web interface, and again I proposed to base it on breeze, similar to what I did for Mageia but in yet another context. I also modified the CSS to look like the breeze-light interface theme. The resulting theme is called breezy, and has been used by default since the last release.

    FD-Breezy01
    FD-Breezy02

    I had a lot of positive feedback on this new theme, people seem to really like it.

    Before finishing, a special side note for the breeze team: thank you so much for all the great work! It has been a pleasure to start from it. Feel free to look at the mageia and fusiondirectory git repositories to see if there are icons that would be worth pushing upstream to the breeze icon set.

    July 15, 2016

    Fri 2016/Jul/15

    • Update from La Mapería

      La Mapería is working reasonably well for now. Here are some example maps for your perusal. All of these images link to a rather large PDF that you can print on a medium-format plotter — all of these are printable on a 61 cm wide roll of paper (or one that can put out US Arch D sheets).

      Valladolid
      Valladolid, Yucatán, México, 1:10,000

      Ciudad de México
      Centro de la Ciudad de México, 1:10,000

      Ajusco
      Ajusco y Sur de la Ciudad de México, 1:50,000

      Victoria, BC
      Victoria, British Columbia, Canada, 1:50,000

      Boston
      Boston, Massachusetts, USA, 1:10,000

      Walnut Creek
      Walnut Creek, California, USA, 1:50,000

      Butano State Park
      Butano State Park and Pescadero, California, USA, 1:20,000

      Provo
      Provo, Utah, USA, 1:50,000

      Nürnberg
      Nürnberg, Germany, 1:10,000

      Karlsruhe
      Karlsruhe, Germany, 1:10,000

      That last one, for Karlsruhe, is where GUADEC will happen this year, so enjoy!

      Next steps

      La Mapería exists right now as a Python program that downloads raster tiles from Mapbox Studio. This is great in that I don't have to worry about setting up an OpenStreetMap stack, and I can just worry about the map stylesheet itself (this is the important part!) and a little code to render the map's scale and frame with arc-minute markings.

      I would prefer to have a client-side renderer, though. Vector tiles are the hot new thing; in theory I should be able to download vector tiles and render them with Memphis, a Cairo-based renderer. I haven't investigated how to move my Mapbox Studio stylesheet to something that Memphis can use (... or that any other map renderer can use, for that matter).

      Also, right now making each map with La Mapería involves extracting geographical coordinates by hand, and rendering the map several times while tweaking it to obtain just the right area I want. I'd prefer a graphical version where one can just mouse around.

      Finally, the map style itself needs improvements. It works reasonably well for 1:10,000 and 1:50,000 right now; 1:20,000 is a bit broken but easy to fix. It needs tweaks to map elements that are not very common, like tunnels. I want to make it work for 1:100,000 for full-day or multi-day bike trips, and possibly even smaller scales for motorists and just for general completeness.
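      As a quick sanity check on those scales, the underlying arithmetic is simple. Here is a tiny Python sketch (not La Mapería's actual code) of how much ground a 61 cm wide sheet covers at each scale, and how long a 1 km scale bar ends up on paper:

          def paper_to_ground_km(paper_cm, scale):
              return paper_cm * scale / 100000.0      # cm of paper -> km of ground

          def scale_bar_cm(ground_km, scale):
              return ground_km * 100000.0 / scale     # km of ground -> cm on paper

          for scale in (10000, 50000, 100000):
              print("1:%d  61 cm of paper spans %.1f km; a 1 km bar is %.1f cm" %
                    (scale, paper_to_ground_km(61, scale), scale_bar_cm(1, scale)))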

      So far two of my friends in Mexico have provided pull requests for La Mapería — to fix my not-quite-Pythonic code, and to make the program easier to use the first time. Thanks to them! Contributions are appreciated.

    July 13, 2016

    Too much of a good thing

    So the last couple of months, after our return from Italy, were nicely busy. At the day job, we were getting ready to create an image to send to the production facility for the QML-based embedded application we had been developing, and besides, there were four reorganizations in one month, ending with the teams being reshuffled in the last week before said image had to be ready. It was enough pressure that I decided to take last week off from the day job, just to decompress a bit and focus on Krita stuff that was heaping up.

    Then, since April, Krita-wise, there was the Kickstarter, the kick-off for the artbook, the Krita 3.0 release... The 3.0 release doubled the flow of bugs, donations, comments, mails to the foundation, questions on irc, reddit, forum and everywhere else. (There's this guy who has sent me over fifty mails asking for Krita to be released for Windows XP, OSX 10.5 and Ubuntu 12.02, for example). And Google Summer of Code kicked off, with three students working on Krita.

    And, of course, daily life didn't stop, though more and more non-work, non-krita things got postponed or cut out. There were moments when I really wanted to cancel our bi-weekly RPG session just to have another Monday evening free for Krita-related work.

    I don't mind being busy, and I like being productive, and I especially like shipping: at too many day jobs we never shipped, which was extremely frustrating.

    But then last Wednesday evening, a week ago, I suddenly felt queer after dinner, just before we'd start the RPG session. A pressing, heavy pain on my chest, painful upper arms, sweating, nausea, dizziness... I spent the next day in hospital getting checked for heart problems. The conclusion was that it wasn't a heart attack, just all the symptoms of one. No damage done, in any case, that the tests could find, and I am assured they are very accurate.

    I'm still tired and slow and have a hard time focusing, so I didn't have time to prepare Krita 3.0.1. I didn't manage to finish the video-export refactoring (which will also make it possible to pass file export configurations to Krita on the command line). I also didn't get through all the new bugs, though I managed to fix over a dozen. The final bugs in the spriter export plugin are also waiting to be squashed. Setting up builds of the master branch for three operating systems and two architectures was another thing I had to postpone. And there are now so many donations waiting for a personal thank-you mail that I have decided to just stop sending them. One thing I couldn't postpone or drop was creating a new WBSO application for an income tax rebate for the hours spent on the research for Krita's scripting plugin.

    I'm going forward with a bit of reduced todo list, so, in short, if you're waiting for me to do something for you, be aware that you might have to wait a bit longer or that I won't be able to do it. If you want your Krita bug fixed with priority, don't tell me to fix it NOW, because any kind of pressure will be answered with a firm nolle prosequi.

    July 12, 2016

    HD Photo Slideshow with Blender


    HD Photo Slideshow with Blender

    Because who doesn't love a challenge?

    While I was out at Texas Linux Fest this past weekend I got to watch a fun presentation from the one and only Brian Beck. He walked through an introduction to Blender, including an overview of creating his great The Lady in the Roses image that was a part of the 2015 Libre Calendar project.

    Coincidentally, during my trip home community member @Fotonut asked about software to create an HD slideshow with images. The first answer that jumped into my mind was to consider using Blender (a very close second was OpenShot because I had just spent some time talking with Jon Thomas about it).

    The Lady in the Roses by Brian Beck (CC BY-SA)

    I figured this much Blender being talked about deserved at least a post to answer @Fotonut‘s question in greater detail. I know that many community members likely abuse Blender in various ways as well – so please let me know if I get something way off!

    Enter Blender

    The reason that Blender was the first thing that popped into many folks’ minds when the question was posed is likely because it has been a go-to swiss-army knife of image and video creation for a long, long time. For some it was the only viable video editing application for heavy use (not that there weren’t other projects out there as well). This is partly due to the fact that it integrates so much capability into a single project.

    The part that we’re interested in for the context of Fotonut’s original question is the Video Sequence Editor (VSE). This is a very powerful (though often neglected) part of Blender that lets you arrange audio and video (and image!) assets along a timeline for rendering and some simple effects. Which is actually perfect for creating a simple HD slideshow of images, as we’ll see.

    The Plan

    Blender’s interface is likely to take some getting used to for newcomers (right-click!), but we’ll be focusing on a very small subset of the overall program, so hopefully nobody gets lost. The overall plan will be:

    1. Setup the environment for video sequence editing
    2. Include assets (images) and how to manipulate them on the timeline
    3. Add effects such as cross-fades between images
    4. Setup exporting options

    There’s also an option of using a very helpful add-on for automatically resizing images to the correct size to maintain their aspect ratios. Luckily, Blender’s add-on system makes it trivially easy to set up.

    Setup

    On opening Blender for the first time we’re presented with the comforting view of the default cube in 3D space. Don’t get too cozy, though. We’re about to switch up to a different screen layout that’s already been created for us by default for Video Editing.

    Blender default main window The main blender default view.

    The developers were nice enough to include various default “Screen Layout” options for different tasks, and one of them happens to be for Video Editing. We can click on the screen layout option on the top menu bar and choose the one we want from the list (Video Editing):

    Blender screen layout options Choosing a new Screen Layout option.

    Our screen will then change to the new layout where the top left pane is the F-curve window, the top right is the video preview, the large center section is the sequencer, and the very bottom is a timeline. Blender will let you arrange, combine, and collapse all the various panes into just about any layout that you might want, including changing what each of them are showing. For our example we will mostly leave it all as-is with the exception of the F-curve pane, which we won’t be using and don’t need.

    Blender video editing layout The Video Editing default layout.

    What we can do now is to define what the resolution and framerate of our project should be. This is done in the Properties pane, which isn’t shown right now. So we will change the F-Curve pane into the Properties pane by clicking on the button shown in red above to change the panel type. We want to choose Properties from the options in the list:

    Blender change pane to properties

    Which will turn the old F-Curve pane into the Properties pane:

    Blender properties

    You’ll want to set the appropriate X and Y resolution for your intended output (don’t forget to set the scaling from the default 50% to 100% now as well) as well as your intended framerate. Common rates might be 23.976 (23.98), 25, 30, or even 60 frames per second. If your intended target is something like YouTube or an HD television you can probably safely use 30 or 60 (just remember that a higher frame rate means a longer render time!).

    For our example I’m going to set the output resolution to 1920 × 1080 at 30fps.
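    If you prefer to script the setup, the same project settings can be made from Blender’s Python console. This is a sketch using the bpy property names I would expect in the 2.7x series; double-check them against your version:

        import bpy

        scene = bpy.context.scene
        scene.render.resolution_x = 1920
        scene.render.resolution_y = 1080
        scene.render.resolution_percentage = 100   # the default is 50%, as noted above
        scene.render.fps = 30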

    One Extra Thing

    Blender does need a little bit of help when it comes to using images on the sequence editor. It has a habit of scaling images to whatever the output resolution is set to (ignoring the original aspect ratios). This can be fixed by simply applying a transform to the images but normally requires us to manually compute and enter the correct scaling factors to get the images back to their original aspect ratios.

    I did find a nice small add-on on this thread at blenderartists.org that binds some handy shortcuts onto the VSE for us. The author kgeogeo has the add-on hosted on Github, and you can download the Python file directly from here: VSE Transform Tool (you can Right-Click and save the link). Save the .py file somewhere easy to find.

    To load the add-on manually we’re going to change the Properties panel to User Preferences:

    Blender change to preferences

    Click on the Add-ons tab to open that window and at the bottom of the panel is an option to “Install from File…”. Click that and navigate to the VSE_Transform_Tool.py file that you downloaded previously.

    Blender add-ons

    Once loaded, you’ll still need to Activate the plugin by clicking on the box:

    Blender adding add-ons

    That’s it! You’re now all set up to begin adding images and creating a slideshow. You can set the User Preferences pane back to Properties if you want to.

    Adding Images

    Let’s have a look at adding images onto the sequencer.

    You can add images by either choosing Add → Image from the VSE menu and navigating to your images location, choosing them:

    Blender VSE add image

    Or by drag-and-dropping your images onto the sequencer timeline from Nautilus, Finder, Explorer, etc…

    When you do, you’ll find that a strip now appears on the VSE window (purple in my case) that represents your image. You should also see a preview of your video in the top-right preview window (sorry for the subject).

    Blender VSE add image

    At this point we can use the handy add-on we installed previously by Right-Clicking on the purple strip to make sure it’s activated and then hitting the “T” key on the keyboard. This will automatically add a transform to the image that scales it to the correct aspect ratio for you. A small green Transform strip will appear above your purple image strip now:

    Blender VSE add transform strip

    Your image should now also be scaled to fit at the correct aspect ratio.

    Adjusting the Image

    If you scroll your mouse wheel in the VSE window, you will zoom in and out of the editor along the time axis (the x-axis in the sequencer window). You’ll notice that the time scale compresses or expands as you scroll the mouse wheel.

    The middle-mouse button will let you pan around the sequencer.

    The right-mouse button will select things. You can try this now by extending how long your image is displayed in the video. Right-Click on the small arrow on the end of the purple strip to activate it. A small number will appear above it indicating which frame it is currently on (26 in my example):

    Blender VSE

    With the right handle active you can now either press “G” on the keyboard and drag the mouse to re-position the end of the strip, or Right-Click and drag to do the same thing. The timeline in seconds is shown along the bottom of the window for reference. If we wanted to let the image be visible for 5 seconds total, we could drag the end to the 5+00 mark on the sequencer window.

    Since I set the framerate to 30 frames per second, I can also drag the end to frame 150 (30fps * 5s = 150 frames).

    Blender VSE five seconds
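    This step can also be scripted. The sketch below (the file path, strip name and channel are placeholders) adds one image strip and sets its length to 5 seconds at the scene frame rate, using bpy calls I would expect in the 2.7x series:

        import bpy

        scene = bpy.context.scene
        if scene.sequence_editor is None:
            scene.sequence_editor_create()

        strip = scene.sequence_editor.sequences.new_image(
            name="photo_01",
            filepath="/path/to/photo_01.jpg",
            channel=1,
            frame_start=1,
        )
        # Show the image for 5 seconds: 30 fps * 5 s = 150 frames.
        strip.frame_final_duration = scene.render.fps * 5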

    When you drag the image strip, the transform strip will automatically adjust to fit (so you don’t have to worry about it).

    If you had selected the center of the image strip instead of the handle on one end and tried to move it, you would find that you can move the entire strip around instead of one end. This is how you can re-position image strips, which you may want to do when you add a second image to your sequencer.

    Add a new image to your sequencer now following the same steps as above.

    When I do, it adds a new strip back at the beginning of the timeline (basically where the current time is set):

    Blender VSE second image

    I want to move this new strip so that it overlaps my first image by about half a second (or 15 frames). Then I will pull the right handle to resize the display time to about 5 seconds also.

    Click on the new strip (center, not the ends), and press the “G” key to move it. Drag it right until the left side overlaps the previous image strip by a little bit:

    Blender VSE drag strip

    When you click on the strip’s right handle to modify its length, notice the window on the far right of the VSE. The Edit Strip panel should also show the strip’s "Length" parameter in case you want to change it by manually entering a value (like 150):

    Blender VSE adjust strip

    I forgot to use the add-on to automatically fix the aspect ratio. With the strip selected I can press “T” at any time to invoke the add-on and fix the aspect ratio.

    Adding a Transition Effect

    With the two image strips slightly overlapping, we now want to define a simple cross fade between the two images as a transition effect. This is actually something already built into the Blender VSE for us, and is easy to add. We do need to be careful to select the right things to get the transition working correctly, though.

    Once you’ve added a transform effect to a strip, you’ll need to make sure that subsequent operations use the transform strip as opposed to the original image strip.

    For instance, to add a cross fade transition between these two images, click the first image strip transform (green), then Shift-Click on the second image transform strip (green). Now they are both selected, so add a Gamma Cross by using the Add menu in the VSE (Add → Effect Strip… → Gamma Cross):

    Blender VSE add gamma cross

    This will add a Gamma Cross effect as a new strip that is locked to the two images overlap. It will do a cross-fade between the two images for the duration of the overlap. You can Left-Click now and scrub over the cross-fade strip to see it rendered in the preview window if you’d like:

    Blender Gamma Cross
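    For a scripted version, the same effect can be added with sequences.new_effect(). The sketch below assumes two overlapping strips named "photo_01" and "photo_02" (in the GUI workflow above you would select the transform strips instead of the image strips):

        import bpy

        seqs = bpy.context.scene.sequence_editor.sequences
        first, second = seqs["photo_01"], seqs["photo_02"]

        cross = seqs.new_effect(
            name="cross_01",
            type='GAMMA_CROSS',
            channel=3,                             # a free channel above both strips
            frame_start=second.frame_final_start,  # start of the overlap
            frame_end=first.frame_final_end,       # end of the overlap
            seq1=first,
            seq2=second,
        )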

    At any time you can also use the hotkey “Alt-A” to view a render preview. This may run slow if your machine is not super-fast, but it should run enough to give you a general sense of what you’ll get.

    If you want to modify the transition effect by changing its length, you can just increase the overlap between the strips as desired (using the original image strip — if you try to drag the transform strip you’ll find it locked to the original image strip and won’t move).

    Repeat Repeat

    You can basically follow these same steps for as many images as you’d like to include.

    Exporting

    To generate your output you’ll still need to change a couple of things to get what you want…

    Render Length

    You may notice on the VSE that there are vertical lines outside of which things will appear slightly grayed out. This is a visual indicator of the total start/end of the output. This is controlled via the Start and End frame settings on the timeline (bottom pane):

    Blender VSE start and end

    You’ll need to set the End value to match your last output frame from your video sequence. You can find this value by selecting the last strip in your sequence and pressing the “G” key: the start/end frame numbers of that last strip will be visible (you’ll want the last frame value, of course).

    Blender VSE end frame Current last frame of my video is 284

    In my example above, my anticipated last frame should be 284, but the last render frame is currently set to 250. I would need to update that End frame to match my video to get output as expected.
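    A scripted way to avoid that mismatch is to derive the End frame from the strips themselves. A minimal sketch (note that frame_final_end may be one past the last displayed frame depending on the Blender version, so compare the result with the timeline):

        import bpy

        scene = bpy.context.scene
        scene.frame_end = max(
            s.frame_final_end for s in scene.sequence_editor.sequences_all
        )
        print("Render range:", scene.frame_start, "-", scene.frame_end)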

    Render Format

    Back on the Properties panel (assuming you set the top-left panel back to Properties earlier—if not do so now), if we scroll down a bit we should see a section dedicated to Output.

    Blender Properties Output Options

    You can change the various output options here to do frame-by-frame dumps or to encode everything into a video container of some sort. You can set the output directory to be something different if you don’t want it rendered into /tmp here.

    For my example I will encode the video with H.264:

    Blender output h264

    By choosing this option, Blender will then expose a new section of the Properties panel for setting the Encoding options:

    Blender output encoding options

    I will often use the H264 preset and will enable the Lossless Output checkbox option. If I don’t have the disk space to spare I can also set different options to shrink the resulting filesize down further. The Bitrate option will have the largest effect on final file size and image quality.
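    The same output choices can be set from Python as well. This sketch uses the ffmpeg property names from the 2.7x-era bpy API (they have moved around between versions, so treat it as a starting point):

        import bpy

        rd = bpy.context.scene.render
        rd.filepath = "/tmp/slideshow"               # change this if /tmp is not wanted
        rd.image_settings.file_format = 'FFMPEG'     # encode a movie instead of frames
        rd.ffmpeg.format = 'MPEG4'
        rd.ffmpeg.codec = 'H264'
        rd.ffmpeg.use_lossless_output = True         # or trade size for quality with:
        # rd.ffmpeg.video_bitrate = 6000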

    When everything is ready (or you just want to test it out), you can render your output by scrolling back to the top of the Properties window and pressing the Animation button, or by hitting Ctrl-F12.

    Blender Render Button

    The Results

    After adding portraits of all of the GIMP team from LGM London and adding gamma cross fade transitions, here are my results:


    In Summary

    This may seem overly complicated, but in reality much of what I covered here is the setup to get started and the settings for output. Once you’ve done this successfully it becomes pretty quick to use. One thing you can do is set up the environment the way you like it and then save the .blend file to use as a template for further work like this in the future. The next time you need to generate a slideshow you’ll have everything all ready to go and will only need to start adding images to the editor.

    While looking for information on some VSE shortcuts I did run across a really interesting looking set of functions that I want to try out: the Blender Velvets. I’m going to go off and give it a good look when I get a chance as there’s quite a few interesting additions available.

    For Blender users: did I miss anything?

    July 10, 2016

    How GNOME Software uses libflatpak

    It seems people are interested in adding support for flatpaks into other software centers, and I thought it might be useful to explain how I did this in gnome-software. I’m lucky enough to have a plugin architecture to make all the flatpak code self contained in one file, but that’s certainly not a requirement.

    Flatpak generates AppStream metadata when you build desktop applications. This means it’s possible to use appstream-glib and a few tricks to just load all the enabled remotes into an existing system store. This makes searching the new applications using the (optionally stemmed) token cache trivial. Once per day gnome-software checks the age of the AppStream cache, and if required downloads a new copy using flatpak_installation_update_appstream_sync(). As if by magic, appstream-glib notices the file modification/creation and updates the internal AsStore with the new applications.

    When listing the installed applications, a simple call to flatpak_installation_list_installed_refs() returns us the list we need, on which we can easily set other flatpak-specific data like the runtime. This is matched against the AppStream data, which gives us a localized and beautiful application to display in the listview.

    At this point we also call flatpak_installation_list_installed_refs_for_update() and then do flatpak_installation_update() with the NO_DEPLOY flag set. This just downloads the data we need, and can be cancelled without anything bad happening. When populating the updates panel I can just call flatpak_installation_list_installed_refs() again to find installed applications that have downloaded updates ready to apply without network access.
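    For anyone following along outside of C, the same calls are exposed through GObject introspection. Here is a rough sketch of the listing steps above using the Python bindings (not the actual gnome-software code, which is C):

    import gi
    gi.require_version('Flatpak', '1.0')
    from gi.repository import Flatpak

    installation = Flatpak.Installation.new_user(None)   # or new_system(None)

    # everything currently installed (apps and runtimes)
    for ref in installation.list_installed_refs(None):
        print(ref.get_name(), ref.get_branch(), ref.get_commit())

    # installed refs that have an update available on their remote
    for ref in installation.list_installed_refs_for_update(None):
        print("update available:", ref.get_name())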

    For the sources list I’m calling flatpak_installation_list_remotes() and then ignoring any that are set as disabled or noenumerate. Most remotes have a name and title, and this makes the UI feature complete. When collecting information to show in the UI, like the download size, we already have the metadata; we also add the size of the runtime if it’s not already installed. This is the same idea as flatpak_installation_install(), where we also install any required runtime when installing the main application. There is a slight impedance mismatch between the flatpak many-installed-versions and the AppStream only-one-version model, but it seems to work well enough in the current code. Flatpak splits the deployment into a runtime containing common libraries that can be shared between apps (for instance, GNOME 3.20 or KDE5) and the application itself, so the software center always needs to install the runtime for the application to launch successfully. This is something that is not enforced by the CLI tool. Rather than installing everything for each app, we can also install other so-called extensions. These are typically non-essential things like the various translations and any debug information, but are not strictly limited to those. libflatpak automatically keeps the extensions up to date when updating, so gnome-software doesn’t have to do anything special at all.

    Updating single applications is trivial with flatpak_installation_update() and launching applications is just as easy with flatpak_installation_launch(), although we only support launching the newest installed version of an application at the moment. Reading local bundles works well with flatpak_bundle_ref_new(), although we do have to load the gzipped AppStream metadata and the icon ourselves. Reading a .flatpakrepo file is slightly more work, but the data is in keyfile format and trivial to parse with GKeyFile.

    Overall I’ve found libflatpak to be surprisingly easy to work with, requiring none of the kludges of all the different package-based systems I’ve worked on developing PackageKit. Full marks to Alex et al.

    July 08, 2016

    Railway gauges

    Episode 3 in a series “Things that are the way they are because of constraints that no longer apply” (or: why we don’t change processes we have invested in that don’t make sense any more)

    The standard railway gauge (that is, the distance between train rails) for over half of the world’s railways (including the USA and UK) is 4′ 8.5″, or 1.435m. A few other railway gauges are in common use, including, to my surprise, in Ireland, where the gauge is 5′ 3″, or 1.6m. If you’re like me, you’ve wondered where these strange numbers came from.

    Your first guess might be that, similar to the QWERTY keyboard, it comes from the inventor of the first train, or the first successful commercial railway, and that there was simply no good reason to change it once the investment had been made in that first venture, in the interests of interoperability. There is some truth to this, as railways were first used in coal mines to extract coal by horse-drawn carriages, and in the English coal mines of the North East, the “standard” gauge of 4′ 8″ was used. When George Stephenson started his seminal work on the development of the first commercial railway and the invention of the Stephenson Rocket steam locomotive, his experience from the English coal mines led him to adopt this gauge of 4′ 8″. To allow for some wiggle room so that the train and carriages could more easily go around bends, he increased the gauge to 4′ 8.5″.

    But why was the standard gauge for horse-drawn carriages 4′ 8″? The first horse-drawn trams used that gauge because most wagons at the time were built with tools calibrated for that width. But where did the width itself come from in the first place? One popular theory, which I like even if Snopes says it’s probably false, is that it was the standard width of horse-drawn carriages all the way back to Roman times. The 4′ 8.5″ gauge roughly matches the width required to comfortably accommodate a horse pulling a carriage, and it has persisted well beyond the end of that constraint.


    July 07, 2016

    QWERTY keyboards

    Episode 2 in a series “Things that are the way they are because of constraints that no longer apply” (or: why we don’t change processes we have invested in that don’t make sense any more)

    American or English computer users are familiar with the QWERTY keyboard layout – which takes its name from the layout of letters on the first row of the traditional us and en_gb keyboard layouts. There are other common layouts in other countries, mostly tweaks to this format like AZERTY (in France) or QWERTZ (in Germany). There are also non-QWERTY related keyboard layouts like Dvorak, designed to allow increased typing speed, but which have never really gained widespread adoption. But where does the QWERTY layout come from?

    The layout was first introduced with the Remington no. 1 typewriter (AKA the Sholes and Glidden typewriter) in 1874. The typewriter had a set of typebars which would strike the page with a single character, and these were arranged around a circular “basket”. The page was then moved laterally by one letter-width, ready for the next keystroke. The first attempt laid out the keys in alphabetical order, in two rows, like a piano keyboard. Unfortunately, this mechanical system had some issues – if two typebars situated close together were struck in rapid succession, they would occasionally jam the mechanism. To avoid this issue, common bigrams were distributed around the circle, to minimise the risk of jams.

    The keyboard layout was directly related to the layout of typebars around the basket, since the keyboard was purely mechanical – pushing a key activated a lever system to swing out the correct typebar. As a result, the keyboard layout the company settled on, after much trial and error, had the familiar QWERTY layout we use today. At this point, too much is invested in everything from touch-type lessons and sunk costs of the population who have already learned to type for any other keyboard format to become viable, even though the original constraint which led to this format obviously no longer applies.

    Edit: A commenter pointed me to an article on The Atlantic called “The Lies You’ve Been Told About the QWERTY Keyboard” which suggests an alternate theory. The layout changed to better serve the earliest users of the new typewriter, morse code transcribing telegraph operators. A fascinating lesson in listening to your early users, for sure, but also perhaps a warning on imposing early-user requirements on later adopters?

    Cosmos Laundromat wins SIGGRAPH 2016 Computer Animation Festival Jury’s Choice Award

    A few days ago we wrote about three Blender-made films being selected for the SIGGRAPH 43rd annual Computer Animation Festival. Today we are happy to announce that Cosmos Laundromat Open Movie (by Blender Institute) has won the Jury’s Choice Award!

    Producer Ton Roosendaal says:

    SIGGRAPH always brings the best content together for the Computer Animation Festival from the most talented artists and we are honoured to be acknowledged in this way for all our hard work and dedication.

    et_16_winner

    Get ready to see more and more pictures of Victor and Frank as Cosmos Laundromat takes over SIGGRAPH 2016!



    Google Expeditions – Education in VR

    By: Mike Pan, Lead Artist at Vida Systems

    The concept of virtual reality has been around for many decades now. However, it is only in the last few years that technology has matured enough for VR to really take off. At Vida Systems, we have been at the forefront of this VR resurgence every step of the way.

    vida_16_Vida

    Vida Systems had the amazing opportunity to work with Google on their Expeditions project. Google Expeditions is a VR learning experience designed for classrooms. With a simple smartphone and a Cardboard viewer, students can journey to far-away places and feel completely immersed in the environment. This level of immersion not only delights the students, it actually helps learning as they are able to experience places in a much more tangible way.

    vida_16_Landscape

    To fulfill the challenge of creating stunning visuals, we rely on Blender and the Cycles rendering engine. First, each topic is carefully researched. Then the 3D artists work to create a scene based on the layout set by the designer. With Cycles, it is incredibly easy to create photorealistic artwork in a short period of time. Lighting, shading and effects can all be done with realtime preview.

    vida_16_Blender

    With the built-in VR rendering features, including stereo camera support and an equirectangular panoramic camera, we can render the entire scene with one click and deliver the image without stitching or resampling, saving us valuable time.

    vida_16_Historical

    For VR, the image needs to be noise-free, in stereo, and high resolution. Combining all 3 factors means our rendering time for a 4K by 4K frame is 8 times longer than a traditional 1080p frame. With two consumer-grade GPUs working together (980Ti and 780), Cycles was able to crunch through most of our scenes in under 3 hours per frame.

    Working in VR has some limitations. The layout has to follow real-world scales, otherwise it would look odd in 3D. It is also more demanding to create the scene, as everything has to look good from every angle. We also spent a lot of time on the details. The images had to stand up to scrutiny: any imperfection would be readily visible due to the level of immersion offered by VR.

    vida_16_Zoom

    For this project, we tackled a huge variety of topics, ranging from geography to anatomy. This was only possible thanks to the four spectacular artists we have: Felipe Torrents, Jonathan Sousa de Jesus, Diego Gangl and Greg Zaal.

    vida_16_Bones

    vida_16_Others

    Our work can be seen in the Google Expeditions app available for Android.

    On blender.org we are always looking for inspiring user stories! Share yours with foundation@blender.org.

    Follow us on Twitter or Facebook to get the latest user stories!

    July 06, 2016

    GIMP at Texas LinuxFest

    I'll be at Texas LinuxFest in Austin, Texas this weekend. Friday, July 8 is the big day for open source imaging: first a morning Photo Walk led by Pat David, from 9-11, after which Pat, an active GIMP contributor and the driving force behind the PIXLS.US website and discussion forums, gives a talk on "Open Source Photography Tools". Then after lunch I'll give a GIMP tutorial. We may also have a Graphics Hackathon/Q&A session to discuss all the open-source graphics tools in the last slot of the day, but that part is still tentative. I'm hoping we can get some good discussion especially among the people who go on the photo walk.

    Lots of interesting-looking talks on Saturday, too. I've never been to Texas LinuxFest before: it's a short conference, just two days, but they're packing a lot into those two days, and it looks like it'll be a lot of fun.

    July 05, 2016

    Flatpak and GNOME Software

    I wanted to write a little about how Flatpak apps are treated differently to packages in GNOME Software. We’ve now got two plugins in master, one called flatpak-user and another called flatpak-system. They both share 99% of the same code, only differing in how they are initialised. As you might expect, -user does per-user installation and updating, and the latter does it per-system for all users. Per-user applications that are specific to just a single user account are an amazingly useful concept, as most developers who have used tools like jhbuild have found. At the moment we default to installing software for all users, but there is actually a org.gnome.software.install-bundles-system-wide dconf key that can be used to reverse this on specific systems.

    We go to great lengths to interoperate with the flatpak command line tool, so if you install the nightly GTK3 build of GIMP per-user you can install the normal version system-wide and they both show in the installed and updates panel without conflicting. We’ve also got file notifications set up so GNOME Software shows the correct application state straight away if you add a remote or install a flatpak app on the command line. At the moment we show both packages and flatpaks in the search results, but when we suggest apps on the overview page we automatically prefer the flatpak version if both are available. In Ubuntu, snappy results are sorted above package results unconditionally, but I don’t know if this is a good thing to do for flatpaks upstream, comments welcome. I’m sure whatever defaults I choose will mortally offend someone.

    Screenshot from 2016-07-05 14-45-35

    GNOME Software also supports single-file flatpak bundles like gimp.flatpak – just double click and you’re good to install. These files are somewhat like a package in that all the required files are included and you can install without internet access. These bundles can also install a remote (i.e. a reference to a flatpak repository) too, which allows them to be kept up to date. Such per-application remotes are only used for the specific application and not the others potentially in the same tree (for the curious, this is called a “noenumerate” remote). We also support the more rarely seen dummy.flatpakrepo files too; these allow a user to install a remote which could contain a number of applications, and make it very easy to set up an add-on remote that allows you to browse a different set of apps than those shipped by default, for instance the Endless-specific apps. Each of these files contains all the metadata we need in AppStream format, with translations, icons and all the things you expect from a modern software center. It’s a shame snappy decided not to use AppStream and AppData for application metadata, as this kind of extra data really makes the UI beautiful.
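    Reading one of these .flatpakrepo keyfiles yourself is just as easy. Here is a small Python sketch (gnome-software itself uses GKeyFile in C); the filename is made up, and I’m assuming the usual [Flatpak Repo] group with Title and Url keys:

    from configparser import ConfigParser

    # interpolation=None so '%' characters in values are left alone
    parser = ConfigParser(interpolation=None)
    parser.read("example.flatpakrepo")

    repo = parser["Flatpak Repo"]
    print("Title:", repo.get("Title"))
    print("Url:  ", repo.get("Url"))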

    Screenshot from 2016-07-05 14-54-18

    With the latest version of flatpak we also do a much better job of installing the additional extensions the application needs, for instance locales or debug data. Sharing the same code between the upstream command line tool and gnome-software means we always agree on what needs installing and updating. Just like the CLI, gnome-software can update flatpaks safely live (even when the application is running), although we do a little bit extra compared to the CLI and download the data we need to do the update when the session is idle and on suitable unmetered network access. This means you can typically just click the ‘Update’ button in the updates panel for a near-instant live-update. This is what people have wanted for years, and I’ve told each and every bug reporter that live updates using packages only work 99.99% of the time, exploding in a huge fireball 0.01% of the time. Once all desktop apps are packaged as flatpaks we will only need to reboot for atomic offline updates of core platform components like a new glibc or the kernel. That future is very nearly now.

    Screenshot from 2016-07-05 14-54-59

    darktable 2.0.5 released

    we're proud to announce the fifth bugfix release for the 2.0 series of darktable, 2.0.5!

    the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.0.5.

    as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksum is:

    $ sha256sum darktable-2.0.5.tar.xz
    898b71b94e7ef540eb1c87c829daadc8d8d025b1705d4a9471b1b9ed91b90a02 darktable-2.0.5.tar.xz
    $ sha256sum darktable-2.0.5.dmg
    e0ae0e5e19771810a80d6851e022ad5e51fb7da75dcbb98d96ab5120b38955fd  darktable-2.0.5.dmg

    and the changelog as compared to 2.0.4 can be found below.

    New Features

    • Add geolocation to watermark variables

    Bugfixes

    • Mac: bugfix + build fix
    • Lua: fixed dt.collection not working
    • Fix softproofing with some internal profiles
    • Fix non-working libsecret pwstorage backend
    • Fixed a few issues within (rudimentary) lightroom import
    • Some fixes related to handling of duplicates and/or tags

    Base Support

    • Canon EOS 80D (no mRAW/sRAW support!)

    White Balance Presets

    • Canon EOS 80D

    Noise Profiles

    • Canon EOS 80D

    Translations Updates

    • Danish
    • German
    • Slovak

    July 04, 2016

    Texas Linux Fest 2016


    Texas Linux Fest 2016

    Everything's Bigger in Texas!

    While in London this past April I got a chance to hang out a bit with LWN.net editor and fellow countryman, Nathan Willis. (It sounds like the setup for a bad joke: “An Alabamian and Texan meet in a London pub…”). Which was awesome because even though we were both at LGM2014, we never got a chance to sit down and chat.

    So it was super-exciting for me to hear from Nate about possibly doing a photowalk and Free Software photo workshop at the 2016 Texas Linux Fest, and as soon as I cleared it with my boss, I agreed!

    Dot at LGM 2014 My Boss

    So… mosey on down to Austin, Texas on July 8-9 for Texas Linux Fest and join Akkana Peck and myself for a photowalk first thing in the morning on Friday (July 8), to be immediately followed by workshops from both of us. I’ll be talking about Free Software photography workflows and projects and Akkana will be focusing on a GIMP workshop.

    This is part of a larger “Open Graphics” track on the entire first day that also includes Ted Gould creating technical diagrams using Inkscape, Brian Beck doing a Blender tutorial, and Jonathon Thomas showing off OpenShot 2.0. You can find the full schedule on their website.

    I hope to see some of you there!

    July 03, 2016

    Midsummer Nature Notes from Traveling

    A few unusual nature observations noticed over the last few weeks ...

    First, on a trip to Washington DC a week ago (my first time there). For me, the big highlight of the trip was my first view of fireflies -- bright green ones, lighting once or twice then flying away, congregating over every park, lawn or patch of damp grass. What fun!

    Predatory grackle

    [grackle]

    But the unusual observation was around mid-day, on the lawn near the Lincoln Memorial. A grackle caught my attention as it flashed by me -- a male common grackle, I think (at least, it was glossy black, relatively small and with only a moderately long tail).

    It turned out it was chasing a sparrow, which was dodging and trying to evade, but unsuccessfully. The grackle made contact, and the sparrow faltered, started to flutter to the ground. But the sparrow recovered and took off in another direction, the grackle still hot on its tail. The grackle made contact again, and again the sparrow recovered and kept flying. But the third hit was harder than the other two, and the sparrow went down maybe fifteen or twenty feet away from me, with the grackle on top of it.

    The grackle mantled over its prey like a hawk and looked like it was ready to begin eating. I still couldn't quite believe what I'd seen, so I stepped out toward the spot, figuring I'd scare the grackle away and I'd see if the sparrow was really dead. But the grackle had its eye on me, and before I'd taken three steps, it picked up the sparrow in its bill and flew off with it.

    I never knew grackles were predatory, much less capable of killing other birds on the wing and flying off with them. But a web search on grackles killing birds got quite a few hits about grackles killing and eating house sparrows, so apparently it's not uncommon.

    Daytime swarm of nighthawks

    Then, on a road trip to visit friends in Colorado, we had to drive carefully past the eastern slope of San Antonio Mountain as a flock of birds wheeled and dove across the road. From a distance it looked like a flock of swallows, but as we got closer we realized they were far larger. They turned out to be nighthawks -- at least fifty of them, probably considerably more. I've heard of flocks of nighthawks swarming around the bugs attracted to parking lot streetlights. And I've seen a single nighthawk, or occasionally two, hawking in the evenings from my window at home. But I've never seen a flock of nighthawks during the day like this. An amazing sight as they swoop past, just feet from the car's windshield.

    Flying ants

    [Flying ant courtesy of Jen Macke]

    Finally, the flying ants. The stuff of a bad science fiction movie! Well, maybe if the ants were 100 times larger. For now, just an interesting view of the natural world.

    Just a few days ago, Jennifer Macke wrote a fascinating article in the PEEC Blog, "Ants Take Wing!" letting everyone know that this is the time of year for ants to grow wings and fly. (Jen also showed me some winged lawn ants in the PEEC ant colony when I was there the day before the article came out.) Both males and females grow wings; they mate in the air, and then the newly impregnated females fly off, find a location, shed their wings (leaving a wing scar you can see if you have a strong enough magnifying glass) and become the queen of a new ant colony.

    And yesterday morning, as Dave and I looked out the window, we saw something swarming right below the garden. I grabbed a magnifying lens and rushed out to take a look at the ones emerging from the ground, and sure enough, they were ants. I saw only black ants. Our native harvester ants -- which I know to be common in our yard, since I've seen the telltale anthills surrounded by a large bare area where they clear out all vegetation -- have sexes of different colors (at least when they're flying): females are red, males are black. These flying ants were about the size of harvester ants but all the ants I saw were black. I retreated to the house and watched the flights with binoculars, hoping to see mating, but all the flyers I saw seemed intent on dispersing. Either these were not harvester ants, or the females come out at a different time from the males. Alas, we had an appointment and had to leave so I wasn't able to monitor them to check for red ants. But in a few days I'll be watching for ants that have lost their wings ... and if I find any, I'll try to identify queens.

    June 29, 2016

    Color Manipulation with the Colour Checker LUT Module


    Color Manipulation with the Colour Checker LUT Module

    hanatos tinkering in darktable again...

    I was lucky to get to spend some time in London with the darktable crew. Being the wonderful nerds they are, they were constantly working on something while we were there. One of the things that Johannes was working on was the colour checker module for darktable.

    Having recently acquired a Fuji camera, he was working on matching color styles from the built-in rendering on the camera. Here he presents some of the results of what he was working on.

    This was originally published on the darktable blog, and is being republished here with permission. —Pat


    motivation

    for raw photography there exist great presets for nice colour rendition:

    unfortunately these are eat-it-or-die canned styles or icc lut profiles. you have to apply them and be happy or tweak them with other tools. but can we extract meaning from these presets? can we have understandable and tweakable styles like these?

    in a first attempt, i used a non-linear optimiser to control the parameters of the modules in darktable’s processing pipeline and try to match the output of such styles. while this worked reasonably well for some of pat’s film luts, it failed completely on canon’s picture styles. it was very hard to reproduce generic colour-mapping styles in darktable without parametric blending.

    that is, we require a generic colour to colour mapping function. this should be equally powerful as colour look up tables, but enable us to inspect it and change small aspects of it (for instance only the way blue tones are treated).

    overview

    in git master, there is a new module to implement generic colour mappings: the colour checker lut module (lut: look up table). the following will be a description how it works internally, how you can use it, and what this is good for.

    in short, it is a colour lut that remains understandable and editable. that is, it is not a black-box look up table, but you get to see what it actually does and change the bits that you don’t like about it.

    the main use cases are precise control over source colour to target colour mapping, as well as matching in-camera styles that process raws to jpg in a certain way to achieve a particular look. an example of this are the fuji film emulation modes. to this end, we will fit a colour checker lut to achieve their colour rendition, as well as a tone curve to achieve the tonal contrast.

    target

    to create the colour lut, it is currently necessary to take a picture of an it8 target (well, technically we support any similar target, but didn’t try them yet so i won’t really comment on it). this gives us a raw picture with colour values for a few colour patches, as well as an in-camera jpg reference (in the raw thumbnail..), and measured reference values (what we know it should look like).

    to map all the other colours (that fell in between the patches on the chart) to meaningful output colours, too, we will need to interpolate this measured mapping.

    theory

    we want to express a smooth mapping from input colours \(\mathbf{s}\) to target colours \(\mathbf{t}\), defined by a couple of sample points (which will in our case be the 288 patches of an it8 chart).

    the following is a quick summary of what we implemented; it is much better described in JP’s siggraph course [0].

    radial basis functions

    radial basis functions are a means of interpolating between sample points via

    $$f(x) = \sum_i c_i\cdot\phi(| x - s_i|),$$

    with some appropriate kernel \(\phi(r)\) (we’ll get to that later) and a set of coefficients \(c_i\) chosen to make the mapping \(f(x)\) behave like we want it at and in between the source colour positions \(s_i\). now to make sure the function actually passes through the target colours, i.e. \(f(s_i) = t_i\), we need to solve a linear system. because we want the function to take on a simple form for simple problems, we also add a polynomial part to it. this makes sure that black and white profiles turn out to be black and white and don’t oscillate around zero saturation colours wildly. the system is

    $$ \left(\begin{array}{cc}A &P\\P^t & 0\end{array}\right) \cdot \left(\begin{array}{c}\mathbf{c}\\\mathbf{d}\end{array}\right) = \left(\begin{array}{c}\mathbf{t}\\0\end{array}\right)$$

    where

    $$ A=\left(\begin{array}{ccc} \phi(r_{00})& \phi(r_{10})& \cdots \\ \phi(r_{01})& \phi(r_{11})& \cdots \\ \phi(r_{02})& \phi(r_{12})& \cdots \\ \cdots & & \cdots \end{array}\right),$$

    and \(r_{ij} = | s_i - s_j |\) is the distance (CIE 76 \(\Delta\)E, \(\sqrt{(L_i - L_j)^2 + (a_i - a_j)^2 + (b_i - b_j)^2}\) ) between source colours \(s_i\) and \(s_j\), in our case

    $$P=\left(\begin{array}{cccc} L_{s_0}& a_{s_0}& b_{s_0}& 1\\ L_{s_1}& a_{s_1}& b_{s_1}& 1\\ \cdots \end{array}\right)$$

    is the polynomial part, and \(\mathbf{d}\) are the coefficients to the polynomial part. these are here so we can for instance easily reproduce \(t = s\) by setting \(\mathbf{d} = (1, 1, 1, 0)\) in the respective row. we will need to solve this system for the coefficients \(\mathbf{c}=(c_0,c_1,\cdots)^t\) and \(\mathbf{d}\).

    many options will do the trick and solve the system here. we use singular value decomposition in our implementation. one advantage is that it is robust against singular matrices as input (accidentally map the same source colour to different target colours for instance).

    thin plate splines

    we didn’t yet define the radial basis function kernel. it turns out so-called thin plate splines have very good behaviour in terms of low oscillation/low curvature of the resulting function. the associated kernel is

    $$\phi(r) = r^2 \log r.$$

    note that there is a similar functionality in gimp as a gegl colour mapping operation (which i believe is using a shepard-interpolation-like scheme).
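    to make the above concrete, here is a small numpy sketch (not the darktable implementation) that builds and solves the block system with the thin plate spline kernel, using an svd-backed least-squares solve. src and tgt are hypothetical (n, 3) arrays of Lab source/target colours, e.g. the measured it8 patches:

    import numpy as np

    def tps_kernel(r):
        # phi(r) = r^2 log r, with phi(0) = 0
        return np.where(r > 0, r**2 * np.log(np.maximum(r, 1e-12)), 0.0)

    def fit_colour_lut(src, tgt):
        n = len(src)
        r = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)   # pairwise delta E 76
        A = tps_kernel(r)                                    # (n, n) rbf part
        P = np.hstack([src, np.ones((n, 1))])                # (n, 4) polynomial part
        M = np.vstack([np.hstack([A, P]),
                       np.hstack([P.T, np.zeros((4, 4))])])  # full block system
        rhs = np.vstack([tgt, np.zeros((4, tgt.shape[1]))])
        coeffs, *_ = np.linalg.lstsq(M, rhs, rcond=None)     # svd under the hood
        return coeffs[:n], coeffs[n:]                        # c (rbf) and d (polynomial)

    def apply_lut(x, src, c, d):
        r = np.linalg.norm(x[:, None, :] - src[None, :, :], axis=-1)
        return tps_kernel(r) @ c + np.hstack([x, np.ones((len(x), 1))]) @ d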

    creating a sparse solution

    we will feed this system with 288 patches of an it8 colour chart. that means, with the added four polynomial coefficients, we have a total of 292 source/target colour pairs to manage here. apart from performance issues when executing the interpolation, we didn’t want that to show up in the gui like this, so we were looking to reduce this number without introducing large error.

    indeed this is possible, and literature provides a nice algorithm to do so, which is called orthogonal matching pursuit [1].

    this algorithm will select the most important handful of coefficients \(\in \mathbf{c},\mathbf{d}\), to keep the overall error low. in practice we run it up to a predefined number of patches (\(24=6\times 4\) or \(49=7\times 7\)), to make best use of gui real estate.
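    as a very rough illustration only (darktable implements the algorithm from [1] itself), scikit-learn's orthogonal matching pursuit can stand in to pick a fixed number of non-zero coefficients from the system built above, one output channel at a time:

    from sklearn.linear_model import OrthogonalMatchingPursuit

    def sparse_coefficients(M, rhs_channel, n_patches=24):
        # M is the (n+4, n+4) system matrix, rhs_channel one column of the target
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_patches + 4)
        omp.fit(M, rhs_channel)
        return omp.coef_   # mostly zero; the surviving entries mark the kept patches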

    the colour checker lut module

    clut-iop

    gui elements

    when you select the module in darkroom mode, it should look something like the image above (configurations with more than 24 patches are shown in a 7\(\times\)7 grid instead). by default, it will load the 24 patches of a colour checker classic and initialise the mapping to identity (no change to the image).

    • the grid shows a list of coloured patches. the colours of the patches are the source points \(\mathbf{s}\).
    • the target colour \(t_i\) of the selected patch \(i\) is shown as offset controlled by sliders in the ui under the grid of patches.
    • an outline is drawn around patches that have been altered, i.e. the source and target colours differ.
    • the selected patch is marked with a white square, and the number shows in the combo box below.

    interaction

    to interact with the colour mapping, you can change both source and target colours. the main use case is to change the target colours however, and start with an appropriate palette (see the presets menu, or download a style somewhere).

    • you can change lightness (L), green-red (a), blue-yellow (b), or saturation (C) of the target colour via sliders.
    • select a patch by left clicking on it, or using the combo box, or using the colour picker
    • to change source colour, select a new colour from your image by using the colour picker, and shift-left-click on the patch you want to replace.
    • to reset a patch, double-click it.
    • right-click a patch to delete it.
    • shift-left-click on empty space to add a new patch (with the currently picked colour as source colour).

    example use cases

    example 1: dodging and burning with the skin tones preset

    to process the following image i took of pat in the overground, i started with the skin tones preset in the colour checker module (right click on nothing in the gui or click on the icon with the three horizontal lines in the header and select the preset).

    then, i used the colour picker (little icon to the right of the patch# combo box) to select two skin tones: very bright highlights and dark shadow tones. the former i dragged the brightness down a bit, the latter i brightened up a bit via the lightness (L) slider. this is the result:

    original dialed down contrast in skin tones

    example 2: skin tones and eyes

    in this image, i started with the fuji classic chrome-like style (see below for a download link), to achieve the subdued look in the skin tones. then, i picked the iris colour and saturated this tone via the saturation slider.

    as a side note, the flash didn’t fire in this image (iso 800) so i needed to stop it up by 2.5ev and the rest is all natural lighting..

    original
    +2.5ev classic chrome saturated eyes

    use darktable-chart to create a style

    as a starting point, i matched a colour checker lut interpolation function to the in-camera processing of fuji cameras. these have the names of old film and generally do a good job at creating pleasant colours. this was done using the darktable-chart utility, by matching raw colours to the jpg output (both in Lab space in the darktable pipeline).

    here is the link to the fuji styles, and how to use them. i should be doing pat’s film emulation presets with this, too, and maybe styles from other cameras (canon picture styles?). darktable-chart will output a dtstyle file, with the mapping split into tone curve and colour checker module. this allows us to tweak the contrast (tone curve) in isolation from the colours (lut module).

    these styles were created with the X100T model, and reportedly they work so-so with different camera models. the idea is to create a Lab-space mapping which is well configured for all cameras. but apparently there may be sufficient differences between the output of different cameras after applying their colour matrices (after all these matrices are just an approximation of the real camera to XYZ mapping).

    so if you’re really after maximum precision, you may have to create the styles yourself for your camera model. here’s how:

    step-by-step tutorial to match the in-camera jpg engine

    note that this is essentially similar to pascal’s colormatch script, but will result in an editable style for darktable instead of a fixed icc lut.

    • need an it8 (sorry, could lift that, maybe, similar to what we do for basecurve fitting)

    • shoot the chart with your camera:

      • shoot raw + jpg
      • avoid glare and shadow and extreme angles, potentially the rims of your image altogether
      • shoot a lot of exposures, try to match L=92 for G00 (or look that up in your it8 description)
    • develop the images in darktable:

      • lens and vignetting correction needed on both or on neither of raw + jpg
      • (i calibrated for vignetting, see lensfun)
      • output colour space to Lab (set the secret option in darktablerc: allow_lab_output=true)
      • standard input matrix and camera white balance for the raw, srgb for jpg.
      • no gamut clipping, no basecurve, no anything else.
      • maybe do perspective correction and crop the chart
      • export as float pfm
    • darktable-chart

      • load the pfm for the raw image and the jpg target in the second tab
      • drag the corners to make the mask match the patches in the image
      • maybe adjust the security margin using the slider in the top right, to avoid stray colours being blurred into the patch readout
      • you need to select the gray ramp in the combo box (not auto-detected)
      • export csv
    darktable-lut-tool-crop-01 darktable-lut-tool-crop-02 darktable-lut-tool-crop-03 darktable-lut-tool-crop-04

    edit the csv in a text editor and manually add two fixed fake patches HDR00 and HDR01:

    name;fuji classic chrome-like
    description;fuji classic chrome-like colorchecker
    num_gray;24
    patch;L_source;a_source;b_source;L_reference;a_reference;b_reference
    A01;22.22;13.18;0.61;21.65;17.48;3.62
    A02;23.00;24.16;4.18;26.92;32.39;11.96
    ...
    HDR00;100;0;0;100;0;0
    HDR01;200;0;0;200;0;0
    ...
    

    this is to make sure we can process high-dynamic range images and not destroy the bright spots with the lut. this is needed since the it8 does not deliver any information out of the reflective gamut and for very bright input. to fix wide gamut input, it may be needed to enable gamut clipping in the input colour profile module when applying the resulting style to an image with highly saturated colours. darktable-chart does that automatically in the style it writes.

    • fix up style description in csv if you want
    • run darktable-chart --csv
    • outputs a .dtstyle with everything properly switched off, and two modules on: colour checker + tonecurve in Lab

    fitting error

    when processing the list of colour pairs into a set of coefficients for the thin plate spline, the program will output the approximation error, indicated by average and maximum CIE 76 \(\Delta\)E for the input patches (the it8 in the examples here). of course we don’t know anything about colours which aren’t represented in the patch. the hope would be that the sampling is dense enough for all intents and purposes (but nothing is holding us back from using a target with even more patches).

    for the fuji styles, these errors are typically in the range of mean \(\Delta E\approx 2\) and max \(\Delta E \approx 10\) for 24 patches and a bit less for 49. unfortunately the error does not decrease very fast in the number of patches (and will of course drop to zero when using all the patches of the input chart).

    provia 24:rank 28/24 avg DE 2.42189 max DE 7.57084
    provia 49:rank 53/49 avg DE 1.44376 max DE 5.39751
    
    astia-24:rank 27/24 avg DE 2.12006 max DE 10.0213
    astia-49:rank 52/49 avg DE 1.34278 max DE 7.05165
    
    velvia-24:rank 27/24 avg DE 2.87005 max DE 16.7967
    velvia-49:rank 53/49 avg DE 1.62934 max DE 6.84697
    
    classic chrome-24:rank 28/24 avg DE 1.99688 max DE 8.76036
    classic chrome-49:rank 53/49 avg DE 1.13703 max DE 6.3298
    
    mono-24:rank 27/24 avg DE 0.547846 max DE 3.42563
    mono-49:rank 52/49 avg DE 0.339011 max DE 2.08548
    

    future work

    it is possible to match the reference values of the it8 instead of a reference jpg output, to calibrate the camera more precisely than the colour matrix would.

    • there is a button for this in the darktable-chart tool
    • needs careful shooting, to match brightness of reference value closely.
    • at this point it’s not clear to me how white balance should best be handled here.
    • need reference reflectances of the it8 (wolf faust ships some for a few illuminants).

    another next step we would like to take with this is to match real film footage (portra etc). both reference and film matching will require some global exposure calibration though.

    references

    • [0] Ken Anjyo and J. P. Lewis and Frédéric Pighin, “Scattered data interpolation for computer graphics” in Proceedings of SIGGRAPH 2014 Courses, Article No. 27, 2014. pdf
    • [1] J. A. Tropp and A. C. Gilbert, “Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit”, in IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655-4666, Dec. 2007.

    Tue 2016/Jun/28

    • La Mapería

      It is Hack Week at SUSE, and I am working on La Mapería (the map store), a little program to generate beautiful printed maps from OpenStreetMap data.

      I've gotten to the point of having something working: the tool downloads rendered map tiles, assembles them with Cairo as a huge PDF surface, centers the map on a sheet of paper, and prints nice margins and a map scale. This was harder for me than it looks: I am pretty good at dealing with pixel coordinates and transformations, but a total newbie with geodetic calculations, geographical coordinate conversions, and thinking in terms of a physical map scale instead of just a DPI and a paper size.

      Printed map Printed map 2

      The resulting chart has a map and a frame with arc-minute markings, and a map scale rule. I want to have a 1-kilometer UTM grid if I manage to wrap my head around map projections.

      Coordinates and printed maps

      The initial versions of this tool evolved in an interesting way. Assembling a map from map tiles is basically this:

      1. Figure out the tile numbers for the tiles in the upper-left and the lower-right corners of the map.
      2. Composite each tile into a large image, like a mosaic.

      The first step is pretty easy if you know the (latitude, longitude) of the corners: the relevant conversion from coordinates to tile numbers is in the OpenStreetMap wiki. The second step is just two nested for() loops that paste tile images onto a larger image.
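      For reference, this is the standard conversion from the OpenStreetMap wiki ("Slippy map tilenames"); the example corner coordinates match the ones used in the command line further down:

      import math

      def deg2num(lat_deg, lon_deg, zoom):
          lat_rad = math.radians(lat_deg)
          n = 2 ** zoom
          xtile = int((lon_deg + 180.0) / 360.0 * n)
          ytile = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
          return xtile, ytile

      print(deg2num(19.5, -97.0, 15))          # upper-left corner (19d30m, -97d)
      print(deg2num(19.3667, -96.7833, 15))    # lower-right corner (19d22m, -96d47m)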

      When looking at a web map, it's reasonably easy to find the coordinates for each corner. However, I found that printed maps want one to think in different terms. The map scale corresponds to the center of the map (it changes slightly towards the corners, due to the map's projection). So, instead of thinking of "what fits inside the rectangle given by those corners", you have to think in terms of "how much of the map will fit given your paper size and the map scale... around a center point".

      So, my initial tool looked like

      python3 make-map.py
              --from-lat=19d30m --from-lon=-97d
              --to-lat=19d22m --to-lon=-96d47m
              --output=output.png

      and then I had to manually scale that image to print it at the necessary DPI for a given map scale (1:50,000). This was getting tedious. It took me a while to convert the tool to think in terms of these:

      • Paper size and margins
      • Coordinates for the center point of the map
      • Printed map scale

      Instead of providing all of these parameters in the command line, the program now takes a little JSON configuration file.

      La Mapería generates a PDF or an SVG (for tweaking with Inkscape before sending it off to a printing bureau). It draws a nice frame around the map, and clips the map to the frame's dimensions.

      La Mapería is available on github. It may or may not work out of the box right now; it includes my Mapbox access token — it's public — but I really would like to avoid people eating my Mapbox quota. I'll probably include the map style data with La Mapería's source code so that people can create their own Mapbox accounts.

      Over the rest of the week I will be documenting how to set up a Mapbox account and a personal TileStache cache to avoid downloading tiles repeatedly.

    June 26, 2016

    How to un-deny a host blocked by denyhosts

    We had a little crisis Friday when our server suddenly stopped accepting ssh connections.

    The problem turned out to be denyhosts, a program that looks for things like failed login attempts and blacklists IP addresses.

    But why was our own IP blacklisted? It was apparently because I'd been experimenting with a program called mailsync, which used to be a useful program for synchronizing IMAP folders with local mail folders. But at least on Debian, it has broken in a fairly serious way, so that it makes three or four tries with the wrong password before it actually uses the right one that you've configured in .mailsync. These failed logins are a good way to get yourself blacklisted, and there doesn't seem to be any way to fix mailsync or the c-client library it uses under the covers.

    Okay, so first, stop using mailsync. But then how to get our IP off the server's blacklist? Just editing /etc/hosts.deny didn't do it -- the IP reappeared there a few minutes later.

    A web search found lots of solutions -- you have to edit a long list of files, but no two articles had the same file list. It appears that it's safest to remove the IP from every file in /var/lib/denyhosts.

    So here are the step by step instructions.

    First, shut off the denyhosts service:

    service denyhosts stop
    

    Go to /var/lib/denyhosts/ and grep for any file that includes your IP:

    grep aa.bb.cc.dd *
    

    (If you aren't sure what your IP is as far as the outside world is concerned, Googling what's my IP will helpfully tell you, as well as giving you a list of other sites that will also tell you.)

    Then edit each of these files in turn, removing your IP from them (it will probably be at the end of the file).

    When you're done with that, you have one more file to edit: remove your IP from the end of /etc/hosts.deny

    You may also want to add your IP to /etc/hosts.allow, but it may not make much difference, and if you're on a dynamic IP it might be a bad idea since that IP will eventually be used by someone else.

    Finally, you're ready to re-start denyhosts:

    service denyhosts start
    

    Whew, un-blocked. And stay away from mailsync. I wish I knew of a program that actually worked to keep IMAP and mbox mailboxes in sync.

    June 23, 2016

    Siggraph 2016 Computer Animation Festival Selections

    We are proud to share the news that 3 films completely produced with Blender have been selected for the 43rd Computer Animation Festival to be celebrated in Anaheim, California, 24-28 July 2016! The films are Cosmos Laundromat (Blender Institute, directed by Mathieu Auvray), Glass Half (Blender Institute, directed by Beorn Leonard) and Alike (directed and produced by Daniel M. Lara and Rafa Cano).

    et_selection

    The films are going to be screened at the Electronic Theater, which is one of the highlights of the SIGGRAPH conference. SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research and it is an honour to see such films in the same venue where computer graphics has been pioneered for decades.

    Here you can see a trailer of the Animation Festival, where some shots of Cosmos Laundromat can be spotted.

    June 22, 2016

    Sharing is Caring


    Sharing is Caring

    Letting it all hang out

    It was always my intention to make the entire PIXLS.US website available under a permissive license. The content is already all licensed Creative Commons, By Attribution, Share-Alike (unless otherwise noted). I just hadn’t gotten around to actually posting the site source.

    Until now(ish). I say “ish“ because I apparently released the code back in April and am just now getting around to talking about it.

    Also, we finally have a category specifically for all those darktable weenies on discuss!

    Don’t Laugh

    I finally got around to pushing my code for this site up to Github on April 27 (I’m basing this off git logs because my memory is likely suspect). It took a while, but better late than never? I think part of the delay was a bit of minor embarrassment on my part for being so sloppy with the site code. In fact, I’m still embarrassed - so don’t laugh at me too hard (and if you do, at least don’t point while laughing too).

    Carrie White Brian De Palma’s interpretation of my fears…

    So really this post is just a reminder to anyone that was interested that this site is available on Github:

    https://github.com/pixlsus/

    In fact, we’ve got a couple of other repositories under the Github Organization PIXLS.US including this website, presentation assets, lighting diagram SVG’s, and more. If you’ve got a Github account or wanted to join in with hacking at things, by all means send me a note and we’ll get you added to the organization asap.

    Note: you don’t need to do anything special if you just want to grab the site code. You can do this quickly and easily with:

    git clone https://github.com/pixlsus/website.git

    You actually don’t even need a Github account to clone the repo, but you will need one if you want to fork it on Github itself, or to send pull-requests. You can also feel free to simply email/post patches to us as well:

    git format-patch testing --stdout > your_awesome_work.patch

    Being on Github means that we also now have an issue tracker to report any bugs or enhancements you’d like to see for the site.

    So no more excuses - if you’d like to lend a hand just dive right in! We’re all here to help! :)

    Speaking of Helping

    Speaking of which, I wanted to give a special shout-out to community member @paperdigits (Mica), who has been active in sharing presentation materials in the Presentations repo and has been actively hacking at the website. Mica’s recommendations and pull requests are helping to make the site code cleaner and better for everyone, and I really appreciate all the help (even if I am scared of change).

    Thank you, Mica! You rock!

    Those Stinky darktable People

    Yes, after member Claes asked the question on discuss about why we didn’t have a darktable category on the forums, I relented and created one. Normally I want to make sure that any category is going to have active people to maintain and monitor the topics there. I feel like having an empty forum can sometimes be detrimental to the perception of a project/community.

    darktable logo

    In this case, any topics in the darktable category will also show up in the more general Software category as well. This way the visibility and interactions are still there, but with the added benefit that we can now choose to see only darktable posts, ignore them, or let all those stinky users do what they want in there.

    Besides, now we can say that we’ve sufficiently appeased Morgan Hardwood‘s organizational needs…

    So, come on by and say hello in the brand new darktable category!

    June 21, 2016

    AAA game, indie game, card-board-box

    Early bird gets eaten by the Nyarlathotep
     
    The more adventurous of you can use those (designed as embeddable) Lua scripts to transform your DRM-free GOG.com downloads into Flatpaks.

    The long-term goal would obviously be for this not to be needed, and for online games stores to ship ".flatpak" files, with metadata so we know what things are in GNOME Software, which automatically picks up the right voice/subtitle language, and presents its extra music and documents in the respective GNOME applications.
     
    But in the meanwhile, and for the sake of the games already out there, there's flatpak-games. Note that lua-archive is still fiddly.
     
    Support for a few more Humble Bundle formats (some are already supported), for grab-all RPMs and Debs, and for those old Loki games is also planned.
     
    It's late here, I'll be off to do some testing I think :)

    PS: Even though I have enough programs that would fail to create bundles in my personal collection to accept "game donations", I'm still looking for original copies of Loki games. Drop me a message if you can spare one!

    Sharing Galore


    Sharing Galore

    or, Why This Community is Awesome

    Community member and RawTherapee hacker Morgan Hardwood brings us a great tutorial + assets from one of his strolls near the Söderåsen National Park (Sweden!). Ofnuts is apparently trying to get me to burn the forum down by sharing his raw file of a questionable subject. And after some bugging from me, David Tschumperlé managed to find a neat solution for generating a median (pixel) blend of a large number of images without making your computer throw itself out a window.

    So much neat content being shared for everyone to play with and learn from! Come see what everyone is doing!

    Old Oak - A Tutorial

    Sometimes you’re just hanging out minding your own business and talking photography with friends and other Free Software nuts when someone comes running by and drops a great tutorial in your lap. Just as Morgan Hardwood did on the forums a few days ago!

    Old Oak by Morgan Hardwood cbsa

    He introduces the image and post:

    There is an old oak by the southern entrance to the Söderåsen National Park. Rumor has it that this is the oak under which Gandalf sat as he smoked his pipe and penned the famous saga about J.R.R. Tolkien. I don’t know about that, but the valley rabbits sure love it.

    The image itself is a treat. I personally love images where the lighting does interesting things and there are some gorgeous things going on in this image. The diffused light flooding in under the canopy on the right with the edge highlights from the light filtering down make this a pleasure to look at.

    Of course, Morgan doesn’t stop there. You should absolutely go read his entire post. He not only walks through his entire thought process and workflow starting at his rationale for lens selection (50mm f/2.8) all the way through his corrections and post-processing choices. To top it all off, he has graciously shared his assets for anyone to follow along! He provides the raw file, the flat-field, a shot of his color target + DCP, and finally his RawTherapee .PP3 file with all of his settings! Whew!

    If you’re interested I urge you to go check out (and participate!) in his topic on the forums: Old Oak - A Tutorial.

    I Will Burn This Place to the Ground

    Speaking of sharing material, Ofnuts has decided that he apparently wants me to burn the forums to the ground, put the ashes in a spaceship, fly the spaceship into the sun, and to detonate the entire solar system into a singularity. Why do I say this?

    Kill It With Fire! Kill it with fire!

    Because he started a topic appropriately entitled: “NSFPAOA (Not Suitable for Pat and Other Arachnophobes)”, in which he shares his raw .CR2 file for everyone to try their hand at processing that cute little spider above. There have already been quite a few awesome interpretations from folks in the community like:

    CarVac Version A version by CarVac
    MLC Morgin Version By MLC/Morgin
    By Jonas Wagner By Jonas Wagner
    iarga By iarga
    by PkmX By PkmX
    by Kees Guequierre By Kees Guequierre

    Of course, I had a chance to try processing it as well. Here’s what I ended up with:

    Flames

    Ahhhh, just writing this post is a giant bag of NOPE*. If you’d like to join in on the fun(?) and share your processing as well - go check out the topic!

    Now let’s move on to something more cute and fuzzy, like an ALOT…

    * I kid, I’m not really an arachnophobe (within reason), but I can totally see why someone would be.

    Median Blending ALOT of Images with G’MIC

    Hyperbole and a Half ALOT The ALOT. Borrowed from Allie Brosh and here because I really wanted an excuse to include it.

    I count myself lucky to have so many smart friends that I can lean on to figure out or help me do things (more on that in the next post). One of those friends is G’MIC creator and community member David Tschumperlé.

    A few years back he helped me with some artwork I was generating with imagemagick at the time. I was averaging images together to see what an amalgamation would look like. For instance, here is what all of the Sports Illustrated swimsuit edition (NSFW) covers (through 2000) look like, all at once:

    Sport Illustrated Swimsuit Covers Through 2000

    A natural progression of this idea was to consider doing a median blend vs. mean. The problem is that a mean average is very easy and fast to calculate as you advance through the image stack, but the median is not. This is relevant because I began to look at these for videos (in particular music videos), where the image stack was 5,000+ images for a video easily (that is ALOT of frames!).

    It’s relatively easy to generate a running average for a series of numbers, but generating the median value requires that the entire stack of numbers be loaded and sorted. This makes it prohibitive to do on a huge number of images, particularly at HD resolutions.

    So it’s awesome that, yet again, David has found a solution to the problem! He explains it in greater detail on his topic:

    A guide about computing the temporal average/median of video frames with G’MIC

    He basically chops up the image frame into regions, then computes the pixel-median value for those regions. Here’s an example of his result:

    P!nk Try Mean/Median Mean/Median samples from P!nk - Try music video.
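    In numpy terms the trick looks roughly like this (a sketch of the chunking idea, not David’s actual G’MIC pipeline); load_band(i, y0, y1) is a hypothetical helper returning frame i cropped to rows y0..y1 as a float array:

    import numpy as np

    def median_blend(n_frames, height, width, band_height, load_band):
        out = np.empty((height, width, 3), dtype=np.float32)
        for y0 in range(0, height, band_height):
            y1 = min(y0 + band_height, height)
            # only this band of every frame is in memory at once,
            # so a 5,000+ frame stack stays manageable
            stack = np.stack([load_band(i, y0, y1) for i in range(n_frames)])
            out[y0:y1] = np.median(stack, axis=0)   # per-pixel median over the stack
        return out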

    Now I can start utilizing median blends more often in my experiments, and I’m quite sure folks will find other interesting uses for this type of blending!

    Sharing Galore


    Sharing Galore

    or, Why This Community is Awesome

    Community member and RawTherapee hacker Morgan Hardwood brings us a great tutorial + assets from one of his strolls near the Söderåsen National Park (Sweden!). Ofnuts is apparently trying to get me to burn the forum down by sharing his raw file of a questionable subject. After bugging David Tschumperlé he managed to find a neat solution to generating a median (pixel) blend of a large number of images without making your computer throw itself out a window.

    So much neat content being shared for everyone to play with and learn from! Come see what everyone is doing!

    Old Oak - A Tutorial

    Sometimes you’re just hanging out minding your own business and talking photography with friends and other Free Software nuts when someone comes running by and drops a great tutorial in your lap. Just as Morgan Hardwood did on the forums a few days ago!

Old Oak by Morgan Hardwood Old Oak by Morgan Hardwood cbsa

    He introduces the image and post:

    There is an old oak by the southern entrance to the Söderåsen National Park. Rumor has it that this is the oak under which Gandalf sat as he smoked his pipe and penned the famous saga about J.R.R. Tolkien. I don’t know about that, but the valley rabbits sure love it.

    The image itself is a treat. I personally love images where the lighting does interesting things and there are some gorgeous things going on in this image. The diffused light flooding in under the canopy on the right with the edge highlights from the light filtering down make this a pleasure to look at.

Of course, Morgan doesn’t stop there. You should absolutely go read his entire post. He walks through his entire thought process and workflow, from his rationale for lens selection (50mm f/2.8) all the way through his corrections and post-processing choices. To top it all off, he has graciously shared his assets for anyone to follow along! He provides the raw file, the flat-field, a shot of his color target + DCP, and finally his RawTherapee .PP3 file with all of his settings! Whew!

    If you’re interested I urge you to go check out (and participate!) in his topic on the forums: Old Oak - A Tutorial.

    I Will Burn This Place to the Ground

    Speaking of sharing material, Ofnuts has decided that he apparently wants me to burn the forums to the ground, put the ashes in a spaceship, fly the spaceship into the sun, and to detonate the entire solar system into a singularity. Why do I say this?

    Kill It With Fire! Kill it with fire!

    Because he started a topic appropriately entitled: “NSFPAOA (Not Suitable for Pat and Other Arachnophobes)”, in which he shares his raw .CR2 file for everyone to try their hand at processing that cute little spider above. There have already been quite a few awesome interpretations from folks in the community like:

    CarVac Version A version by CarVac
    MLC Morgin Version By MLC/Morgin
    By Jonas Wagner By Jonas Wagner
    iarga By iarga
    by PkmX By PkmX
    by Kees Guequierre By Kees Guequierre

    Of course, I had a chance to try processing it as well. Here’s what I ended up with:

    Flames

    Ahhhh, just writing this post is a giant bag of NOPE*. If you’d like to join in on the fun(?) and share your processing as well - go check out the topic!

    Now let’s move on to something more cute and fuzzy, like an ALOT…

    * I kid, I’m not really an arachnophobe (within reason), but I can totally see why someone would be.

    Median Blending ALOT of Images with G’MIC

    Hyperbole and a Half ALOT The ALOT. Borrowed from Allie Brosh and here because I really wanted an excuse to include it.

    I count myself lucky to have so many smart friends that I can lean on to figure out or help me do things (more on that in the next post). One of those friends is G’MIC creator and community member David Tschumperlé.

    A few years back he helped me with some artwork I was generating with imagemagick at the time. I was averaging images together to see what an amalgamation would look like. For instance, here is what all of the Sports Illustrated swimsuit edition (NSFW) covers (through 2000) look like, all at once:

    Sport Illustrated Swimsuit Covers Through 2000

A natural progression of this idea was to consider doing a median blend vs. a mean. The problem is that a mean average is very easy and fast to calculate as you advance through the image stack, but the median is not. This is relevant because I began to look at these for videos (in particular music videos), where the image stack easily reaches 5,000+ images per video (that is ALOT of frames!).

    It’s relatively easy to generate a running average for a series of numbers, but generating the median value requires that the entire stack of numbers be loaded and sorted. This makes it prohibitive to do on a huge number of images, particularly at HD resolutions.
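To make the memory difference concrete, here is a minimal numpy sketch of my own (it is not code from David's guide, and the frames iterable is just a stand-in for whatever decoder you use): a running mean needs only a single frame-sized accumulator and can consume frames as a stream, while a straight per-pixel median needs the whole stack in memory before it can sort anything.

    import numpy as np

    def running_mean(frames):
        # Mean blend: one frame-sized accumulator, frames can be streamed.
        acc, count = None, 0
        for frame in frames:              # any iterable of HxWx3 arrays
            f = frame.astype(np.float64)
            acc = f if acc is None else acc + f
            count += 1
        return acc / count

    def naive_median(frames):
        # Median blend: the whole stack must be in memory so it can be sorted.
        stack = np.stack(list(frames))    # shape (N, H, W, 3)
        return np.median(stack, axis=0)   # sorts along the frame axis

For a 5,000-frame HD video with 8-bit frames, the naive median needs roughly 5000 × 1920 × 1080 × 3 bytes of frame data, on the order of 30 GB, which is exactly why it is prohibitive.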

    So it’s awesome that, yet again, David has found a solution to the problem! He explains it in greater detail on his topic:

    A guide about computing the temporal average/median of video frames with G’MIC

    He basically chops up the image frame into regions, then computes the pixel-median value for those regions. Here’s an example of his result:

    P!nk Try Mean/Median Mean/Median samples from P!nk - Try music video.

    Now I can start utilizing median blends more often in my experiments, and I’m quite sure folks will find other interesting uses for this type of blending!
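For what it is worth, here is a rough sketch of that tiling idea as I understand it (my own illustration, with a hypothetical decode_tile function standing in for the actual G’MIC pipeline): by cutting each frame into regions, only one region's worth of the time-stack has to sit in memory while its per-pixel median is computed.

    import numpy as np

    def median_blend_tiled(decode_tile, n_frames, height, width, tile=128):
        # decode_tile(i, y0, y1, x0, x1) is assumed to return that crop of
        # frame i as a (y1-y0, x1-x0, 3) uint8 array.
        out = np.empty((height, width, 3), dtype=np.uint8)
        for y0 in range(0, height, tile):
            for x0 in range(0, width, tile):
                y1, x1 = min(y0 + tile, height), min(x0 + tile, width)
                # Only this tile's full time-stack lives in memory at once.
                stack = np.stack([decode_tile(i, y0, y1, x0, x1)
                                  for i in range(n_frames)])
                out[y0:y1, x0:x1] = np.median(stack, axis=0).astype(np.uint8)
        return out

The trade-off is that the video has to be decoded (or the tiles cached to disk) once per tile, trading time for memory.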

    June 18, 2016

    Cave 6" as a Quick-Look Scope

    I haven't had a chance to do much astronomy since moving to New Mexico, despite the stunning dark skies. For one thing, those stunning dark skies are often covered with clouds -- New Mexico's dramatic skyscapes can go from clear to windy to cloudy to hail or thunderstorms and back to clear and hot over the course of a few hours. Gorgeous to watch, but distracting for astronomy, and particularly bad if you want to plan ahead and observe on a particular night. The Pajarito Astronomers' monthly star parties are often clouded or rained out, as was the PEEC Nature Center's moon-and-planets star party last week.

    That sort of uncertainty means that the best bet is a so-called "quick-look scope": one that sits by the door, ready to be hauled out if the sky is clear and you have the urge. Usually that means some kind of tiny refractor; but it can also mean leaving a heavy mount permanently set up (with a cover to protect it from those thunderstorms) so it's easy to carry out a telescope tube and plunk it on the mount.

I have just that sort of scope sitting in our shed: an old, dusty Cave Astrola 6" Newtonian on an equatorial mount. My father got it for me on my 12th birthday. Where he got the money for such a princely gift -- we didn't have much in those days -- I never knew, but I cherished that telescope, and for years spent most of my nights in the backyard peering through the Los Angeles smog.

    Eventually I hooked up with older astronomers (alas, my father had passed away) and cadged rides to star parties out in the Mojave desert. Fortunately for me, parenting standards back then allowed a lot more freedom, and my mother was a good judge of character and let me go. I wonder if there are any parents today who would let their daughter go off to the desert with a bunch of strange men? Even back then, she told me later, some of her friends ribbed her -- "Oh, 'astronomy'. Suuuuuure. They're probably all off doing drugs in the desert." I'm so lucky that my mom trusted me (and her own sense of the guys in the local astronomy club) more than her friends.

    The Cave has followed me through quite a few moves, heavy, bulky and old fashioned as it is; even when I had scopes that were bigger, or more portable, I kept it for the sentimental value. But I hadn't actually set it up in years. Last week, I assembled the heavy mount and set it up on a clear spot in the yard. I dusted off the scope, cleaned the primary mirror and collimated everything, replaced the finder which had fallen out somewhere along the way, set it up ... and waited for a break in the clouds.

    [Hyginus Rille by Michael Karrer] I'm happy to say that the optics are still excellent. As I write this (to be posted later), I just came in from beautiful views of Hyginus Rille and the Alpine Valley on the moon. On Jupiter the Great Red Spot was just rotating out. Mars, a couple of weeks before opposition, is still behind a cloud (yes, there are plenty of clouds). And now the clouds have covered the moon and Jupiter as well. Meanwhile, while I wait for a clear view of Mars, a bat makes frenetic passes overhead, and something in the junipers next to my observing spot is making rhythmic crunch, crunch, crunch sounds. A rabbit chewing something tough? Or just something rustling in the bushes?

I just went out again, and now the clouds have briefly uncovered Mars. It's the first good look I've had at the Red Planet in years. (Tiny achromatic refractors really don't do justice to tiny, bright objects.) Mars is the most difficult planet to observe: Dave likes to talk about needing to get your "Mars eyes" trained for each Mars opposition, since they only come every two years. But even without my "Mars eyes", I had no trouble seeing the North pole with dark Acidalia enveloping it, and, in the south, the sinuous chain of Sinus Sabaeus, Meridiani, Margaritifer, and Mare Erythraeum. (I didn't identify any of these at the time; instead, I dusted off my sketch pad and sketched what I saw, then compared it with XEphem's Mars view afterward.)

    I'm liking this new quick-look telescope -- not to mention the childhood memories it brings back.

    June 17, 2016

    Appimages, Snaps, XDG-Apps^WFlatpaks

    Lots of excitement... When Canonical announced that their snaps work on a number of other Linux distributions, the reactions were predictable, sort of amusing and missing the point.

In the end, all this going back and forth, these are just turf wars. There are Redhat/Fedora people scared and horrified that Canonical/Ubuntu might actually set a standard for once, there are probably Canonical/Ubuntu people scared that they might not set a standard (though after several days of this netstorm, I haven't seen anything negative from their side), and there are traditional packagers worried that the world may change and that they'll lose their "curating" position.

And there's me, scared that I'll have to maintain debs, rpms, flatpaks, snaps, appimages, OSX bundles, MSI installers, NSIS installers and portable zips. My perspective is a bit that of an outsider: I don't care about the politics, though I do wish it weren't a dead certainty that we'll end up having both flatpaks (horrible name, by the way) and snaps in the Linux world.

Both the Canonical and the Fedora side claim to be working with the community, and, certainly, I was approached about snap and helped make a Krita snap. Which is a big win, both for me and for snap. But both projects ignore the appimage project, which is a real community effort, without corporate involvement. Probably because there is no way for companies to use appimage to create lock-in or a chance at monetization; it'll always be a community project, ignored by the big Linux companies.

Here's my take, speaking as someone who is actually releasing software to end users using some of these new-fangled systems.

The old rpm/deb way of packaging is excellent for creating the base system, and for software where having the latest version doesn't matter that much for productivity. It's a system that's been used for about twenty years and has served us reasonably well. But if you are developing software for end users that is regularly updated, where the latest version is important because it always has improvements that let the users do more work, it's a problem. It's a ghastly drag having to actually make the packages if you're not part of a distribution, and having to make packages for several distributions is not feasible for a small team. And if we don't, users on distributions that only backport bugfixes, not new releases, lose out.

    Snap turns out to be pretty easy to make, and pretty easy to upload to Ubuntu's app store, and pretty easy to find once it's there, seeing that there were already more than a thousand downloads after a few days. I don't care about the security technology, that's just not relevant for Krita. If you use Krita, you want it to access your files. It takes about five minutes to make a new snap and upload it -- pretty good going. I was amazed and pleased that the snap now runs on a number of other distributions, and if Canonical/Ubuntu follows up on that, plugs the holes and fixes the bugs, it'll be a big plus. Snap also offers all kinds of flexibility, like adding a patched Qt, that I haven't even tried yet. I also haven't checked how to add translations yet, but that's also because the system we use to release translations for Krita needs changing, and I want to do that first.

    I haven't got any experience with flatpak. I know there was a start on making a Krita flatpak, but I haven't seen any results. I think that the whole idea of a runtime, which is a dependency thing, is dumb, though. Sure, it'll save some disk space, but at the cost of added complexity. I don't want that. For flatpak, I'll strike a wait-and-see attitude: I don't see the need for it, but if it materializes, and takes as little of my time as snap, I might make them. Unless I need to install Fedora for it, because that's one Linux distribution that just doesn't agree with me.

    Appimages, finally, are totally amazing, because they run everywhere. They don't need any kind of runtime or installation. Creating the initial AppImage recipe took a lot of time and testing, mainly because of the run-everywhere requirement. That means fiddly work trying to figure out which low-level libraries need to be included to make OpenGL work, and which don't. There might be bumps ahead, for instance if we want to start using OpenCL -- or so I was told in a comment on LWN. I don't know yet. Integration with the desktop environment is something Simon is working on, by installing a .desktop file in the user's home directory. Sandboxing is also being worked on, using some of the same technology as flatpak, apparently. Automatic updates is also something that is becoming possible. I haven't had time to investigate those things yet, because of release pressures, kickstarter pressures and all that sort of thing. One possible negative about appimages is that users have a hard time understanding them -- they just cannot believe that download, make executable, go is all there's to it. So much so that I've considered making a tar.xz with an executable appimage inside so users are in a more familiar territory. Maybe even change the extension from .appimage to .exe?

    Anyway, when it comes to actually releasing software to end users in a way that doesn't drive me crazy, I love AppImages, I like snap, I hate debs, rpms, repositories, ppa's and their ilk and flatpak has managed to remain a big unknown. If we could get a third format to replace all the existing formats, say flatsnapimage, wouldn't that be lovely?

    Wouldn't it?

    June 16, 2016

    silverorange job opening: Back-end Web Developer

    Silverorange, the web design and development company where I work, is looking to hire another great back-end web developer. It’s a nice place to work.

    Translation parameters in angular-gettext

    As a general rule, I try not to include new features in angular-gettext: small is beautiful and for the most part I consider the project as finished. However, Ernest Nowacki just contributed one feature that was too good to leave out: translation parameters.

    To understand what translation parameters are, consider the following piece of HTML:

    <span translate>Last modified: {{post.modificationDate | date : 'yyyy-MM-dd HH:mm'}} by {{post.author}}.</span>
    

    The resulting string that needs to be handled by your translators is both ugly and hard to use:

    msgid "Last modified: {{post.modificationDate | date : 'yyyy-MM-dd HH:mm'}} by {{post.author}}."
    

    With translation parameters you can add local aliases:

    <span translate
          translate-params-date="post.modificationDate | date : 'yyyy-MM-dd HH:mm'"
          translate-params-author="post.author">
        Last modified: {{date}} by {{author}}.
    </span>
    

    With this, translators only see the following:

    msgid "Last modified: {{date}} by {{author}}."
    

    Simply beautiful.

    You’ll need angular-gettext v2.3.0 or newer to use this feature.

    More information in the documentation: https://angular-gettext.rocketeer.be/dev-guide/translate-params/.



    June 15, 2016

    Running Krita Snaps on Other Distributions

    This is pretty cool: in the week before the Krita release, Michael Hall submitted a snapcraft definition for making a Krita snap. A few iterations later, we have something that works (unless you're using an NVidia GPU with the proprietary drivers). Adding Krita to the Ubuntu app store was also really easy.

    And now, if you go to snapcraft.io and click on a Linux distribution's logo, you'll get instructions on how to get snap running on your system -- and that means the snap package for Krita can work on Arch, Debian, Fedora, Gentoo -- and Ubuntu of course. Pretty unbelievable! OpenSUSE is still missing though...

Of course, running a snap still means you need to install something before you can run Krita, while an AppImage doesn't need anything beyond making it executable. Over the past month, I've encountered a lot of Linux users who just couldn't believe it's so easy, and were asking for install instructions :-)

    June 11, 2016

    The 2016 Kickstarter

This year's kickstarter fundraising campaign for Krita was more nerve-wracking than the previous two editions. Although we ended up 135% funded, around the middle of the campaign we were almost afraid we wouldn't make it. Maybe only the release of Krita 3.0 turned the campaign around. Here's my chaotic and off-the-cuff analysis of this campaign.

    Campaign setup

    We were ambitious this year and once again decided upon two big goals: text and vector, because we felt both are real pain points in Krita that really need to be addressed. I think now that we probably should have made both into super-stretch goals one level above the 10,000 euro Python stretch goal and let our community decide.

Then we could have made the base level one stretch goal of 15,000 euros, and we'd have been "funded" on the second day, meeting the Kickstarter expectation that a successful campaign is funded immediately. Then we could have opened the paypal pledges really early into the campaign and advertised the option properly.

We also hadn't thought through some stretch goals in sufficient depth, so sometimes we weren't totally sure ourselves what we were offering people. This contrasts with last year, when the stretch goals were precisely defined. (But during development they became gold-plated -- a 1500 euro stretch goal should be two weeks of work, which sometimes became four or six weeks.)

    We did have a good story, though, which is the central part of any fundraiser. Without a good story that can be summarized in one sentence, you'll get nowhere. And text and vector have been painful for our users for years now, so that part was fine.

    We're also really well-oiled when it comes to preparation: Irina, me and Wolthera sat together for a couple of weekends to first select the goals, then figure out the reward levels and possible rewards, and then to write the story and other text. We have lists of people to approach, lists of things that need to be written in time to have them translated into Russian and Japanese -- that's all pretty well oiled.

Our list of rewards wasn't perfect either, so we had to do some in-campaign additions, and we made at least one mistake: we added a new 25 euro level when the existing 25 euro rewards had sold out. But the existing rewards re-used overstock from last year, and for the new level we had to have new goodies made. That means our cost for those rewards is higher than we thought. Not so high that those 25 euro pledges don't help towards development, but it's still a mistake.

    Our video was very good this year: about half of the plays were watched to the end, which is an amazing score!

    Kickstarter is becoming a tired formula

    Already after two days, people were saying on the various social media sites that we wouldn't make it. The impression with Kickstarter these days is that if you're not 100% funded in one or two days, you're a failure. Kickstarter has also become that site where you go for games, gadgets and gags.

    We also noticed less engagement: fewer messages and comments on the kickstarter site itself. That could have been a function of a less attractive campaign, of course.

    That Kickstarter still hasn't got a deal with Paypal is incredible. And Kickstarter's campaign tools are unbelievably primitive: from story editor to update editor (both share the same wysiwyg editor which is stupidly limited, and you can only edit updates for 30 minutes) to the survey tools, which don't allow copy and paste between reward levels or any free text except in the intro. Basically, Kickstarter isn't spending any money on its platform any more, and it shows.

    It is next to impossible to get news coverage for a fundraising campaign

    You'd think that "independent free software project funds full-time development through community, not commercial, support" would make a great story, especially when the funding is a success and the results are visible for everyone. You'd think that especially the free software oriented media would be interested in a story like this. But, with some exceptions, no.

    Last year, I was told by a journalist reporting on free and open source software that there are too many fundraising campaigns to cover. He didn't want to drown his readers in them, and it would be unethical to ignore some and cover others.

But are there so many fundraisers for free software? I don't know, since none get into the news. I know about a few, mostly in the graphics software category -- synfig, blender, Jehan's campaign for ZeMarmot, the campaign by the Software Freedom Conservancy, KDE's Randa campaign. But that's really just a handful.

    I think that the free and open source news media are doing their readers a disservice by not covering campaigns like ours; and they are doing the ecosystem a disservice. Healthy, independent projects that provide software in important categories, like Krita, are essential for free software to prosper.

    Exhaustion

    Without the release, we might not have made it. But doing a Kickstarter is exhausting: it's only a month, but feels like two or three. Doing a release and a Kickstarter is double exhausting. We did raise Krita's profile and userbase to a whole other level, though! (Which also translates into a flood of bug reports, and bugzilla basically has become unmanageable for us: we need more triagers and testers, badly!)

    Right now, I'd like to take a few days off, and Dmitry smartly is taking a few days off, but there's still so much on my backlog that it's not going to happen.

    I also had a day job for three days a week during the campaign, during which I wasn't available for social media work or promo, and I really felt that to be a problem. But I need that job to fund my own work on Krita...

    Referrers

    Kickstarter lets one know where the backers are coming from. Kickstarter itself is a source of backers: about 4500 euros came from Kickstarter itself. Next up is Reddit with 3000 euros, twitter with 1700, facebook 1400, krita.org 1000 and blendernation with 900. After that, the long tail starts. So, in the absence of news coverage, social media is really important and the Blender community is once again proven to be much bigger than most people in the free software community realize.

    Conclusion

    The campaign was a success, and the result pretty much the right size, I think. If we had double the result, we would have had to find another freelancer to work on Krita full-time. I'm not sure we're ready for that yet. We've also innovated this year, by deciding to offer artists in our communities commissions to create art for the rewards. That's something we'll be setting in motion soon.

    Another innovation is that we decided to produce an art book with work by Krita artists. Calls for submissions will go out soon! That book will also go into the shop, and it's kind of an exercise for the other thing we want to do this year: publish a proper Pepper and Carrot book.

    If sales from books will help fund development further, we might skip one year of Kickstarter-like fund raising, in the hope that a new platform will spring up that will offer a fresh way of doing fund raising.

    June 10, 2016

    Visual diffs and file merges with vimdiff

    I needed to merge some changes from a development file into the file on the real website, and discovered that the program I most often use for that, meld, is in one of its all too frequent periods where its developers break it in ways that make it unusable for a few months. (Some of this is related to GTK, which is a whole separate rant.)

    That led me to explore some other diff/merge alternatives. I've used tkdiff quite a bit for viewing diffs, but when I tried to use it to merge one file into another I found its merge just too hard to use. Likewise for emacs: it's a wonderful editor but I never did figure out how to get ediff to show diffs reliably, let alone merge from one file to another.

    But vimdiff looked a lot easier and had a lot more documentation available, and actually works pretty well.

    I normally run vim in an xterm window, but for a diff/merge tool, I want a very wide window which will show the diffs side by side. So I used gvimdiff instead of regular vimdiff: gvimdiff docs.dev/filename docs.production/filename

    Configuring gvimdiff to see diffs

    gvimdiff initially pops up a tiny little window, and it ignores Xdefaults. Of course you can resize it, but who wants to do that every time? You can control the initial size by setting the lines and columns variables in .vimrc. About 180 columns by 60 lines worked pretty well for my fonts on my monitor, showing two 80-column files side by side. But clearly I don't want to set that in .vimrc so that it runs every time I run vim; I only want that super-wide size when I'm running a side-by-side diff.

    You can control that by checking the &diff variable in .vimrc:

    if &diff
        set lines=58
        set columns=180
    endif
    

    If you do decide to resize the window, you'll notice that the separator between the two files doesn't stay in the center: it gives you lots of space for the right file and hardly any for the left. Inside that same &diff clause, this somewhat arcane incantation tells vim to keep the separator centered:

        autocmd VimResized * exec "normal \<C-w>="
    

I also found that the colors, in the vim scheme I was using, made it impossible to see highlighted text. You can go in and edit the color scheme and make your own, of course, but an easy quick fix is to set all highlighting to one color, like yellow, inside the if &diff section:

        highlight DiffAdd    cterm=bold gui=none guibg=Yellow
        highlight DiffDelete cterm=bold gui=none guibg=Yellow
        highlight DiffChange cterm=bold gui=none guibg=Yellow
        highlight DiffText   cterm=bold gui=none guibg=Yellow
    

    Merging changes

    Okay, once you can view the differences between the two files, how do you merge from one to the other? Most online sources are quite vague on that, but it's actually fairly easy:

    ]c jumps to the next difference
    [c jumps to the previous difference
dp makes them both look like the left side (apparently stands for diff put)
do makes them both look like the right side (apparently stands for diff obtain)

    The only difficult part is that it's not really undoable. u (the normal vim undo keystroke) works inconsistently after dp: the focus is generally in the left window, so u applies to that window, while dp modified the right window and the undo doesn't apply there. If you put this in your .vimrc

    nmap du :wincmd w<cr>:normal u<cr>:wincmd w<cr>
    
    then you can use du to undo changes in the right window, while u still undoes in the left window. So you still have to keep track of which direction your changes are going.

    Worse, neither undo nor this du command restores the highlighting showing there's a difference between the two files. So, really, undoing should be reserved for emergencies; if you try to rely on it much you'll end up being unsure what has and hasn't changed.

In the end, vimdiff probably works best for straightforward diffs, and it's probably best to get in the habit of always merging from right to left, using do. In other words, run vimdiff file-to-merge-to file-to-merge-from, and think about each change before doing it to make it less likely that you'll need to undo.

    And hope that whatever silly transient bug in meld drove you to use vimdiff gets fixed quickly.

    June 09, 2016

Display Color Profiling on Linux

    A work in progress

    This article by Pascal de Bruijn was originally published on his site and is reproduced here with permission.  —Pat


    Attention: This article is a work in progress, based on my own practical experience up until the time of writing, so you may want to check back periodically to see if it has been updated.

    This article outlines how you can calibrate and profile your display on Linux, assuming you have the right equipment (either a colorimeter like for example the i1 Display Pro or a spectrophotometer like for example the ColorMunki Photo). For a general overview of what color management is and details about some of its parlance you may want to read this before continuing.

    A Fresh Start

    First you may want to check if any kind of color management is already active on your machine, if you see the following then you’re fine:

    $ xprop -display :0.0 -len 14 -root _ICC_PROFILE
    _ICC_PROFILE: no such atom on any window.
    

    However if you see something like this, then there is already another color management system active:

    $ xprop -display :0.0 -len 14 -root _ICC_PROFILE
    _ICC_PROFILE(CARDINAL) = 0, 0, 72, 212, 108, 99, 109, 115, 2, 32, 0, 0, 109, 110
    

    If this is the case you need to figure out what and why… For GNOME/Unity based desktops this is fairly typical, since they extract a simple profile from the display hardware itself via EDID and use that by default. I’m guessing KDE users may want to look into this before proceeding. I can’t give much advice about other desktop environments though, as I’m not particularly familiar with them. That said, I tested most of the examples in this article with XFCE 4.10 on Xubuntu 14.04 “Trusty”.

    Display Types

For purposes of our discussion, modern flat panel displays comprise two major components: the backlight and the panel itself. There are various types of backlights: White LED (most common nowadays), CCFL (most common a few years ago), RGB LED and Wide Gamut CCFL, the latter two of which you’d typically find on higher end displays. The backlight primarily defines a display’s gamut and maximum brightness. The panel on the other hand primarily defines the maximum contrast and acceptable viewing angles. The most common types are variants of IPS (usually good contrast and viewing angles) and TN (typically mediocre contrast and poor viewing angles).

    Display Setup

There are two main cases: laptop displays, which usually allow for little configuration, and regular desktop displays. For regular displays there are a few steps to prepare your display to be profiled. First you need to reset your display to its factory defaults. We leave the contrast at its default value. If your display has a feature called dynamic contrast you need to disable it; this is critical, and if you’re unlucky enough to have a display for which this cannot be disabled, then there is no use in proceeding any further. Then we set the color temperature setting to custom and set the R/G/B values to equal values (often 100/100/100 or 255/255/255). As for the brightness, set it to a level which is comfortable for prolonged viewing; typically this means reducing the brightness from its default setting, often to somewhere around 25–50 on a 0–100 scale. Laptops are a different story: often you’ll be fighting different lighting conditions, so you may want to consider profiling your laptop at its full brightness. We’ll get back to the brightness setting later on.

    Before continuing any further, let the display settle for at least half an hour (as its color rendition may change while the backlight is warming up) and make sure the display doesn’t go into power saving mode during this time.

    Another point worth considering is cleaning the display before starting the calibration and profiling process, do keep in mind that displays often have relatively fragile coatings, which may be deteriorated by traditional cleaning products, or easily scratched using regular cleaning cloths. There are specialist products available for safely cleaning computer displays.

    You may also want to consider dimming the ambient lighting while running the calibration and profiling procedure to prevent (potential) glare from being an issue.

    Software

If you’re in a GNOME or Unity environment it’s highly recommended to use GNOME Color Manager (with colord and argyll). If you have recent versions (3.8.3, 1.0.5, 1.6.2 respectively), you can profile and set up your display completely graphically via the Color applet in System Settings. It’s fully wizard driven and couldn’t be much easier in most cases. This is what I personally use and recommend. The rest of this article focuses on the case where you are not using it.

    Xubuntu users in particular can get experimental packages for the latest argyll and optionally xiccd from my xiccd-testing PPAs. If you’re using a different distribution you’ll need to source help from its respective community.

    Report On The Uncalibrated Display

To get an idea of the display’s uncalibrated capabilities we use argyll’s dispcal:

    $ dispcal -H -y l -R
    Uncalibrated response:
    Black level = 0.4179 cd/m^2
    50%   level = 42.93 cd/m^2
    White level = 189.08 cd/m^2
    Aprox. gamma = 2.14
    Contrast ratio = 452:1
    White     Visual Daylight Temperature = 7465K, DE 2K to locus =  3.2
    

Here we see the display has a fairly high uncalibrated native whitepoint at almost 7500K, which means the display is bluer than it should be. When we’re done you’ll notice the display becoming more yellow. If your display’s uncalibrated native whitepoint is below 6500K you’ll notice it becoming more blue when loading the profile.

Another point to note is the fairly high white level (brightness) of almost 190 cd/m2; it’s fairly typical to target 120 cd/m2 for the final calibration, keeping in mind that we’ll lose 10 cd/m2 or so because of the calibration itself. So if your display reports a brightness significantly higher than 130 cd/m2 you may want to consider turning down the brightness another notch.

    Calibrating And Profiling Your Display

First we’ll use argyll’s dispcal to measure and adjust (calibrate) the display, compensating for the display’s whitepoint (targeting 6500K) and gamma (targeting the industry standard 2.2, more info on gamma here):

    $ dispcal -v -m -H -y l -q l -t 6500 -g 2.2 asus_eee_pc_1215p
    

    Next we’ll use argyll’s targen to generate measurement patches to determine its gamut:

    $ targen -v -d 3 -G -f 128 asus_eee_pc_1215p
    

Then we’ll use argyll’s dispread to apply the calibration file generated by dispcal, and measure (profile) the display’s gamut using the patches generated by targen:

    $ dispread -v -N -H -y l -k asus_eee_pc_1215p.cal asus_eee_pc_1215p
    

    Finally we’ll use argyll’s colprof to generate a standardized ICC (version 2) color profile:

    $ colprof -v -D "Asus Eee PC 1215P" -C "Copyright 2013 Pascal de Bruijn" \
              -q m -a G -n c asus_eee_pc_1215p
    Profile check complete, peak err = 9.771535, avg err = 3.383640, RMS = 4.094142
    

    The parameters used to generate the ICC color profile are fairly conservative and should be fairly robust. They will likely provide good results for most use-cases. If you’re after better accuracy you may want to try replacing -a G with -a S or even -a s, but I very strongly recommend starting out using -a G.

    You can inspect the contents of a standardized ICC (version 2 only) color profile using argyll’s iccdump:

    $ iccdump -v 3 asus_eee_pc_1215p.icc
    

    To try the color profile we just generated we can quickly load it using argyll’s dispwin:

    $ dispwin -I asus_eee_pc_1215p.icc
    

    Now you’ll likely see a color shift toward the yellow side. For some possibly aged displays you may notice it shifting toward the blue side.

    If you’ve used a colorimeter (as opposed to a spectrophotometer) to profile your display and if you feel the profile might be off, you may want to consider reading this and this.

    Report On The Calibrated Display

    Next we can use argyll’s dispcal again to check our newly calibrated display:

    $ dispcal -H -y l -r
    Current calibration response:
    Black level = 0.3432 cd/m^2
    50%   level = 40.44 cd/m^2
    White level = 179.63 cd/m^2
    Aprox. gamma = 2.15
    Contrast ratio = 523:1
    White     Visual Daylight Temperature = 6420K, DE 2K to locus =  1.9
    

Here we see the calibrated display’s whitepoint sits nicely around 6500K, as it should.

    Loading The Profile In Your User Session

If your desktop environment is XDG autostart compliant, you may want to consider creating a .desktop file which will load the ICC color profile at session login for all users:

    $ cat /etc/xdg/autostart/dispwin.desktop
    [Desktop Entry]
    Encoding=UTF-8
    Name=Argyll dispwin load color profile
    Exec=dispwin -I /usr/share/color/icc/asus_eee_pc_1215p.icc
    Terminal=false
    Type=Application
    Categories=
    

    Alternatively you could use colord and xiccd for a more sophisticated setup. If you do make sure you have recent versions of both, particularly for xiccd as it’s still a fairly young project.

First we’ll need to start xiccd (in the background), which detects your connected displays and adds them to colord’s device inventory:

    $ nohup xiccd &
    

    Then we can query colord for its list of available devices:

    $ colormgr get-devices
    

    Next we need to query colord for its list of available profiles (or alternatively search by a profile’s full filename):

    $ colormgr get-profiles
    $ colormgr find-profile-by-filename /usr/share/color/icc/asus_eee_pc_1215p.icc
    

    Next we’ll need to assign our profile’s object path to our display’s object path:

    $ colormgr device-add-profile \
       /org/freedesktop/ColorManager/devices/xrandr_HSD121PHW1_70842_pmjdebruijn_1000 \
       /org/freedesktop/ColorManager/profiles/icc_e7fc40cb41ddd25c8d79f1c8d453ec3f
    

You should notice your display’s color shift within a second or so (xiccd applies it asynchronously), assuming you haven’t already applied it via dispwin earlier (in which case you’ll notice no change).

    If you suspect xiccd isn’t properly working, you may be able to debug the issue by stopping all xiccd background processes, and starting it in debug mode in the foreground:

    $ killall xiccd
    $ G_MESSAGES_DEBUG=all xiccd
    

Also in xiccd’s case you’ll need to create a .desktop file to load xiccd at session login for all users:

    $ cat /etc/xdg/autostart/xiccd.desktop
    [Desktop Entry]
    Encoding=UTF-8
    Name=xiccd
    GenericName=X11 ICC Daemon
    Comment=Applies color management profiles to your session
    Exec=xiccd
    Terminal=false
    Type=Application
    Categories=
    OnlyShowIn=XFCE;
    

You’ll note that xiccd does not need any parameters, since it will query colord’s database for which profile to load.

If your desktop environment is not XDG autostart compliant, you’ll need to ask its community how to start custom commands (dispwin or xiccd respectively) at session login.

    Dual Screen Caveats

    Currently having a dual screen color managed setup is complicated at best. Most programs use the _ICC_PROFILE atom to get the system display profile, and there’s only one such atom. To resolve this issue new atoms were defined to support multiple displays, but not all applications actually honor them. So with a dual screen setup there is always a risk of applications applying the profile for your first display to your second display or vice versa.

    So practically speaking, if you need a reliable color managed setup, you should probably avoid dual screen setups altogether.

    That said, most of argyll’s commands support a -d parameter for selecting which display to work with during calibration and profiling, but I have no personal experience with them whatsoever, since I purposefully don’t have a dual screen setup.

    Application Support Caveats

As my other article explains, display color profiles consist of two parts: one part (whitepoint & gamma correction) is applied via X11 and thus benefits all applications. There is however a second part (gamut correction) that needs to be applied by the application. And application support for both input and display color management varies wildly. Many consumer grade applications have no color management awareness whatsoever.

    Firefox can do color management and it’s half-enabled by default, read this to properly configure Firefox.

    GIMP for example has display color management disabled by default, you need to enable it via its preferences.

    Eye of GNOME has display color management enabled by default, but it has nasty corner case behaviors, for example when a file has no metadata no color management is done at all (instead of assuming sRGB input). Some of these issues seem to have been resolved on Ubuntu Trusty (LP #272584).

    Darktable has display color management enabled by default and is one of the few applications which directly support colord and the display specific atoms as well as the generic _ICC_PROFILE atom as fallback. There are however a few caveats for darktable as well, documented here.


    This article by Pascal de Bruijn was originally published on his site and is reproduced here with permission.

    And done!

Of course, we should have posted this yesterday. Or earlier today! But when around midnight we opened the Champagne (only a half-bottle, and it was off, too! Mumm, booh!), we all felt we were at the end of a long, long month! We, that’s Boudewijn, Irina and Wolthera, gathered in Deventer for the occasion (and also for the Google Summer of Code). Over the past month, hundreds of bugs have been fixed, we’ve gone through an entire release cycle, we managed another successful Kickstarter campaign! Exhaustion had set in, and we went for a walk around scenic Deventer to look at cows, sheep, dogs, swans, piglets, ducklings, budgerigars and chickens, and lots of fresh early summer foliage.

    But not all was laziness! Yesterday, all Kickstarter backers got their surveys, and over half have already returned them! Today, the people who backed us through paypal got their surveys, and we got a fair return rate as well! Currently, the score looks like this:

    • 24. Python Scripting Plugin: 414
    • 8. SVG Import/Export: 373

    With runners up…

• 21. Flipbook/Sketchbook: 176
• 2. Composition Guides: 167
• 1. Transform from pivot point: 152
• 7. Vector Layers as Mask: 132
• 13. Arrange Layers: 129

The special goals selected by the 1500 euro backers are Improving the Reference Docker and — do what you developers think is most fun! That’s not an official stretch goal, but we’ve got some ideas…

    Sources for Openly-Licensed Content

This morning I got an email from my colleague Tyler Golden, who was seeking advice on good places to get openly-licensed content, so I put together a list. It seems the list would be generally useful (especially for my new design interns, who will be blogging on Planet Fedora soon 🙂) so here you are, a blog post. 🙂

There are a lot more content types I could go through but I’m going to stick to icons/graphics and photography for now. If you know of any other good sources in these categories (or desperately need another category of content covered), please let me know and I’ll update this list.

    Also of note – please note any licenses for materials you’re evaluating for use, and if they require attribution please give it! It doesn’t have to be a major deal. (I covered this quite a bit in a workshop I’ve given a few times on Gimp & Inkscape so you might want to check out that preso if you need more info on that.)

    Icons / Graphics

    • The Noun Project

      noun-project

Ryan Lerch clued me in to this one. All of the graphics are Creative Commons (you have to provide attribution) or you can pay a small fee if you don’t want to have to attribute. There’s a lot of nice vector-based icons here.

      thenounproject.com

    • Open Clip Art

      openclipart

      Everything is CC0 – no attribution needed – and all vector sources. Quality varies widely but there are some real gems in there. (My offerings are here, but my favorite collection is by the Fedora Design team’s gnokii. Ryan Lerch has a good set too!) There’s a plugin that comes with Inkscape that lets you search open clip art and pull in the artwork directly without having to go to the website.

      openclipart.org

    • Xaviju’s Inkscape Open Symbols

      font-awesome

      I love these because you can browse the graphics right in Inkscape’s UI and drag over whichever ones you want into your document. There’s a lot of different libraries there with different licenses but the github page gives links to all the upstreams. I’m a big fan of Font Awesome, which is one of the libraries here, and we’ve been using it in Fedora’s webapps as of late; except for the brand icons they are all licensed under the Open Font License.

      github.com/Xaviju/inkscape-open-symbols

    • Stamen

      stamen

If you need a map, this app is awesome. It uses openly-licensed OpenStreetMap data and styles it – there are watercolor and lithographic styles, just to name a couple. If you ever need a map graphic, definitely check this out.

      maps.stamen.com

    Photos

    • Pixabay

      pixabay

      This site has photography, graphics, and videos all under a CC0 license (meaning: no attribution required.) For me, this site is a relative newcomer but has some pretty high-quality works.

      pixabay.com

    • Flickr

      flickr

      Flickr lets you search by license. I’ve gotten a lot of great content on there under CC BY / CC BY SA (both of which allow commercial use and modification.) (More on Flickr below.)

      flickr.com/search/?license=4%2C5%2C9%2C10

    • Wikimedia Commons

      wikicommons

      You have to be a bit careful on here because some stuff isn’t actually freely licensed but most of it is. (I have seen trademarked stuff get uploaded on here.) Just evaluate content on here with a critical eye!

      commons.wikimedia.org

    • Miscellaneous Government Websites

      loc

      Photography released by the US government is required to be public domain in many cases. I don’t know about other countries as much, but I’m sure it’s the case for some of them (Europeana is run by a consortium of various EU countries for example.) These agencies are also starting to publish to Flickr which is great. NASA publishes a lot of photos that are public domain; I’ve also gone through the Library of Congress website to get images.

    • CC Search

      ccsearch

      This is Creative Commons’ search engine; it lets you search a bunch of places that have openly-licensed content at once.

      search.creativecommons.org/

    • CompFight

      compfight

      This is an interface on top of Flickr. It lets you search for images and dictate which licenses you’re interested in. Using it can be faster than searching Flickr.

      compfight.com

    But wait, there’s more!

    Naheem linked me to this aptly-named awesome resource:

    https://github.com/neutraltone/awesome-stock-resources

    Even more goodness there!!

    June 07, 2016

    Recommended Reading: Trajectory Book 1 and 2 by Robert Campbell

    Years ago I had the pleasure of meeting Deb Richardson and Rob Campbell, a couple who were both working at Mozilla at the time. They came to our Zap Your PRAM conference in Dalvay back in 2008.

    Rob was working on the Firefox dev tools, which had begun to lag behind Chrome, and have since become great again.

    Trajectory by Rob Campbell

    Then last year, I saw that Rob was self-publishing a science-fiction novel. This interested me as several of the books I’ve enjoyed recently are in the genre (Seveneves by Neal Stephenson, Kim Stanley Robinson’s Aurora, and my all-time favourite, the Mars trilogy). However, I was concerned. What if someone you know invites you to their comedy night and just isn’t funny? Fortunately, this wasn’t the case with Rob.

    Rob’s book, Trajectory Book 1 was great. Easy to read, interesting, and nerdy in the right ways. My only complaint was that it ended abruptly. The solution to this, obviously, is Book 2, which came out yesterday.

    If you have any interest in science fiction, I can gladly recommend Rob Campbell’s Trajectory Book 1 (and I’m looking forward to starting Trajectory Book 2).

    June 06, 2016

    David Revoy livestreaming on Twitch

    Ten hours before the end of the Kickstarter (Tuesday June 7, from 21:00 to 23:00 CEST, UTC+2), David Revoy will draw in public! You can follow it on the official Krita channel on twitch.tv:  https://www.twitch.tv/artwithkrita

    Read more about it on David’s website

    Interview with Sara Tepes

    Tranquil

    Could you tell us something about yourself?

My name’s Sara Tepes. I’m 17 years old; I was born in Romania but grew up in the U.S., and I live super close to Washington D.C. I love roses, rabbits, tea, and historical movies.

    Do you paint professionally, as a hobby artist, or both?

    I work on commissions and various projects and get paid for my work, so I’m sort of a freelance part-time illustrator, but I also draw and paint as a hobby. I hope to major in graphic design and be a professional full time freelancer.

    What genre(s) do you work in?

    Traditional drawing and both digital and traditional painting.
    Garden

    Whose work inspires you the most—who are your role models as an artist?

    I have been inspired by Tony DiTerlizzi ever since I was a tiny kid who read the Spiderwick Chronicles. He was my art god for the longest time, and I still love his work; his technique is brilliant and the creatures he creates are just alive on the paper. Lois Van Baarle, aka Loish, (http://loish.net/) has been a huge role model ever since I first discovered her digital paintings in 2012. Her paintings have such wonderful colors, details, expressions and body language!

    Traditional painters include John Singer Sargent, John William Waterhouse, Claude Monet, Gustav Klimt, and Alphonse Mucha.

    How and when did you get to try digital painting for the first time?

I used to read a bunch of ”How to Draw Manga” books which discussed basic digital art with cell shading. I started digital painting in 2011 when I was 12 with this old, crappy photo effect program. It basically had an airbrush feature and a select tool and paint bucket. It was super simplistic and wasn’t even meant for digital painting, but I really wanted to digitally color in the manga drawings I was doing at the time.

    What makes you choose digital over traditional painting?

    Well, I work in both mediums, but I generally prefer digital painting because it’s super reliable. I don’t have to worry about my paint palette drying before I can use it, working in terrible lighting and getting all the colors skewed up, or having a really long drying time on the canvas. There’s no prep or cleanup to it.

    How did you find out about Krita?

    When I got a new computer, I was looking for free digital painting software. I was 13 and didn’t have $900 for Adobe Photoshop, but I didn’t like pirating the program. I found MyPaint, Gimp, and Krita and installed and used all of them.

    What was your first impression?

    I was curious about the program but I didn’t like the brush blending. At the time, all the “painterly” brushes had color blending and it annoyed me a lot. Thank God that’s not the case with the program right now!

    What do you love about Krita?

    It doesn’t have a huge learning curve like other programs. It’s straightforward, super professional, has a bunch of great features and brushes, and autosaves every minute! It’s pretty fantastic!

    What do you think needs improvement in Krita? Is there anything that really annoys you?

The ONLY thing that I don’t like about Krita is that it doesn’t have a freehand warping tool the way Photoshop has Liquify or Gimp has iWarp. That would be really helpful, honestly.

    If you had to pick one favorite of all your work done in Krita so far, what would it be, and why?

    Probably “Red Dress”. I love the backlighting and the vibrant red highlights. I really have to focus on how colors are affected by light, and I think I did a pretty good job with this one.
    Red Dress

    What techniques and brushes did you use in it?

    Just the color tool and the bristles_hairy brush.

    Where can people see more of your work?

    Instagram: https://instagram.com/sarucatepes/
    Twitter: https://twitter.com/sarucatepes
    Tumblr: http://themerbunny.tumblr.com
    Pinterest: https://www.pinterest.com/sarahandaric/
    DeviantArt: http://sarucatepes.deviantart.com
    Google+: https://plus.google.com/+SaraTepes

    June 05, 2016

    Digital diaphragm for optical lenses

In photography most optical lenses use mechanical diaphragms for aperture control. These are traditionally manufactured from metal blades and work quite well. However, metal blades expose some disadvantages:

• mechanical parts will sooner or later fail
• the cheaper forms give strong diffraction spikes
• manufacturers need more metal blades for a round iris, which is expensive
• a metal blade with its sharp edges gives artefacts, which are visible in out-of-focus regions
• on the plus side, contrast is very high thanks to the opaque metal

In order to obtain a better bokeh, some lenses are equipped with apodization filters. Those filters work mostly at fully open aperture and are very specialised and thus relatively expensive.

A digital aperture built as a transparent display with enough spatial resolution can not only improve the shape of the diaphragm. It could also act as an apodisation filter, if it supports enough gray levels. And it can change its form programmatically.

    Two possible digital diaphragm forms:
Circles
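As a toy illustration of what a programmable pattern could look like (purely my own sketch, not from this article), a monochrome transmission mask could be computed per exposure, e.g. a hard-edged circle or a Gaussian apodisation profile:

    import numpy as np

    def aperture_mask(size, radius, profile="hard", sigma=0.5):
        # Toy transmission mask for a display-based diaphragm.
        # size: mask resolution in pixels; radius: aperture radius as a
        # fraction of the half-width. Values are in [0, 1], 1 = transparent.
        y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
        r = np.hypot(x, y) / radius
        if profile == "hard":
            return (r <= 1.0).astype(float)
        if profile == "gaussian":
            return np.where(r <= 1.0, np.exp(-(r ** 2) / (2 * sigma ** 2)), 0.0)
        raise ValueError("unknown profile")

Switching the diaphragm's form would then just mean loading a different mask onto the aperture display.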

• leverages existing display technology
• better aperture shape for reduced artefacts
• apodisation filter on demand for best bokeh or faster light
• programmable or at least updateable aperture pattern (sharp/gaussian/linear/…)
• no metal blades or other mechanical parts to fail
• over the years, gets cheaper than its mechanical counterpart
• reduces the number of glass-to-air surfaces in optical lens design
• the aperture can be integrated into lens groups
• display transparency is increasing quickly and for OLED stands at 45% as of 2016, which at the moment means just one f-stop
• mobile demands high display resolutions anyway

The digital aperture can easily be manufactured as a monochrome display and be placed traditionally between two optical lens groups, where the diaphragm is located today. It is even possible to optically integrate the aperture into one lens group, without the additional glass-to-air surfaces that moving blades require. Once the optical quality of the digital filter display is good enough, a digital diaphragm can be even cheaper than a high quality mechanical counterpart.

    Design the Kickstarter T-shirt!

    The Kickstarter has been funded so we’ll be needing T-shirts! Here’s your chance to earn fame by designing the one we’ll send to our backers: Special June drawing challenge!

    The topic is: FLOW — interpret it any way you like. If your design wins the poll, it will be printed on all the Kickstarter backer shirts and we’ll send you one, too.

    The contest is open until June 24, 12:00 UTC. That’s almost three weeks!

    Summer vacations – not the farmer’s fault!

    Episode 1 in a series “Things that are the way they are because of constraints that no longer apply” (or: why we don’t change processes we have invested in that don’t make sense any more)

    I posted a brief description of the Five Monkey experiment a few days ago, as an introduction to a series someone suggested to me as I was telling stories of how certain things came about. One of the stories was about school Summer vacation. Many educators these days feel that school holidays are too long, and that kids lose knowledge due to atrophy during the Summer months – the phenomenon even has a name. And yet attempts to restructure the school year are strongly resisted, because of the amount of investment we have as a society in the school rhythms. But, why do US schools have 10-12 weeks of Summer vacation at all?

    The story I had heard is that the Summer holiday is as long as it is because, at the origins of the modern education system, in a more agrarian society, kids were needed on the farm during the harvest and could not attend school. I do like to be accurate when talking about history, and so I went reading, and it turns out that this explanation is mostly a myth – at least in the US. And, as a farmer’s kid, that mostly makes sense to me. The harvest is mostly from August through to the beginning of October, so starting school in September, one of the busiest farming months, does not make a ton of sense.

    But there is a grain of truth to it – in the US in the 1800s, there were typically two different school rhythms, depending on whether you lived in town or in the country. In town, schools were open all year round, but many children did not go all of the time. In the country, schools were mainly in session during two periods – Winter and Summer. Spring, when crops are planted, and Autumn, when they are harvested, were the busy months, and schools were closed. The advent of compulsory schooling brought the need to standardise the school year, and so vacations were introduced in the cities, and restructured in the country, to what we see today. This was essentially a compromise, and the long Summer vacation was driven, as you might expect, by the growing middle class’s desire to take Summer holidays with their children, not the farming family’s desire to exploit child labour. Summer was also the hardest period of the year for children in cities, with no air conditioning to keep school classrooms cool during the hottest months of the year.

    So, while there is a grain of truth (holidays were scheduled around the harvest initially), the main driver for long Summer holidays is the same as today – parents want holidays too. The absence of air conditioning in schools would have been a distant second.

    This article is US centric, but I have also seen this subject debated in France, where the tourism industry has strongly opposed changes to the school year structure, and in Ireland, where we had 8-9 weeks vacation in primary school. So – not off to a very good start, then!

    June 04, 2016

    Walking your Goat at the Summer Concert

    I love this place. We just got back from this week's free Friday concert at Ashley Pond. Not a great band this time (the previous two were both excellent). But that's okay -- it's still fun to sit on the grass on a summer evening and watch the swallows wheeling over the pond and the old folks dancing up near the stage and the little kids and dogs dashing pell-mell through the crowd, while Dave, dredging up his rock-star past, explains why this band's sound is so muddy (too many stacked effects pedals).

    And then on the way out, I'm watching appreciatively as the teen group, who were earlier walking a slack line strung between two trees, has now switched to juggling clubs. (I know old people are supposed to complain about "kids today", but honestly, the kids here seem smart and fit and into all kinds of cool activities.) One of the jugglers has just thrown three clubs and a ball, and is mostly keeping them all in the air, when I hear a bleat to my right -- it's a girl walking by with a goat on a leash.

    Just another ordinary Friday evening in Los Alamos.

    June 03, 2016

    Anatomy of a bug fix

    Updated builds with the fix are here: https://www.kickstarter.com/projects/krita/krita-2016-lets-make-text-and-vector-art-awesome/posts/1594853!

    People sometimes assume that free software developers are only interested in adding cool new features, getting their few pixels of screenspace fame, and don’t care about fixing bugs. That’s not true – otherwise we wouldn’t have fixed about a thousand bugs in the past year (though it would be better if we hadn’t created the bugs in the first place). But sometimes bug fixing is just fun: sherlocking through the code, trying to come up with a mental model of what might be going wrong, hacking the code, discovering that you were right. Heady stuff, everyone should try it some time! Just head over to bugzilla and pick yourself a crash (crash bugs are among the easiest to fix).

    But let’s take a look at a particularly nasty bug, one that we couldn’t fix for ages. Ever since Krita 2.9.6, we have received crash reports about Krita crashing when people were using drawing tablets. Not just any drawing tablets, but obscure tablets with names like Trust, Peritab, Adesso, Waltop, Aiptek, Genius — and others. Not the tablets that we do support because the companies have donated test hardware to Krita, like Wacom, Yiynova and Huion.

    Also, not tablets that are readily available: most of these brands only produce hardware for a short time, flog it to unsuspecting punters and disappear. I.e., we couldn’t just go to the local computer shop and get one, or find one online and have it delivered. And since all these tablets have one thing in common, namely their cheapness, the users who bought them are in all likelihood not all that flush, otherwise they would have bought a better tablet. So they couldn’t afford to donate their tablet to the project.

    A hardware related bug without hardware to test with, that’s nearly impossible to fix. We had four “facts” to start with:

    • The bug started appearing after Krita 2.9.6 — unfortunately, that was when we rewrote a lot of the tablet support to allow Krita to work with tablets like the Surface Pro, and it was impossible to pinpoint which change was responsible for the crash.
    • All these tablets show the same suspicious values when we were querying them for dimensions
    • All these crashes happened after that query for the tablet dimensions
    • All crashes happen on Windows only

    Now, on Windows, you talk to tablets through something called the “Wintab” API. The tablet manufacturer, or more likely, the manufacturer of the chip that the tablet manufacturer uses, writes an implementation of this API in the form of a Wintab driver.

    Wintab is ancient: it started out in the 16-bit Windows 3.0 days. It’s gnarly, it’s illogical, it’s hoary. You can only have one wintab driver dll on your system, which means that you cannot, like on Linux, plug in a Huion and test, plug in a Wacom and test, plug in a Yiynova and test — you need to install and uninstall the driver every time.

    Anyway, last week we found a second-hand Trust tablet for sale. Since we’ve had at least six reports of crashes with just that particular brand, we got it. We installed a fresh Windows 10, installed the driver Trust fortunately still provides despite having discontinued its tablets, installed Krita, started Krita, brought pen to tablet and… Nothing happened. No crash, and Krita painted a shoddy, shaky, pressure-sensitive line.

    Dash it, 30 euros down the drain.

    Next, we got an old Genius tablet and installed Windows 7. And bingo! A crash, and the same suspicious values in the tablet log. Now we’re talking! Unfortunately, the crash happened right inside the “Genius” wintab driver. Either we’re using the Wintab API wrong, or Genius implemented it wrong, but we cannot see the code. This is what Dmitry was looking at:

    Gibberish…

    But it gave the hint we needed. It is a bug in the Wintab driver, and we are guessing that since all these drivers give us the same weird context information, they all share the same codebase, come from the same manufacturer in fact, and have the same bug.

    It turned out that when we added support for the Surface Pro 3, which has an N-Trig pen, we needed a couple of workarounds for its weirder features. We wrote code that would query the wintab dll for the name of the tablet, and if that was an N-Trig, we set the workaround flag:

    UINT nameLength = m_winTab32DLL.wTInfo(WTI_DEVICES, DVC_NAME, 0);
    TCHAR* dvcName = new TCHAR[nameLength + 1];
    UINT returnLength = m_winTab32DLL.wTInfo(WTI_DEVICES, DVC_NAME, dvcName);
    Q_ASSERT(nameLength == returnLength);
    QString qDvcName = QString::fromWCharArray((const wchar_t*)dvcName);
    // Name changed between older and newer Surface Pro 3 drivers
    if (qDvcName == QString::fromLatin1("N-trig DuoSense device") ||
                qDvcName == QString::fromLatin1("Microsoft device")) {
        isSurfacePro3 = true;
    }
    delete[] dvcName;

    Now follow me closely: the first line gets some info (wTInfo) from the wintab driver. It’s a call with three parameters: the first says we want info about devices, the second says we want a name, and the third one is 0. That is, zero. Null. The second call is exactly the same, but passes something called dvcName, which is a pointer to a bit of memory where the wintab driver will write the name of the device. It’s a number, significantly bigger than 0. The Wintab API says that if you pass 0 (null) as the third parameter, the driver should return the length of the data it would write if you passed it a real buffer. Follow me? If you ask for the name with 0 for the buffer, it tells you the length; if you ask for the name with a buffer of the right length, it gives you the name.

    See for yourself: http://www.wacomeng.com/windows/docs/Wintab_v140.htm#_Toc275759816

    You have to go through this hoop to set apart a chunk of memory big enough for Wintab to copy the tablet name in. Too short, and you get a crash: that’s what happens when you write out of bounds. Too long, and you waste space, and besides, how can you know how long a tablet name could be?

    Okay, there’s one other way to crash, other than writing too much stuff into too small a chunk of memory. And that’s trying to write to Very Special Memory Address 0. That’s zero, the first location in the memory of your computer. In fact, writing to location 0 (zero) is so extremely forbidden that programmers use it as a flag meaning “don’t write here”. A competent programmer will always check for 0 (zero) before writing to memory.

    If you’re still here, I’m sure you’re getting suspicious now.

    Yes, you’re right. The people who wrote the driver for the tablets that Trust, Genius, Adesso, Peritab, Aiptek and all their ilk repackaged, rebranded and resold were not competent. They did not check for zero; they blithely started writing the name of the tablet into the address provided.

    And poof! Krita crashes, we get the bug reports — because after all, it must be Krita’s fault? The tablet works with Photoshop! Whereas it’s entirely likely that the people who cobbled together the driver didn’t even read the Wintab spec, but just fiddled with their driver until Photoshop more or less worked, before they called it a day and went to drown their sorrows in baijiu.

    Enfin, we have now “fixed” the bug — we provide 1024 characters of space for the driver to write the name of the tablet in, and hope for the best…
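
    For the curious, here is a minimal sketch of what that defensive pattern looks like — an illustration in the spirit of the snippet above, not the literal Krita patch (the fallback size and the zero-fill are my own additions for the example):

    // Sketch only: query the name length, but never trust the driver.
    const UINT maxNameLength = 1024;                 // generous fallback buffer
    UINT nameLength = m_winTab32DLL.wTInfo(WTI_DEVICES, DVC_NAME, 0);
    if (nameLength == 0 || nameLength > maxNameLength) {
        // Broken driver: it either cannot report the length or reports
        // something absurd. Reserve the big buffer and hope for the best.
        nameLength = maxNameLength;
    }
    TCHAR* dvcName = new TCHAR[nameLength + 1];
    memset(dvcName, 0, (nameLength + 1) * sizeof(TCHAR));
    m_winTab32DLL.wTInfo(WTI_DEVICES, DVC_NAME, dvcName);
    QString qDvcName = QString::fromWCharArray((const wchar_t*)dvcName);
    delete[] dvcName;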

    Note that this doesn’t mean that your Trust, Genius or whatever tablet will work well and give satisfaction: these are still seriously badly put together products. After fixing the bug, we tried drawing with the Genius tablet and got weird, shaky lines. Then we tested with Photoshop, and after a while, saw the same weird shaky lines there. It almost looks as if the tablet driver developers didn’t really care about their product and just returned some randomly rounded numbers for the pen position.

     

    The five monkeys thought experiment

    The (probably apocryphal) five monkeys experiment goes like this:

    Five monkeys are placed in a cage. There is a lever, which, if pulled, delivers food. The monkeys soon learn how it works, and regularly pull the lever.

    One day, when the lever is pulled, food is still delivered to the puller, but all the monkeys in the cage get an ice-cold shower for a period of time. The monkeys quickly learn the correlation between the lever and the cold shower, and stop any monkey from getting to the lever.

    After a while, one of the monkeys is removed, and replaced by a new monkey. Out of curiosity, the new monkey tried to pull the lever, and was beaten into submission by the other monkeys. Progressively, more of the original five monkeys are removed, and replaced with new monkeys, and they all learn the social rule – if you try to pull the lever, the group will stop you.

    Eventually, all of the original monkeys are gone. At this point, you can turn off the shower, secure in the knowledge that none of the monkeys will pull the lever, without ever knowing what will happen if they do.

    A funny anecdote, right? A lesson for anyone who ever thinks “because that’s the way it has always been”.

    And yet, there are a significant number of things in modern society that are the way they are because at one point in time there was some constraint that applied, which no longer applies in the world of air travel and computers. I got thinking about this because of the electoral college and the constitutional delays between the November election and the January inauguration of a new president – a system that exists to get around the logistical constraints of having to travel long distances on horseback. But that is far from the only example.

    I hope to write a series, covering each of the examples I have found, and hopefully uncovering others along the way, and the electoral college will be one of them. First up, though, will be the Summer school vacation.

     

    June 01, 2016

    A little while back I had attempted to document a shoot with my friend and model, Mairi. In particular I wanted to capture a start-to-finish workflow for processing a portrait using free software. There are often many tutorials for individual portions of a retouching process but rarely do they get seen in the context of a full workflow.

    The results became a two-part post on my blog. For posterity (as well as for those who may have missed it the first time around) I am republishing the second part of the tutorial Postprocessing here.

    Though the post was originally published in 2013 the process it describes is still quite current (and mostly still my same personal workflow). This tutorial covers the retouching in post while the original article about setting up and conducting the shoot is still over on my personal blog.

    Mairi Portrait Final The finished result from the tutorial.
    by Pat David (cba).

    The tutorial may read a little long but the process is relatively quick once it’s been done a few times. Hopefully it proves to be helpful to others as a workflow to use or tweak for their own process!

    Coming Soon

    I am still working on getting some sample shots to demonstrate the previously mentioned noise free shadows idea using dual exposures. I just need to find some sample shots that will be instructive while still at least being something nice to look at…

    Also, another guest post is coming down the pipes from the creator of PhotoFlow, Andrea Ferrero! He’ll be talking about creating blended panorama images using Hugin and PhotoFlow. Judging by the results on his sample image, this will be a fun tutorial to look out for!

    May 31, 2016

    Krita 3.0 Released

    Today the Krita team releases Krita 3.0, the Animation Release. Wrapping up a year of work, this is a really big release: animation support integrated into Krita’s core, Instant Preview for better performance painting and drawing with big brushes on big canvases, ported to the latest version of the Qt platform and too many bigger and smaller new features and improvements to mention!

    krita-3.0

    Many of the new features were funded by the 2015 Kickstarter campaign. A big thank-you to all our backers! The remaining stretch goals will be released with Krita 3.1, later this year. And don’t forget that we’ve still got seven days in the current Kickstarter campaign. We’re nearly funded, so there should still be time to reach some stretch goals this year, too!

    The full list of improvements is too long for a release announcement. Please check out the extensive release notes we prepared!

    Note: Krita 3.0 loads and saves its configuration and resources in a different place than 2.9, so it’s possible to use both versions together without conflicts. Here is a tutorial on migrating resources.

    Downloads

    Windows

    On Windows, Krita supports Wacom, Huion and Yiynova tablets, as well as the Surface Pro series of tablets. Trust, Bosto, Genius, Peritab and similar tablets are not supported at this moment because we lack testing hardware that allows us to reproduce reported bugs.

    The portable zip file builds can be unzipped and run by double-clicking the krita link. If you want to use the installer builds, please uninstall Krita 2.9 first.

    For Windows users, Alvin Wong has created a shell extension that allows you to preview krita images in Windows Explorer. You can install it separately, but it is also included in the setup installers.

    If your virus scanner or other security software complains please verify the sha1 checksum noted below: if the checksum checks out, the files are safe.

    Krita on Windows is tested on Windows 7, Windows 8 and Windows 10.

    Linux

    For Linux, we offer AppImages that should run on any reasonably recent Linux distribution. For Ubuntu 12.04 and CentOS 6.x you need the AppImage that is built without support for OpenMP. We are working on updating the Krita Lime repository: for now, you can use that to install the krita3-testing build. Helio Castro is packaging Krita for Red Hat/CentOS/Fedora.

    You can download the AppImage, make it executable and run it in place. No installation is needed. At this moment, we only have AppImages for 64-bit versions of Linux.

    You can also get Krita from Ubuntu’s App Store in snap format, thanks to Michael Hall’s help. Note that you cannot use the snap version of Krita with the NVidia proprietary driver, due to a limitation in Ubuntu, and that the snap has no translations yet.

    OSX

    Krita on OSX will be fully supported with version 3.1. Krita 3.0 for OSX is still missing Instant Preview and High Quality Canvas scaling. There are also some issues with rendering the image — these issues follow from Apple’s decision to drop support for the OpenGL 3.0 compatibility profile in their display drivers. We are working to reimplement these features using OpenGL 3.0 Core profile. For now, we recommend disabling OpenGL when using Krita on OSX for production work. Krita for OSX is tested on 10.9 and 10.11 since we do not have access to other versions of OSX.

    Source

    A source archive is available for distributions wishing to package Krita 3.0. If you’re a curious user, it is recommended to build Krita directly from the git repository instead, so you get all fixes daily fresh. See David Revoy’s guide for an introduction to building Krita. If you build Krita from source and your version of Qt is lower than Qt 5.6.1, it is necessary to also rebuild Qt using the patches in krita/3rdparty/ext_qt.

    May 30, 2016

    designing interaction for creative pros /4

    This is the fourth and final part of my LGM 2015 lecture. Part one urged you to make a clear choice: is the software you’re making for creative professionals, or not? Part two was all about the rally car and the need for speed. Part three showed the need to support the free and measured working modes of masters.

    Today’s topic is how to be good in creative‐pro interaction. We start by revisiting the cars of part two.

    party like it’s…

    It is no coincidence that I showed you a 1991 family car and a 1991 rally car:

    pics of the two cars source: netcarshow.com and imgbuddy.com

    I did that because our world—that of software for creative pros—is largely stuck in that era. And just like 1991 king‐of‑the‑hill cars (even factory‐mint examples found in a time capsule), this software is no longer competitive.

    a pair of yellow, y-front underpants yes, it’s pants! source: charliepants.com

    It is my observation that in this field there is an abundance of opportunities to do better. If one just starts scratching the surface, on a product, workflow, or interaction level, then today’s software simply starts to crumble.

    testing, testing, one two

    For instance, while doing research for the Metapolator project, I asked some users to show me how they work with the font design tools of today. They showed me the glyph range, the central place to organise their work and get started:

    a font editor's table view of all the glyphs in a font

    They also showed me the curve editor, where the detailed work on each glyph is done:

    a big window with a glyph outline editor

    Both of them need the whole screen. In a short amount of time I saw a lot of switching between the two of them. I sensed wasted time and broken flows. I also saw tiny, slow handles in the editor. And I thought: this cannot be it.

    They also showed me, in another program, designing in context:

    editing a glyph in the context of a few others

    I immediately sensed this was a big deal. I saw that they had pushed the envelope—however, not broken through to the other side.

    Besides that, I observed that editing was done in outline mode (see the ‘y’, above), but evaluation is of solid black glyphs. Again I sensed broken flows, because of switching between making and evaluating. And I thought: this cannot be it.

    Frank sez…

    Enough of that; let’s zoom out from my field report, to the big issue at hand. To paraphrase Zappa:

    ‘How it has always been’ may not be quite dead, but it sure smells funny.

    The question is: how did we get to this situation? Let me dig through my experience to find some of the causes.

    First of all we can observe that each piece of creative‐pro software is a vertical market product; i.e. it is not used by the general population; only by certain masters. That means we are in armpit of usability territory. Rereading that blog post, I see I was already knee‐deep into this topic: ‘its development is driven by reacting to what users ask for (features!) and fear of changing “like it has always been” through innovation.’

    go on, have another cake

    The mechanism is clear: users and software makers, living in very different worlds, have a real hard time communicating. Where they manage, they are having the wrong conversation: ‘gimme more features!’ —‘OK, if that makes you happy.’

    What is happening today is that users are discussing software made yesterday. They are not able to communicate that their needs are so lousily addressed. Instead, they want some more cherries on top and this cements the position of this outdated software.

    Constantly, users are telling software makers, implicitly and explicitly, ‘add plenty of candy, but don’t change a thing.’

    This has been going on for decades—lost decades.

    bond of pain

    A second cause that I want to highlight is that both users and software makers have worked for years to get on the inside and it has been a really painful experience for all of them. This unites them against change.

    Thus users have been fighting several frustrating years to get ‘into’ software that was not designed (for them; armpit of usability, remember), but instead made on terms favourable to the software makers.

    Software makers spent year after year trying to make something useful. Lacking any form of user research, the whole process has been an exasperating stab‐in‐the‐dark marathon.

    Thus a variant of the Stockholm syndrome spooks both parties. They are scarred‐for‐life victims of the general dynamic of the pro‑software industry. But now that they have gotten this far, their instinct is to sustain it.

    the point

    Two decades of experience shows that there is a way out of this misery; to become competitive (again). There is no incremental way to get there; you’ll have to snap out of it. What is called for is innovation—of your product, workflow, your interaction. A way that unlocks results is:

    1. user research
      Experienced researchers cut straight past the wants and get the user needs on the table. (Obligatory health & safety notice: market research has nothing to do with user research; it is not even a little bit useful in this context.)
    2. design‐driven innovation
      When user needs are clear (see point 1), then a designer can tell you any minute of the project—first to last—what part of ‘how it has always been’ is begging to be replaced, and which part is the solid foundation to build upon. Designer careers are built on getting this right, every time.

    Skip either point—or do it only in a superficial or non-consequential way—and I’ll guarantee you’ll stay stuck in 1991. Making it happen requires action:

    Software‐makers: enthusiastically seek out user researchers and designers and start to sail by them. Stop considering adding features a good thing, stop being a captive of ‘how it has always been’ and trust the accomplished.

    picture show

    To illustrate all this, let’s look at some of my designs for Metapolator. To be able to solve these problems of contemporary font design tools that I mentioned above, I had to snap out of the old way.

    First of all, I pushed designing in context a lot further, by introducing in‑specimen editing:

    a pangram type specimen is displayed in a window

    Every glyph you see above is directly editable, eliminating switching between overview and editing. The size that the glyphs are displayed in can be adjusted any given moment, whatever suits the evaluate/edit balance.

    ‘OK, that’s great’ you say, ‘but every once in a while one needs a glyph range to do some gardening.’ To address that, I used a handy trick: the glyph range is just another specimen:

    the glyph range organised as a specimen

    Everybody in the Metapolator team thought I was crazy, but I was dead set on eliminating outline mode. I sensed there was a chance to do that here, because the focus moves from working at the edge of the glyph—the high‐contrast black–white transition—to the center line within:

    center line displayed within a full-black glyph, the points     on it connected to large handles outside the glyph area

    Then there was the matter of offering users generous handles that are fast to grab and use. After brainstorming with Simon Egli, the design shown above was born: put them ‘on sticks’ outside, so that they do not impede visual evaluation of the glyph.

    pep talk

    In closing: to be good in creative‐pro interaction, I encourage you to—

    Do not ask how the past can guide you. Ask yourself what you can do to guide your software for creative pros into the 21st century.

    May 27, 2016

    You know what they say: Big hands, small horse.

    CSS Text Line Spacing Exposed!

    Want evenly spaced lines of text like when writing on the lined paper we all used as kids? Should be easy. Turns out with CSS it is not. This post will show why. It is the result of too much time reading specs and making tests as I worked on Inkscape’s multi-line text.

    The first thing to understand is that CSS text works by filling line boxes with glyphs and then stacking the boxes, much as is done in printing with movable type.

    Four lines of movable type placed in a composing stick over a box of movable type.

    Movable type placed in a composing stick. The image has been flipped horizontally so the glyphs are legible. (Modified from photo by Willi Heidelbach [CC BY 2.5], via Wikimedia Commons)

    A line of CSS text is composed of a series of glyphs. It corresponds to a row of movable type where each glyph represents (mostly) a piece of type. The CSS ‘font-size’ property corresponds to the height of the type. A CSS line box contains a line of CSS text plus any leading (extra space) above and below the line.

    Four lines of text mimicking the above figure.

    The same four lines of text as in the previous figure. The CSS line boxes are shown by red rectangles. The line boxes are stacked without any leading between the lines.

    The lines in the above figure are set tight, without any spacing between the lines. This makes the text hard to read. It is normal in typesetting to add a bit of leading between lines to give the lines a small amount of separation. This can be done with CSS through the ‘line-height’ property. A typical value of the ‘line-height’ property would be ‘1.2’ which means in the simplest terms to make the distance between the baselines of the text be 1.2 times the font size. CSS dictates that the extra space be split, half above the line, half below the line. The following example uses a ‘line-height’ value of 1.5 (to make the figure clearer).

    Same four lines of text as in above figure but with leading added between lines.

    The same four lines of text as in the previous figure but with leading added by a ‘line-height’ value of 1.5. The distance between the baselines (light-blue lines) is 1.5 times the font size. (Line boxes without leading are shown in green, line boxes with leading in red.)
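
    To make the arithmetic concrete (the numbers here are only an illustration): with a font size of 20px and ‘line-height: 1.5’, the used line height is 1.5 × 20px = 30px. The glyphs themselves occupy 20px, so the 10px of leading is split into 5px above and 5px below the line, and consecutive baselines end up 30px apart.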

    Unlike with physical typefaces, lines can be moved closer together than the height of the glyph boxes by using a ‘line-height’ value of less than one. Normally you would not want to do this.

    Same four lines of text as in above figure but with negative leading.

    The same four lines of text as in the previous figure but with negative leading generated by a ‘line-height’ value of 0.8.

    When only one font is used (same family and size), the distance between the baselines is consistent and easy to predict. But with multiple fonts it becomes a bit of a challenge. To understand the inner workings of ‘line-height’ we first need to get back to basics.

    Glyphs are designed inside an em box. The ‘font-size’ property scales the em box so when rendered the height of the em box matches the font size. For most scripts, the em box is divided into two parts by a baseline. The ascent measures the distance between the baseline and the top of the box while the descent measures the distance between the baseline and the bottom of the box.

    Diagram of 'em' box showing ascent and descent.

    The coordinate system for defining glyphs is based on the “em box” (blue square). The origin of the coordinate system for Latin based glyphs is at the baseline on the left side of the box. The baseline divides the em box into two parts.

    The distinction between ‘ascent’ and ‘descent’ is important, as the height of the CSS line box is calculated by finding independently the maximum ascent and the maximum descent of all the glyphs in a line of text and then adding the two values. The ratio between ascent and descent is a font design issue and will be different for different font families. Mixing font families may then lead to a line box height greater than that for a single font family.

    Two 'M' glyphs from different fonts aligned to their alphabetic baseline.

    Two ‘M’ glyphs with the same font size but from different font families (DejaVu Sans and Scheherazade). Their glyph boxes (blue rectangles) have the same height (equal to the em box or font size) but the boxes are shifted vertically so that their baselines are aligned. The resulting line box (dashed red rectangle), assuming a ‘line-height’ value of ‘1’, has a height that is greater than if just one font was used.

    Keeping the same font family but mixing different sizes can also give results that are a bit unexpected.

    Two text blocks.

    Left: Text with a font size of 25 pixels and with a ‘line-height’ value of ‘2’. Right: Same as left but font size of 50px for middle line. Notice how the line boxes (red dashed rectangles) are lined up on a grid but that the baselines (light-blue lines) on the right are not; the middle right line’s baseline is off the grid.

    So far, we’ve discussed only ‘line-height’ values that are unitless. Both absolute (‘px’, ‘pt’, etc.) and relative (‘em’, ‘ex’, ‘%’) units are also allowed. The “computed” value of a unitless value is the unitless value itself, while the “computed” value of a value with units is the absolute value. The value actually “used” for determining line box height is, for a unitless value, the computed value multiplied by the font size; for values with units it is the absolute value. For example, assuming a font size of 24px:

    ‘line-height: 1.5’
    computed value: 1.5, used value: 36px;
    ‘line-height: 36px’
    computed and used values: 36px;
    ‘line-height: 150%’
    computed and used values: 36px;
    ‘line-height: 1.5em’
    computed and used values: 36px.

    The importance of this is that it is the computed value of ‘line-height’ that is inherited by child elements. This gives different results for values with units compared to those without as seen in the following figure:

    Two text blocks.

    Left: Text with a font size of 25 pixels and with a ‘line-height’ value of ‘2’. Right: Same as left but ‘line-height’ value of ‘2em’. With the unitless ‘line-height’ value, the child element (second line, span with larger font) inherits the value ‘2’. As the larger font has a size of 50px, the “used” value for ‘line-height’ is 100px (2 times 50px) thus the line box is 100px tall. With the ‘line-height’ value of ‘2em’, the computed value is 50px. This is inherited by the child element which is then used in calculating the line box height. CodePen.

    The astute observer will notice that in the above example the line box height of the middle line on the right is not 50 pixels as one might naturally expect. It is actually a bit larger. Why? Recall that the line box height is calculated from the maximum ascent and maximum descent of all the glyphs. One small detail was left out. CSS dictates that an imaginary zero width glyph called the “strut” be included in the calculation. This strut represents a glyph in the containing block’s initial font and with the block’s initial font size and line height. This throws everything out of alignment as shown in the figure below.

    'A' and 'D' glyphs aligned to a common baseline with the 'D' having twice the font size as the 'A'.

    Let the ‘A’ represents the strut. The glyph boxes for the ‘A’ and ‘D’ without considering line height are shown by blue rectangles. The glyph boxes with line height taken into account are shown by red-dashed rectangles. For the ‘D’, the glyph boxes with and without taking into account the line height are the same. Note that both the ‘A’ and ‘D’ boxes with line height factored in have the same height (2em relative to the containing block font size). The two boxes are aligned using the ‘alphabetic’ baseline. This results in the ‘A’ glyph box (with effect of line height) extending down past the bottom of the ‘D’ glyph box. The resulting line box (solid-pink rectangle) height is thus greater than either of the glyph box heights. The extra height is shown by the light gray rectangle.

    So how can one keep line boxes on a regular grid? The solution is to rely on the strut! The way to do this is to make sure that the ascents and descents of all child elements are smaller than the containing block strut’s ascent and descent values. One can do this most easily by setting ‘line-height’ to zero in child elements.

    Two blocks of text, both showing evenly spaced lines. The third line on the right has text with a font size twice the rest of the lines.

    Text with evenly spaced lines. Left: All text with the same font size. Right: The third line has text with double the font size but with a ‘line-height’ value of ‘0’. This ensures that the strut controls the spacing between lines. CodePen.

    As one can see, positioning text on a regular grid can be done with a bit of effort. Does it have to be so difficult? There may be an easier solution on the horizon. The CSS working group is working on a “Line Grid” specification that may make this trivial.

    May 25, 2016

    Blog backlog, Post 3, DisplayLink-based USB3 graphics support for Fedora

    Last year, after DisplayLink released the first version of the supporting tools for their USB3 chipsets, I tried it out on my Dell S2340T.

    As I wanted a clean way to test new versions, I took Eric Nothen's RPMs, and updated them along with newer versions, automating the creation of 32- and 64-bit x86 versions.

    The RPM contains 3 parts: evdi, a GPLv2 kernel module that creates a virtual display; the LGPL library to access it; and a proprietary service which comes with "firmware" files.

    Eric's initial RPMs used the precompiled libevdi.so, and proprietary bits, compiling only the kernel module with dkms when needed. I changed this, compiling the library from the upstream repository, using the minimal amount of pre-compiled binaries.

    This package supports quite a few OEM devices, but does not work correctly with Wayland, so you'll need to disable Wayland support in /etc/gdm/custom.conf if you want it to work at the login screen, and without having to restart the displaylink.service systemd service after logging in.
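
    For reference, disabling Wayland in GDM typically means a stanza along these lines in /etc/gdm/custom.conf (shown here as an illustration — keep whatever else already lives in your [daemon] section):

    [daemon]
    # Force the Xorg session so the DisplayLink output is usable at the
    # login screen.
    WaylandEnable=false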


     Plugged in via DisplayPort and USB (but I can only see one at a time)


    The sources for the RPM are on GitHub. Simply clone and run make in the repository to create 32-bit and 64-bit RPMs. The proprietary parts are redistributable, so if somebody wants to host and maintain those RPMs, I'd be glad to pass this on.

    May 24, 2016

    LVFS Technical White Paper

    I spent a good chunk of today writing a technical whitepaper titled Introducing the Linux Vendor Firmware Service — I’d really appreciate any comments, either from people who have seen all progress from the start or who don’t know anything about it at all.

    Typos, or more general comments are all welcome and once I’ve got something a bit more polished I’ll be sending this to some important suits in a few well known companies. Thanks for any help!

    Year of the Linux Desktop

    As some of you already know, the xdg-app project is dead. The Swedish conspiracy members tell me it’s a good thing and that you should turn your attention to project Flatpak.

    Flatpak aims to solve the painful problem of the Linux distribution — the fact that the OS is intertwined with the applications. It is a pain to decouple the two to be able to

    • Keep a particular version of an app around, regardless of OS updates. Or vice versa, be able to run an up-to-date application on an older OS.
    • Allow application authors to distribute binaries they built themselves. Binaries they can support and accept useful bug reports for. Binaries they can keep updated.

    But enough of the useful info, you can read all about the project on the new website. Instead, here comes the irrelevant tidbits that I find interesting to share myself. The new website has been built with Middleman, because that’s what I’ve been familiar with and worked for me in other projects.

    It’s nice to have a static site that is maintainable and easy to update over time. Using something like Middleman allows you to do things like embedding an SVG inside a simple markdown page and animating it with CSS.

    =partial "graph.svg"
    :css
      @keyframes spin {
        0% { transform: rotateZ(0deg); }
        100% { transform: rotateZ(359deg); }
      }
      #cog {
        animation: spin 6s infinite normal linear forwards;
      }
    

    See it in action.

    The resulting page has the SVG embedded to allow text copy & pasting and page linking, while keeping the SVG as a separate asset allows easy edits in Inkscape.

    What I found really refreshing is seeing so much outside involvement on the website despite never really publicising it. Even while developing the site as my personal project I would get kind pull requests and bug reports on GitHub. Thanks to all the kind souls out there. While not forgetting about future-proofing our infrastructure, we should probably keep the barrier to entry low and make use of well-established infrastructure like GitHub.

    Also, there is no Swedish conspiracy. Oh and Flatpak packages are almost ready to go for Fedora.

    colour manipulation with the colour checker lut module

    colour manipulation with the colour checker lut module

    [update 2016/07/31: there was a section about intermediate export to csv and manually changing that file. this is no longer needed, exporting the style directly from darktable-chart is fine now.]

    motivation

    for raw photography there exist great presets for nice colour rendition:

    • in-camera colour processing such as canon picture styles
    • fuji film-emulation-like presets (provia velvia astia classic-chrome)
    • pat david's film emulation luts

    unfortunately these are eat-it-or-die canned styles or icc lut profiles. you
    have to apply them and be happy or tweak them with other tools. but can we
    extract meaning from these presets? can we have understandable and tweakable
    styles like these?

    in a first attempt, i used a non-linear optimiser to control the parameters of
    the modules in darktable's processing pipeline and try to match the output of
    such styles. while this worked reasonably well for some of pat's film luts, it
    failed completely on canon's picture styles. it was very hard to reproduce
    generic colour-mapping styles in darktable without parametric blending.

    that is, we require a generic colour to colour mapping function. this should be
    equally powerful as colour look up tables, but enable us to inspect it and
    change small aspects of it (for instance only the way blue tones are treated).

    overview

    in git master, there is a new module to implement generic colour mappings: the
    colour checker lut module (lut: look up table). the following will be a
    description how it works internally, how you can use it, and what this is good
    for.

    in short, it is a colour lut that remains understandable and editable. that is,
    it is not a black-box look up table, but you get to see what it actually does
    and change the bits that you don't like about it.

    the main use cases are precise control over source colour to target colour
    mapping, as well as matching in-camera styles that process raws to jpg in a
    certain way to achieve a particular look. an example of this are the fuji film
    emulation modes. to this end, we will fit a colour checker lut to achieve their
    colour rendition, as well as a tone curve to achieve the tonal contrast.

    target

    to create the colour lut, it is currently necessary to take a picture of an
    it8 target (well, technically we support any similar target, but
    didn't try them yet so i won't really comment on it). this gives us a raw
    picture with colour values for a few colour patches, as well as an in-camera jpg
    reference (in the raw thumbnail..), and measured reference values (what we know
    it should look like).

    to map all the other colours (that fell in between the patches on the chart) to
    meaningful output colours, too, we will need to interpolate this measured
    mapping.

    theory

    we want to express a smooth mapping from input colours \(\mathbf{s}\) to target
    colours \(\mathbf{t}\), defined by a couple of sample points (which will in our
    case be the 288 patches of an it8 chart).

    the following is a quick summary of what we implemented and much better
    described in JP's siggraph course [0].

    radial basis functions

    radial basis functions are a means of interpolating between sample points
    via

    $$f(x) = \sum_i c_i\cdot\phi(\| x - s_i\|),$$

    with some appropriate kernel \(\phi(r)\) (we'll get to that later) and a set of
    coefficients \(c_i\) chosen to make the mapping \(f(x)\) behave like we want it at
    and in between the source colour positions \(s_i\). now to make
    sure the function actually passes through the target colours, i.e. \(f(s_i) =
    t_i\), we need to solve a linear system. because we want the function to take
    on a simple form for simple problems, we also add a polynomial part to it. this
    makes sure that black and white profiles turn out to be black and white and
    don't oscillate around zero saturation colours wildly. the system is

    $$ \left(\begin{array}{cc}A &P\\P^t & 0\end{array}\right)
    \cdot \left(\begin{array}{c}\mathbf{c}\\\mathbf{d}\end{array}\right) =
    \left(\begin{array}{c}\mathbf{t}\\0\end{array}\right)$$

    where

    $$A=\left(\begin{array}{ccc}
    \phi(r_{00})& \phi(r_{10})& \cdots \\
    \phi(r_{01})& \phi(r_{11})& \cdots \\
    \phi(r_{02})& \phi(r_{12})& \cdots \\
    \cdots & & \cdots
    \end{array}\right),$$

    and \(r_{ij} = \| s_i - s_j \|\) is the distance (CIE 76 \(\Delta\)E,
    \(\sqrt{\Delta L^2 + \Delta a^2 + \Delta b^2}\) ) between
    source colours \(s_i\) and \(s_j\), in our case

    $$P=\left(\begin{array}{cccc}
    L_{s_0}& a_{s_0}& b_{s_0}& 1\\
    L_{s_1}& a_{s_1}& b_{s_1}& 1\\
    \cdots
    \end{array}\right)$$

    is the polynomial part, and \(\mathbf{d}\) are the coefficients to the polynomial
    part. these are here so we can for instance easily reproduce \(t = s\) by setting
    \(\mathbf{d} = (1, 1, 1, 0)\) in the respective row. we will need to solve this
    system for the coefficients \(\mathbf{c}=(c_0,c_1,\cdots)^t\) and \(\mathbf{d}\).

    many options will do the trick and solve the system here. we use singular value
    decomposition in our implementation. one advantage is that it is robust against
    singular matrices as input (accidentally map the same source colour to
    different target colours for instance).

    thin plate splines

    we didn't yet define the radial basis function kernel. it turns out so-called
    thin plate splines have very good behaviour in terms of low oscillation/low curvature
    of the resulting function. the associated kernel is

    $$\phi(r) = r^2 \log r.$$

    note that there is a similar functionality in gimp as a gegl colour mapping
    operation (which i believe is using a shepard-interpolation-like scheme).
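
    to make the recipe above a bit more tangible, here is a rough sketch of how the
    fit and the evaluation could look in code. this is only an illustration: it
    assumes Eigen for the linear algebra and the function names are made up, it is
    not darktable's actual implementation.

    // sketch only (not darktable code): thin plate spline fit in Lab space.
    #include <Eigen/Dense>
    #include <cmath>

    static double tps_kernel(double r) {
      return r > 0.0 ? r * r * std::log(r) : 0.0;   // phi(r) = r^2 log r
    }

    // s and t are N x 3 matrices of source/target Lab colours.
    // returns the (N+4) x 3 coefficient matrix (c stacked on d).
    Eigen::MatrixXd fit_tps(const Eigen::MatrixXd &s, const Eigen::MatrixXd &t)
    {
      const int n = s.rows();
      Eigen::MatrixXd M = Eigen::MatrixXd::Zero(n + 4, n + 4);
      Eigen::MatrixXd rhs = Eigen::MatrixXd::Zero(n + 4, 3);
      for (int i = 0; i < n; i++)
      {
        for (int j = 0; j < n; j++)                            // A: kernel matrix,
          M(i, j) = tps_kernel((s.row(i) - s.row(j)).norm());  // CIE 76 dE in Lab
        M(i, n + 0) = s(i, 0);                                 // P: L, a, b, 1
        M(i, n + 1) = s(i, 1);
        M(i, n + 2) = s(i, 2);
        M(i, n + 3) = 1.0;
        M.block(n, i, 4, 1) = M.block(i, n, 1, 4).transpose(); // P^t
        rhs.row(i) = t.row(i);
      }
      // svd is robust against singular input (same source colour mapped twice).
      return M.jacobiSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(rhs);
    }

    // f(x) = sum_i c_i phi(|x - s_i|) + d . (L, a, b, 1)
    Eigen::Vector3d eval_tps(const Eigen::MatrixXd &coef,
                             const Eigen::MatrixXd &s, const Eigen::Vector3d &x)
    {
      const int n = s.rows();
      Eigen::Vector3d out = Eigen::Vector3d::Zero();
      for (int i = 0; i < n; i++)
        out += coef.row(i).transpose() * tps_kernel((s.row(i).transpose() - x).norm());
      for (int k = 0; k < 3; k++)                              // polynomial part
        out += coef.row(n + k).transpose() * x(k);
      out += coef.row(n + 3).transpose();
      return out;
    }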

    creating a sparse solution

    we will feed this system with 288 patches of an it8 colour chart. that means,
    with the added four polynomial coefficients, we have a total of 292
    coefficients to manage here. apart from performance issues when
    executing the interpolation, we didn't want that to show up in the gui like
    this, so we were looking to reduce this number without introducing large error.

    indeed this is possible, and literature provides a nice algorithm to do so, which
    is called orthogonal matching pursuit [1].

    this algorithm will select the most important handful of coefficients \(\in
    \mathbf{c},\mathbf{d}\), to keep the overall error low. in practice we run it up
    to a predefined number of patches (\(24=6\times 4\) or \(49=7\times 7\)), to make
    best use of gui real estate.

    the colour checker lut module

    clut-iop

    gui elements

    when you select the module in darkroom mode, it should look something like the
    image above (configurations with more than 24 patches are shown in a 7\(\times\)7 grid
    instead). by default, it will load the 24 patches of a colour checker classic
    and initialise the mapping to identity (no change to the image).

    • the grid shows a list of coloured patches. the colours of the patches are
      the source points \(\mathbf{s}\).
    • the target colour \(t_i\) of the selected patch \(i\) is shown as
      offset controlled by sliders in the ui under the grid of patches.
    • an outline is drawn around patches that have been altered, i.e. the source
      and target colours differ.
    • the selected patch is marked with a white square, and the number shows
      in the combo box below.

    interaction

    to interact with the colour mapping, you can change both source and target
    colours. the main use case is to change the target colours however, and start
    with an appropriate palette (see the presets menu, or download a style
    somewhere).

    • you can change lightness (L), green-red (a), blue-yellow (b), or saturation
      (C) of the target colour via sliders.
    • select a patch by left clicking on it, or using the combo box, or using the
      colour picker
    • to change source colour, select a new colour from your image by using the
      colour picker, and shift-left-click on the patch you want to replace.
    • to reset a patch, double-click it.
    • right-click a patch to delete it.
    • shift-left-click on empty space to add a new patch (with the currently
      picked colour as source colour).

    example use cases

    example 1: dodging and burning with the skin tones preset

    to process the following image i took of pat in the overground, i started with
    the skin tones preset in the colour checker module (right click on nothing in
    the gui or click on the icon with the three horizontal lines in the header and
    select the preset).

    then, i used the colour picker (little icon to the right of the patch# combo
    box) to select two skin tones: very bright highlights and dark shadow tones.
    the former i dragged the brightness down a bit, the latter i brightened up a
    bit via the lightness (L) slider. this is the result:

    original / dialed down contrast in skin tones

    example 2: skin tones and eyes

    in this image, i started with the fuji classic chrome-like style (see below for
    a download link), to achieve the subdued look in the skin tones. then, i
    picked the iris colour and saturated this tone via the saturation slider.

    as a side note, the flash didn't fire in this image (iso 800) so i needed to
    stop it up by 2.5ev and the rest is all natural lighting..

    original

    +2.5ev classic chrome / saturated eyes

    use darktable-chart to create a style

    as a starting point, i matched a colour checker lut interpolation function to
    the in-camera processing of fuji cameras. these have the names of old film and
    generally do a good job at creating pleasant colours. this was done using the
    darktable-chart utility, by matching raw colours to the jpg output (both in
    Lab space in the darktable pipeline).

    here is the link to the fuji styles, and how to use them.
    i should be doing pat's film emulation presets with this, too, and maybe
    styles from other cameras (canon picture styles?). darktable-chart will
    output a dtstyle file, with the mapping split into tone curve and colour
    checker module. this allows us to tweak the contrast (tone curve) in isolation
    from the colours (lut module).

    these styles were created with the X100T model, and reportedly they work so/so
    with different camera models. the idea is to create a Lab-space mapping which
    is well configured for all cameras. but apparently there may be sufficient
    differences between the output of different cameras after applying their colour
    matrices (after all these matrices are just an approximation of the real camera
    to XYZ mapping).

    so if you're really after maximum precision, you may have to create the styles
    yourself for your camera model. here's how:

    step-by-step tutorial to match the in-camera jpg engine

    note that this is essentially similar to pascal's colormatch script, but will result in an editable style for darktable instead of a fixed icc lut.

    • need an it8 (sorry, could lift that, maybe, similar to what we do for
      basecurve fitting)
    • shoot the chart with your camera:
      • shoot raw + jpg
      • avoid glare and shadow and extreme angles, potentially the rims of your
        image altogether
      • shoot a lot of exposures, try to match L=92 for G00 (or look that up in
        your it8 description)
    • develop the images in darktable:
      • lens and vignetting correction needed on both or on neither of raw + jpg
      • (i calibrated for vignetting, see lensfun)
      • output colour space to Lab (set the secret option in darktablerc:
        allow_lab_output=true)
      • standard input matrix and camera white balance for the raw, srgb for jpg.
      • no gamut clipping, no basecurve, no anything else.
      • maybe do perspective correction and crop the chart
      • export as float pfm
    • darktable-chart
      • load the pfm for the raw image and the jpg target in the second tab
      • drag the corners to make the mask match the patches in the image
      • maybe adjust the security margin using the slider in the top right, to
        avoid stray colours being blurred into the patch readout
      • you need to select the gray ramp in the combo box (not auto-detected)
      • click process
      • export
      • fix up style description in the export dialog if you want
      • outputs a .dtstyle with everything properly switched off, and two modules
        on: colour checker + tonecurve in Lab

    darktable-lut-tool-crop-01darktable-lut-tool-crop-02

    darktable-lut-tool-crop-03darktable-lut-tool-crop-04

    to fix wide gamut input, it may be needed to enable gamut clipping in the input colour
    profile module when applying the resulting style to an image with highly
    saturated colours. darktable-chart does that automatically in the style it
    writes.

    fitting error

    when processing the list of colour pairs into a set of coefficients for the
    thin plate spline, the program will output the approximation error, indicated
    by average and maximum CIE 76 \(\Delta E\) for the input patches (the it8 in the
    examples here). of course we don't know anything about colours which aren't
    represented in the patch. the hope would be that the sampling is dense enough
    for all intents and purposes (but nothing is holding us back from using a
    target with even more patches).

    for the fuji styles, these errors are typically in the range of mean \(\Delta E\approx 2\)
    and max \(\Delta E \approx 10\) for 24 patches and a bit less for 49.
    unfortunately the error does not decrease very fast in the number of patches
    (and will of course drop to zero when using all the patches of the input chart).

    provia 24:rank 28/24 avg DE 2.42189 max DE 7.57084
    provia 49:rank 53/49 avg DE 1.44376 max DE 5.39751
    
    astia-24:rank 27/24 avg DE 2.12006 max DE 10.0213
    astia-49:rank 52/49 avg DE 1.34278 max DE 7.05165
    
    velvia-24:rank 27/24 avg DE 2.87005 max DE 16.7967
    velvia-49:rank 53/49 avg DE 1.62934 max DE 6.84697
    
    classic chrome-24:rank 28/24 avg DE 1.99688 max DE 8.76036
    classic chrome-49:rank 53/49 avg DE 1.13703 max DE 6.3298
    
    mono-24:rank 27/24 avg DE 0.547846 max DE 3.42563
    mono-49:rank 52/49 avg DE 0.339011 max DE 2.08548
    

    future work

    it is possible to match the reference values of the it8 instead of a reference
    jpg output, to calibrate the camera more precisely than the colour matrix
    would.

    • there is a button for this in the darktable-chart tool
    • needs careful shooting, to match brightness of reference value closely.
    • at this point it's not clear to me how white balance should best be handled here.
    • need reference reflectances of the it8 (wolf faust ships some for a few illuminants).

    another next step we would like to take with this is to match real film footage
    (portra etc). both reference and film matching will require some global exposure
    calibration though.

    references

    • [0] Ken Anjyo and J. P. Lewis and Frédéric Pighin, "Scattered data interpolation for computer graphics" in Proceedings of SIGGRAPH 2014 Courses, Article No. 27, 2014. pdf
    • [1] J. A. Tropp and A. C. Gilbert, "Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit", in IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655-4666, Dec. 2007.

    May 23, 2016

    Interview with Neotheta

    Monni

    Could you tell us something about yourself?

    I’m Neotheta, a 23-year-old from Finland, and I draw colourful pictures with animals, furries and the like. Drawing has been my passion since I was little. I was also interested in digital art early on, but had a love&hate relationship with it because computers were not for kids, were unstable, and the tools were pretty awkward back in those days. So I learned drawing mostly with traditional tools first.

    Do you paint professionally, as a hobby artist, or both?

    Both, I work full-time as an artist right now and hope to continue so!

    What genre(s) do you work in?

    Is furry a genre? I practice my own styles to draw animals and furries – luckily this is where the most demand for my work is as well. But I’ve also drawn more cartoony & simplified styles for children’s books.

    Supernova

    Whose work inspires you most — who are your role models as an artist?

    My mom draws really well, she got me drawing! After that it’s been a blender of many inspirations, they tend to change pretty frequently – except I’ve always loved cats! I’m more of a role-maker than a taker, so I often do things differently on purpose – it’s not always a good thing but probably the reason why I’m currently drawing for a living.

    How and when did you get to try digital painting for the first time?

    6-years-old, I drew a colourful bunny with MS paint at my mom’s workplace (a school) and told her students that this is the future! After that my parents also gave me a drawing tablet but I was somewhat disappointed at the programs intended for digital art at the time – they were all kind of awkward to use. So I gave up digital art for many years and went back to traditional tools. I think I was 15 when I decided to try again seriously.

    What makes you choose digital over traditional painting?

    I enjoy bright colours; many of those are difficult to produce with traditional colours. Also the ability to print the finished drawing on the desired material, such as fabrics – or to test what it looks best on and at what size. I can also share the same drawing with many people if the outcome is awesome.

    How did you find out about Krita?

    I was actually on a sort of mental breakdown because my computer had kicked the bucket and my new setup simply didn’t function together. I had recently experienced how stable and awesome Debian was for PC and I really wanted to give it a try instead of windows. In the middle of the mess and new things someone told me I should try Krita because it sounded like it’d fit my needs – a drawing program for Linux.

    What was your first impression?

    I was in total awe because first I was ready to sacrifice not using my old favorite programs just so I could work stable. But then Krita turned out to be better than my previous combination of using Paint tool Sai + Photoshop CS2, it had all the same features I needed in one. Krita on Linux was also SO STABLE and FAST and there was autosaving just in case. I learned to use Krita really quickly (also thanks to the helpful community!) and kept finding new useful tools like a constant stream. It was like a dream come true (still is).

    What do you love about Krita?

    It’s so stable and fast, I have it on my powerful desktop and old laptop and it functions so nicely on both of them! The community is wonderful. The brush engine is so diverse, interface is customizable, g’mic plugin, line smoothing, perspective assistants… to name a few!

    What do you think needs improvement in Krita? Is there anything that really annoys you?

    Better text tools and multipage pdf saving would make Krita perfect for comics.

    What sets Krita apart from the other tools that you use?

    Stability, fast performance, for Linux, well designed for drawing and painting, and lots of features!

    If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

    Everything I’ve drawn in recent years has been in Krita, so it’s a difficult pick. My current favorite is a personal drawing of my dragon character in a mystical crystal cavern.

    Crystal_Cavern

    What techniques and brushes did you use in it?

    This is my more simple style, first I sketch, then lineart, colour and add textures last. I’ve mostly used Wolthera’s dynamic inking pen, airbrush, gradients and layer effects. A more detailed description and .kra file for inspecting can be found from my site here: https://neotheta.fi/tutorials/wips/crystal/

    Where can people see more of your work?

    https://neotheta.fi

    Anything else you’d like to share?

    I recently made a Telegram sticker pack that includes one sticker to spread the word about Krita (when people get my pack they get the Krita sticker too). Feel free to add it to yours too or use it in another creative way!

    kritasticker

    Krita at KomMissia

    Last weekend, ace Krita hacker Dmitry Kazakov attended KomMissia, the annual Russian comics festival. Best quote award goes to the Wacom booth attendants, who install Krita on all their demo machines because “too many people keep asking about it”!

    Here’s Dmitry’s report: enjoy!

    Last weekend I attended the annual Russian comics festival “KomMissia”. It is a nice event where a lot of comic writers, painters and publishers meet, share ideas, talk and have a lot of fun together.

    My main goal in visiting the event was to find out what people in the comics industry need and what tools they expect to see in a graphics application. One of the goals of our Kickstarter 2016 is to create text tools for comic artists, so I got a lot of useful information about the process the painters use.

    There were a lot of classes by famous comic masters. I was really impressed by the way Dennis Calero works (although he doesn’t use Krita, I hope “yet”). He uses custom brushes in quite an unexpected way to create the hatching in his paintings. He paints a single hatch, then creates a brush from it, and then just paints sets of hatches to create a shadow, using the ‘[‘ and ‘]’ shortcuts to modify the size of individual hatches. Now I really want to implement a shortcut for that in Krita!

    image02
    I also got in touch with people from the Wacom team, who had a booth there. They showed a lot of nice and huge Cintiqs. The funniest thing happened when I asked whether I could install Krita on their devices: they answered that they already have Krita on most of their demo machines! They said that quite a lot of people had asked them about Krita during previous events, so they decided to install it by default 🙂 So now, if you happen to see a Wacom booth at an event, you can easily go and test Krita there!

    image03

    image04

    There were also a lot of classes organized by the art-material shops. They let people try various markers, paints and papers. I tried all of them. And now I have a lot of new ideas for new Krita brushes! Hehe… 🙂

    This is my “masterpiece” done with watercolor markers 🙂 We can actually implement something like that… The user might paint with usual brushes and then use a special tool for “watering” the canvas. That might be really useful for painters!

    image07
    And the paintings below are not just “testing strokes with acrylic markers”. They are a live illustration of Kubelka-Munk color reflectance theory! The “lemon yellow” pigment is the same in both pictures, but due to the different opacity of its particles it looks absolutely different on different background colors!

    image00

    image08

    So now I’ve got a lot of ideas about what brushes and tools can be implemented in Krita! Just follow us on Twitter and VK and you will be the first to know about new features! 🙂

    PS:

    More photos and paintings (by Nikky Art) from the event

    image01
    image06

    image05

    External Plugins in GNOME Software (6)

    This is my last post about the gnome-software plugin structure. If you want more, join the mailing list and ask a question. If you’re not sure how something works then I’ve done a poor job on the docs, and I’m happy to explain as much as required.

    GNOME Software used to provide a per-process plugin cache, automatically de-duplicating applications and trying to be smarter than the plugins themselves. This involved merging applications created by different plugins and really didn’t work very well. For 3.20 and later we moved to a per-plugin cache which allows the plugin to control getting and adding applications to the cache and invalidating it when it made sense. This seems to work a lot better and is an order of magnitude less complicated. Plugins can trivially be ported to using the cache using something like this:

     
       /* create new object */
       id = gs_plugin_flatpak_build_id (inst, xref);
    -  app = gs_app_new (id);
    +  app = gs_plugin_cache_lookup (plugin, id);
    +  if (app == NULL) {
    +     app = gs_app_new (id);
    +     gs_plugin_cache_add (plugin, id, app);
    +  }
    

    Using the cache has two main benefits for plugins. The first is that we avoid creating duplicate GsApp objects for the same logical thing. This means we can query the installed list, start installing an application, then query it again before the install has finished. The GsApp returned from the second add_installed() request will be the same GObject, and thus all the signals connecting up to the UI will still be correct. This means we don’t have to care about migrating the UI widgets as the object changes and things like progress bars just magically work.

    The other benefit is more obvious. If we know the application state from a previous request we don’t have to query a daemon or do another blocking library call to get it. This does of course imply that the plugin is properly invalidating the cache using gs_plugin_cache_invalidate() which it should do whenever a change is detected. Whether a plugin uses the cache for this reason is up to the plugin, but if it does it is up to the plugin to make sure the cache doesn’t get out of sync.
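
    As an illustration, a plugin watching its backend for changes might do something like this minimal sketch (the callback name and how it gets wired up to a monitor are hypothetical, not part of the gnome-software API):

    static void
    gs_plugin_example_changed_cb (GsPlugin *plugin)
    {
      /* something changed behind our back, so any cached GsApp state
       * can no longer be trusted; the next request will re-query it */
      gs_plugin_cache_invalidate (plugin);
    }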

    And one last thing: if you’re thinking of building an out-of-tree plugin for production use, ask yourself whether it actually belongs upstream. Upstream plugins get ported as the API evolves, and I’m already happily carrying Ubuntu- and Fedora-specific plugins that either self-disable at runtime or are protected behind an --enable-foo configure argument.

    External Plugins in GNOME Software (5)

    This is my penultimate post about the gnome-software plugin structure. If you’ve followed everything so far, well done.

    There’s a lot of flexibility in the gnome-software plugin structure; a plugin can add custom applications and handle things like search and icon loading in a totally custom way. Most of the time you don’t care about how search is implemented or how icons are going to be loaded, and you can re-use a lot of the existing code in the appstream plugin. To do this you just save an AppStream-format XML file in either /usr/share/app-info/xmls/, /var/cache/app-info/xmls/ or ~/.local/share/app-info/xmls/. GNOME Software will immediately notice any new files, or changes to existing files as it has set up the various inotify watches.

    This allows plugins to care a lot less about how applications are going to be shown. For example, the steam plugin downloads and parses the descriptions from a remote service during gs_plugin_refresh(), and also finds the best icon types and downloads them too. Then it exports the data to an AppStream XML file, saving it to your home directory. This allows all the applications to be easily created (and then refined) using something as simple as gs_app_new("steam:foo.desktop"). All the search tokenisation and matching is done automatically, so it makes the plugin much simpler and faster.

    The only extra step the steam plugin needs to do is implement the gs_plugin_adopt_app() function. This is called when an application does not have a management plugin set, and allows the plugin to claim the application for itself so it can handle installation, removal and updating. In the case of steam it could check the ID has a prefix of steam: or could check some other plugin-specific metadata using gs_app_get_metadata_item().
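
    A steam-style adopt function could therefore be as small as the following sketch (the "steam:" prefix check just mirrors the description above and is illustrative, not taken from the actual plugin):

    void
    gs_plugin_adopt_app (GsPlugin *plugin, GsApp *app)
    {
      const gchar *id = gs_app_get_id (app);
      /* claim anything this plugin created itself, based on the ID prefix */
      if (id != NULL && g_str_has_prefix (id, "steam:"))
        gs_app_set_management_plugin (app, "steam");
    }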

    Another good example is the fwupd plugin, which wants to handle any firmware we’ve discovered in the AppStream XML. This might be shipped by the vendor in a package using Satellite, or downloaded from the LVFS. It wouldn’t be kind to set a management plugin explicitly in case XFCE or KDE want to handle this in a different way. The adoption function in this case is trivial:

    void
    gs_plugin_adopt_app (GsPlugin *plugin, GsApp *app)
    {
      if (gs_app_get_kind (app) == AS_APP_KIND_FIRMWARE)
        gs_app_set_management_plugin (app, "fwupd");
    }
    

    The next (and last!) blog post I’m going to write is about the per-plugin cache that’s available to plugins to help speed up some operations. In related news, we now have a mailing list, so if you’re interested in this stuff I’d encourage you to join and ask questions there. I also released gnome-software 3.21.2 this morning, so if you want to try all this plugin stuff yourself your distro is probably going to be updating packages soon.

    May 22, 2016

    External Plugins in GNOME Software (4)

    After my last post, I wanted to talk more about the refine functionality in gnome-software. As previous examples have shown it’s very easy to add a new application to the search results, updates list or installed list. Some plugins don’t want to add more applications, but want to modify existing applications to add more information depending on what is required by the UI code. The reason we don’t just add everything at once is that for search-as-you-type to work effectively we need to return results in less than about 50ms and querying some data can take a long time. For example, it might take a few hundred ms to work out the download size for an application when a plugin has to also look at what dependencies are already installed. We only need this information once the user has clicked the search results and when the user is in the details panel, so we can save a ton of time not working out properties that are not useful.

    Let’s look at another example.

    gboolean
    gs_plugin_refine_app (GsPlugin *plugin,
                          GsApp *app,
                          GsPluginRefineFlags flags,
                          GCancellable *cancellable,
                          GError **error)
    {
      /* not required */
      if ((flags & GS_PLUGIN_REFINE_FLAGS_REQUIRE_LICENSE) == 0)
        return TRUE;
    
      /* already set */
      if (gs_app_get_license (app) != NULL)
        return TRUE;
    
      /* FIXME, not just hardcoded! */
      if (g_strcmp0 (gs_app_get_id (app), "chiron.desktop") == 0)
        gs_app_set_license (app, GS_APP_QUALITY_NORMAL,
                            "GPL-2.0 and LGPL-2.0+");
    
      return TRUE;
    }
    

    This is a simple example, but shows what a plugin needs to do. It first checks if the action is required, in this case GS_PLUGIN_REFINE_FLAGS_REQUIRE_LICENSE. This request is more common than you might expect as even the search results shows a non-free label if the license is unspecified or non-free. It then checks if the license is already set, returning with success if so. If not, it checks the application ID and hardcodes a license; in the real world this would be querying a database or parsing an additional config file. As mentioned before, if the license value is freely available without any extra work then it’s best just to set this at the same time as when adding the app with gs_app_list_add(). Think of refine as adding things that cost time to calculate only when really required.

    The UI in gnome-software is quite forgiving for missing data, hiding sections or labels as required. Some things are required however, and forgetting to assign an icon or short description will get the application vetoed so that it’s not displayed at all. Helpfully, running gnome-software --verbose on the command line will tell you why an application isn’t shown along with any extra data.

    As a last point, a few people have worried that these blogs are perhaps asking for trouble; external plugins have a chequered history in a number of projects and I’m sure gnome-software would be in an even worse position given that the core maintainer team is still so small. Being honest, if we break your external plugin due to an API change in the core you probably should have pushed your changes upstream sooner. There’s a reason you have to build with -DI_KNOW_THE_GNOME_SOFTWARE_API_IS_SUBJECT_TO_CHANGE.

    Funding Krita’s Development

    Funding Krita

    We’re running this kickstarter to fund Krita’s development. That sounds like a truism, but free software projects actually trying to fund development is still a rarity. When KDE, our mother project, holds a fundraiser, it’s to collect the budget to make developer meetings possible, fund infrastructure (like this, Krita’s website) and so on, but KDE does not pay developers. Other projects don’t even try, or leave it to individual developers. Still others, like Blender, have a lot of experience funding development, of course.

    We are happily learning from Blender, and have funded development for years. The first fund raisers were to pay Lukas Tvrdy to work full-time on Krita for a couple of months. His work was handsomely funded and made it possible for Krita to take the leap from slow-and-buggy to usable for day to day work.

    Since 2013, the Krita Foundation has supported Dmitry Kazakov to work full-time on Krita. And if we may be allowed to toot our horn a bit, that’s a pretty awesome achievement for a project that’s so much smaller than, for example, Blender. The results are there: every release makes the previous release look old-hat. Since 2015, the Foundation also sponsors me, that’s Boudewijn Rempt, to work on Krita for three days a week. The other three days I have a day job — Krita doesn’t really bring in enough money to pay for my mortgage yet.

    So, what’s coming in, and what’s going out?

    In:

    krita-foundation-income-20152016

    • Kickstarter: last year’s kickstarter resulted in about 35,000 euros. (Which explains this year’s goal of 30,000, which I’m sure we’re going to make!)
    • Monthly donations through the development fund: about 350 euros per month, 4200 euros per year
    • Krita on Steam: about 500 euros a month, 6000 euros per year
    • One-time donations through paypal: about 500 euros per month; since this drops off sharply during the kickstarter month, it’s only about 5000 euros a year
    • Sales of training videos: about 500 euros per month, same as with the donations, so about 5000 euros a year.

    Last year we also had a total of 20,000 euros in special one-time donations, one earmarked for the port to Qt 5.

    So, we have a yearly income of about 60,000 euros. Not bad for a free software project without any solid commercial backing! Especially not when looking at what we’re doing with it!

    Now for spending the money — always fun!

    krita-foundationoutgo-20152016

    • Sponsored development: for Dmitry and me together, that’s about 42,000 a year. Yes, we’re cheap. And if you’re a commercial user of Krita and need something developed, contact us!
    • Supporting our volunteers: there are some volunteers in our community who spend an inordinate amount of time on Krita, for instance, preparing and sending out all kickstarter rewards. Dutch law allows us to give those volunteers a little something, and that comes to about 3000 euros a year.
    • Hardware. We cannot buy all obscure drawing tablets on the market, so that’s not where we spend our money. Besides, manufacturers like Wacom, Huion and Yiynova have supported us by sending us test hardware! But when we decided to make OSX a first-level supported platform, we needed a Mac. When there were reports of big trouble on AMD CPU/GPU hardware, we needed a test system. This comes to about 2500 euros.
    • Mini-sprints: basically, getting a small group together, such as the Summer of Code students, me and Dmitry to prepare the projects, or getting Wolthera and me together to prepare the kickstarter. That’s about 1000 euros a year.
    • Video course: we spend about 3000 euros a year on creating a new video training course. This year will be all about animation!
    • Kickstarter rewards, postage, administrative costs: 7000 euros.

    So, the total we spend at the moment is about… 57,500 euros.

    In other words, Mr. Micawber would declare us to be happy! “Annual income twenty pounds, annual expenditure nineteen nineteen and six, result happiness. Annual income twenty pounds, annual expenditure twenty pounds ought and six, result misery.”

    But there’s not much of a buffer here, and a lot of potential for growth! And that’s still my personal goal for Krita: over the coming year or two, double the income and the spending.

    #happybdaybassel 2016 with Cost of Freedom & Waiting… Books In Print

    Today, for Bassel’s 35th birthday (#happybdaybassel), we released, along with many others, The Cost of Freedom and Waiting…, a prose book now available in print.

    External Plugins in GNOME Software (3)

    Lots of nice feedback from my last post, so here’s some new stuff. Up now is downloading new metadata and updates in plugins.

    The plugin loader supports a gs_plugin_refresh() vfunc that is called in various situations. To ensure plugins have the minimum required metadata on disk it is called at startup, but with a cache age of infinite. This basically means the plugin must just ensure that any data exists no matter what the age.

    Usually once per day, we’ll call gs_plugin_refresh() but with the correct cache age set (typically a little over 24 hours), which allows the plugin to download new metadata or payload files from remote servers. The gs_utils_get_file_age() utility helper can help you work out the cache age of a file, or the plugin can handle it some other way.

    For the Flatpak plugin we just make sure the AppStream metadata exists at startup, which allows us to show search results in the UI. If the metadata did not exist (e.g. if the user had added a remote using the command line without gnome-software running) then we would show a loading screen with a progress bar before showing the main UI. On fast connections we should only show that for a couple of seconds, but it’s a good idea to try and avoid that if at all possible in the plugin.

    Once per day the gs_plugin_refresh() method is called again, but this time with GS_PLUGIN_REFRESH_FLAGS_PAYLOAD set. This is where the Flatpak plugin would download any ostree trees (but not do the deploy step) so that the applications can be updated live in the details panel without having to wait for the download to complete. In a similar way, the fwupd plugin downloads the tiny LVFS metadata with GS_PLUGIN_REFRESH_FLAGS_METADATA and then downloads the large firmware files themselves only when the GS_PLUGIN_REFRESH_FLAGS_PAYLOAD flag is set.

    If the @app parameter is set for gs_plugin_download_file() then the progress of the download is automatically proxied to the UI elements associated with the application, for instance the install button would show a progress bar in the various different places in the UI. For a refresh there’s no relevant GsApp to use, so we’ll leave it NULL which means something is happening globally which the UI can handle how it wants, for instance showing a loading page at startup.

    gboolean
    gs_plugin_refresh (GsPlugin *plugin,
                       guint cache_age,
                       GsPluginRefreshFlags flags,
                       GCancellable *cancellable,
                       GError **error)
    {
      const gchar *metadata_fn = "/var/cache/example/metadata.xml";
      const gchar *metadata_url = "http://www.example.com/new.xml";
    
      /* this is called at startup and once per day */
      if (flags & GS_PLUGIN_REFRESH_FLAGS_METADATA) {
        g_autoptr(GFile) file = g_file_new_for_path (metadata_fn);
    
        /* is the metadata missing or too old */
        if (gs_utils_get_file_age (file) > cache_age) {
          if (!gs_plugin_download_file (plugin,
                                        NULL,
                                        metadata_url,
                                        metadata_fn,
                                        cancellable,
                                        error)) {
            /* it's okay to fail here */
            return FALSE;
          }
          g_debug ("successfully downloaded new metadata");
        }
      }
    
      /* this is called when the session is idle */
      if (flags & GS_PLUGIN_REFRESH_FLAGS_PAYLOAD) {
        // FIXME: download any required updates now
      }
    
      return TRUE;
    }
    

    Note, if the downloading fails it’s okay to return FALSE; the plugin loader continues to run all plugins and just logs an error to the console. We’ll be calling into gs_plugin_refresh() again in only a few hours, so there’s no need to bother the user. For actions like gs_plugin_app_install() we do the same thing, but we also save the error on the GsApp itself so that the UI is free to handle that how it wants, for instance by showing a GtkDialog window.
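
    For illustration, here is a minimal sketch of an install vfunc that passes the GsApp to gs_plugin_download_file() so the progress is proxied to the right UI elements; the payload URL, cache path and state handling are made-up placeholders, not taken from any real plugin:

    gboolean
    gs_plugin_app_install (GsPlugin *plugin,
                           GsApp *app,
                           GCancellable *cancellable,
                           GError **error)
    {
      /* hypothetical payload location; a real plugin would derive this
       * from the application metadata rather than hardcoding it */
      const gchar *payload_url = "http://www.example.com/chiron.payload";
      g_autofree gchar *payload_fn = g_build_filename (g_get_user_cache_dir (),
                                                       "example",
                                                       "chiron.payload",
                                                       NULL);
    
      /* passing @app (rather than NULL) means the download progress is
       * proxied to whatever UI elements are showing this application */
      if (!gs_plugin_download_file (plugin, app, payload_url, payload_fn,
                                    cancellable, error))
        return FALSE;
    
      /* ...unpack and deploy the payload here... */
      gs_app_set_state (app, AS_APP_STATE_INSTALLED);
      return TRUE;
    }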

    New Rapid Photo Downloader


    Damon Lynch brings us a new release!

    Community member Damon Lynch happens to make an awesome program called Rapid Photo Downloader in his “spare” time. In fact you may have heard mention of it as part of Riley Brandt’s “The Open Source Photography Course”*. It is a program that specializes in downloading photos and videos from media as efficiently as possible while extending the process with extra functionality.

    * Riley donates a portion of the proceeds from his course to various projects, and Rapid Photo Downloader is one of them!

    Work Smart, not Dumb

    The main features of Rapid Photo Downloader are listed on the website:

    1. Generates meaningful, user configurable file and folder names
    2. Downloads photos and videos from multiple devices simultaneously
    3. Backs up photos and videos as they are downloaded
    4. Is carefully optimized to download and back up at high speed
    5. Easy to configure and use
    6. Runs under Unity, Gnome, KDE and other Linux desktops
    7. Available in thirty languages
    8. Program configuration and use is fully documented

    Damon announced his 0.9.0a1 release on the forums, and Riley Brandt even recorded a short overview of the new features:

    (Shortly after announcing the 0.9.0a1 release, he followed it up with a 0.9.0a2 release with some bug fixes).

    Some of the neat new features include being able to preview the download subfolder and storage space of devices before you download:

    Rapid Photo Downloader Main Window

    Also being able to download from multiple devices in parallel, including from all cameras supported by gphoto2:

    Rapid Photo Downloader Downloading

    There is much, much more in this release. Damon goes into much further detail on his post in the forum, copied here:


    How about its Timeline, which groups photos and videos based on how much time elapsed between consecutive shots. Use it to identify photos and videos taken at different periods in a single day or over consecutive days.

    You can adjust the time elapsed between consecutive shots that is used to build the Timeline to match your shooting sessions.

    Rapid Photo Downloader timeline

    How about a modern look?

    Rapid Photo Downloader about

    Download instructions: http://damonlynch.net/rapid/download.html

    For those who’ve used the older version, I’m copying and pasting from the ChangeLog, which covers most but not all changes:

    • New features compared to the previous release, version 0.4.11:

      • Every aspect of the user interface has been revised and modernized.

      • Files can be downloaded from all cameras supported by gPhoto2, including smartphones. Unfortunately the previous version could download from only some cameras.

      • Files that have already been downloaded are remembered. You can still select previously downloaded files to download again, but they are unchecked by default, and their thumbnails are dimmed so you can differentiate them from files that are yet to be downloaded.

      • The thumbnails for previously downloaded files can be hidden.

      • Unique to Rapid Photo Downloader is its Timeline, which groups photos and videos based on how much time elapsed between consecutive shots. Use it to identify photos and videos taken at different periods in a single day or over consecutive days. A slider adjusts the time elapsed between consecutive shots that is used to build the Timeline. Time periods can be selected to filter which thumbnails are displayed.

      • Thumbnails are bigger, and different file types are easier to distinguish.

      • Thumbnails can be sorted using a variety of criteria, including by device and file type.

      • Destination folders are previewed before a download starts, showing which subfolders photos and videos will be downloaded to. Newly created folders have their names italicized.

      • The storage space used by photos, videos, and other files on the devices being downloaded from is displayed for each device. The projected storage space on the computer to be used by photos and videos about to be downloaded is also displayed.

      • Downloading is disabled when the projected storage space required is more than the capacity of the download destination.

      • When downloading from more than one device, thumbnails for a particular device are briefly highlighted when the mouse is moved over the device.

      • The order in which thumbnails are generated prioritizes representative samples, based on time, which is useful for those who download very large numbers of files at a time.

      • Thumbnails are generated asynchronously and in parallel, using a load balancer to assign work to processes utilizing up to 4 CPU cores. Thumbnail generation is faster than the 0.4 series of program releases, especially when reading from fast memory cards or SSDs. (Unfortunately generating thumbnails for a smartphone’s photos is painfully slow. Unlike photos produced by cameras, smartphone photos do not contain embedded preview images, which means the entire photo must be downloaded and cached for its thumbnail to be generated. Although Rapid Photo Downloader does this for you, nothing can be done to speed it up).

      • Thumbnails generated when a device is scanned are cached, making thumbnail generation quicker on subsequent scans.

      • Libraw is used to render RAW images from which a preview cannot be extracted, which is the case with Android DNG files, for instance.

      • Freedesktop.org thumbnails for RAW and TIFF photos are generated once they have been downloaded, which means they will have thumbnails in programs like Gnome Files, Nemo, Caja, Thunar, PCManFM and Dolphin. If the path files are being downloaded to contains symbolic links, a thumbnail will be created for the path with and without the links. While generating these thumbnails does slow the download process a little, it’s a worthwhile tradeoff because Linux desktops typically do not generate thumbnails for RAW images, and thumbnails only for small TIFFs.

      • The program can now handle hundreds of thousands of files at a time.

      • Tooltips display information about the file including name, modification time, shot taken time, and file size.

      • Right click on thumbnails to open the file in a file browser or copy the path.

      • When downloading from a camera with dual memory cards, an emblem beneath the thumbnail indicates which memory cards the photo or video is on

      • Audio files that accompany photos on professional cameras like the Canon EOS-1D series of cameras are now also downloaded. XMP files associated with a photo or video on any device are also downloaded.

      • Comprehensive log files are generated that allow easier diagnosis of program problems in bug reports. Messages optionally logged to a terminal window are displayed in color.

      • When running under Ubuntu‘s Unity desktop, a progress bar and count of files available for download is displayed on the program’s launcher.

      • Status bar messages have been significantly revamped.

      • Determining a video’s correct creation date and time has been improved, using a combination of the tools MediaInfo and ExifTool. Getting the right date and time is trickier than it might appear. Depending on the video file and the camera that produced it, neither MediaInfo nor ExifTool always give the correct result. Moreover some cameras always use the UTC time zone when recording the creation date and time in the video’s metadata, whereas other cameras use the time zone the video was created in, while others ignore time zones altogether.

      • The time remaining until a download is complete (which is shown in the status bar) is more stable and more accurate. The algorithm is modelled on that used by Mozilla Firefox.

      • The installer has been totally rewritten to take advantage of Python’s tool pip, which installs Python packages. Rapid Photo Downloader can now be easily installed and uninstalled. On Ubuntu, Debian and Fedora-like Linux distributions, the installation of all dependencies is automated. On other Linux distributions, dependency installation is partially automated.

      • When choosing a Job Code, whether to remember the choice or not can be specified.

    • Removed feature:

      • Rotate Jpeg images - to apply lossless rotation, this feature requires the program jpegtran. Some users reported jpegtran corrupted their jpegs’ metadata – which is bad under any circumstances, but terrible when applied to the only copy of a file. To preserve file integrity under all circumstances, unfortunately the rotate jpeg option must therefore be removed.
    • Under the hood, the code now uses:

      • PyQt 5.4 +

      • gPhoto2 to download from cameras

      • Python 3.4 +

      • ZeroMQ for interprocess communication

      • GExiv2 for photo metadata

      • Exiftool for video metadata

      • Gstreamer for video thumbnail generation

    • Please note if you use a system monitor that displays network activity, don’t be alarmed if it shows increased local network activity while the program is running. The program uses ZeroMQ over TCP/IP for its interprocess messaging. Rapid Photo Downloader’s network traffic is strictly between its own processes, all running solely on your computer.

    • Missing features, which will be implemented in future releases:

      • Components of the user interface that are used to configure file renaming, download subfolder generation, backups, and miscellaneous other program preferences. While they can be configured by manually editing the program’s configuration file, that’s far from easy and is error prone. Meanwhile, some options can be configured using the command line.

      • There are no full size photo and video previews.

      • There is no error log window.

      • Some main menu items do nothing.

      • Files can only be copied, not moved.


    Of course, Damon doesn’t sit still. He quickly followed up the 0.9.0a1 announcement by announcing 0.9.0a2 which included a few bug fixes from the previous release:

    • Added command line option to import preferences from an old program version (0.4.11 or earlier).

    • Implemented auto unmount using GIO (which is used on most Linux desktops) and UDisks2 (all those desktops that don’t use GIO, e.g. KDE).

    • Fixed bug while logging processes being forcefully terminated.

    • Fixed bug where stored sequence number was not being correctly used when renaming files.

    • Fixed bug where download would crash on Python 3.4 systems due to use of the Python 3.5-only math.inf.


    If you’ve been considering optimizing your workflow for photo import and initial sorting now is as good a time as any - particularly with all of the great new features that have been packed into this release! Head on over to the Rapid Photo Downloader website to have a look and see the instructions for getting a copy:

    http://damonlynch.net/rapid/download.html

    Remember, this is Alpha software still (though most of the functionality is all in place). If you do run into any problems, please drop in and let Damon know in the forums!

    May 20, 2016

    External Plugins in GNOME Software (2)

    After quite a lot of positive feedback from my last post I’ll write some more about custom plugins. Next up is returning custom applications into the installed list. The use case here is a proprietary software distribution method that installs custom files into your home directory, but you can use your imagination for how this could be useful.

    The example here is all hardcoded, and a true plugin would have to derive the details about the GsApp, for example reading in an XML file or YAML config file somewhere. So, code:

    #include <gnome-software.h>
    
    void
    gs_plugin_initialize (GsPlugin *plugin)
    {
      gs_plugin_add_rule (plugin, GS_PLUGIN_RULE_RUN_BEFORE, "icons");
    }
    
    gboolean
    gs_plugin_add_installed (GsPlugin *plugin,
                             GsAppList *list,
                             GCancellable *cancellable,
                             GError **error)
    {
      g_autofree gchar *fn = NULL;
      g_autoptr(GsApp) app = NULL;
      g_autoptr(AsIcon) icon = NULL;
    
      /* check if the app exists */
      fn = g_build_filename (g_get_home_dir (), "chiron", NULL);
      if (!g_file_test (fn, G_FILE_TEST_EXISTS))
        return TRUE;
    
      /* the trigger exists, so create a fake app */
      app = gs_app_new ("example:chiron.desktop");
      gs_app_set_management_plugin (app, "example");
      gs_app_set_kind (app, AS_APP_KIND_DESKTOP);
      gs_app_set_state (app, AS_APP_STATE_INSTALLED);
      gs_app_set_name (app, GS_APP_QUALITY_NORMAL,
                       "Chiron");
      gs_app_set_summary (app, GS_APP_QUALITY_NORMAL,
                          "A teaching application");
      gs_app_set_description (app, GS_APP_QUALITY_NORMAL,
            "Chiron is the name of an application.\n\n"
            "It can be used to demo some of our features");
    
      /* these are all optional */
      gs_app_set_version (app, "1.2.3");
      gs_app_set_size_installed (app, 2 * 1024 * 1024);
      gs_app_set_size_download (app, 3 * 1024 * 1024);
      gs_app_set_origin_ui (app, "The example plugin");
      gs_app_add_category (app, "Game");
      gs_app_add_category (app, "ActionGame");
      gs_app_add_kudo (app, GS_APP_KUDO_INSTALLS_USER_DOCS);
      gs_app_set_license (app, GS_APP_QUALITY_NORMAL,
                          "GPL-2.0+ and LGPL-2.1+");
    
      /* create a stock icon (loaded by the 'icons' plugin) */
      icon = as_icon_new ();
      as_icon_set_kind (icon, AS_ICON_KIND_STOCK);
      as_icon_set_name (icon, "input-gaming");
      gs_app_set_icon (app, icon);
    
      /* return new app */
      gs_app_list_add (list, app);
    
      return TRUE;
    }
    

    This shows a lot of the plugin architecture in action. Some notable points:

    • The application ID (example:chiron.desktop) has a prefix of example, which means we can co-exist with any package or flatpak version of the Chiron application; not setting the prefix would confuse the UI if more than one chiron.desktop got added.
    • Setting the management plugin means we can check for this string when working out if we can handle the install or remove action.
    • Most applications want a kind of AS_APP_KIND_DESKTOP to be visible as an application.
    • The origin is where the application originated from — usually this will be something like Fedora Updates.
    • The GS_APP_KUDO_INSTALLS_USER_DOCS means we get the blue “Documentation” award in the details page; there are many kudos to award to deserving apps.
    • Setting the license means we don’t get the non-free warning — removing the 3rd party warning can be done using AS_APP_QUIRK_PROVENANCE.
    • The icons plugin will take the stock icon and convert it to a pixbuf of the correct size.

    To show this fake application just compile and install the plugin, touch ~/chiron and then restart gnome-software.

    Screenshot from 2016-05-20 21-22-38

    By filling in the optional details (which can also be filled in using gs_plugin_refine_app(), to be covered in a future blog post) you can also make the details page a much more exciting place. Adding a set of screenshots is left as an exercise to the reader.

    Screenshot from 2016-05-20 21-22-46

    For anyone interested, I’m also slowly writing up these blog posts into proper docbook and uploading them with the gtk-doc files here. I think this documentation would have been really useful for the Endless and Ubuntu people a few weeks ago, so if anyone sees any typos or missing details please let me know.

    May 19, 2016

    External plugins in GNOME Software

    I’ve just pushed a set of patches to gnome-software master that allow people to compile out-of-tree gnome-software plugins.

    In general, building things out-of-tree isn’t something that I think is a very good idea; the API and ABI inside gnome-software is still changing and there’s a huge benefit to getting plugins upstream where they can undergo review and be ported as the API adapts. I’m also super keen to provide configurability in GSettings for doing obviously-useful things, the sort of thing Fleet Commander can set for groups of users. However, now we’re shipping gnome-software in enterprise-class distros we might want to allow customers to ship their own plugins to make various business-specific changes that don’t make sense upstream. This might involve querying a custom LDAP server and changing the suggested apps to reflect what groups the user is in, or might involve showing a whole new class of applications that does not conform to the Linux-specific “application is a desktop-file” paradigm. This is where a plugin makes sense, and something I’d like to support in future updates to RHEL 7.

    At this point it probably makes sense to talk a bit about how the architecture of gnome-software works. At its heart it’s just a big plugin loader that has some GTK UI that gets created for various result types. The idea is we have lots of small plugins that each do one thing and then pass the result onto the other plugins. These are ordered by dependencies against each other at runtime and each one can do things like editing an existing application or adding a new application to the result set. This is how we can add support for things like firmware updating, Steam, GNOME Shell web-apps and flatpak bundles without making big changes all over the source tree.

    There are broadly 3 types of plugin methods:

    • Actions: Do something on a specific GsApp; install gimp.desktop
    • Refine: Get details about a specific GsApp; is firefox.desktop installed? or get reviews for inkscape.desktop
    • Adopt: Can this plugin handle this GsApp; can fwupd handle com.hughski.ColorHug2.firmware

    You only need to define the vfuncs that the plugin needs, and the name is taken automatically from the suffix of the .so file. So, let’s look at a sample plugin one chunk at a time, taking it nice and slow. First the copyright and licence (it only has to be GPLv2+ if it’s headed upstream):

    /*
     * Copyright (C) 2016 Richard Hughes 
     * Licensed under the GNU General Public License Version 2
     */
    

    Then, the magic header that sucks in everything that’s exported:

    #include <gnome-software.h>
    

    Then we have to define when our plugin is run in reference to other plugins, as we’re such a simple plugin we’re relying on another plugin to run after us to actually make the GsApp “complete”, i.e. adding icons and long descriptions:

    void
    gs_plugin_initialize (GsPlugin *plugin)
    {
      gs_plugin_add_rule (plugin, GS_PLUGIN_RULE_RUN_BEFORE, "appstream");
    }
    

    Then we can start to do something useful. In this example I want to show GIMP as a result (from any provider, e.g. flatpak or a distro package) when the user searches exactly for fotoshop. There is no prefixing or stemming being done for simplicity.

    gboolean
    gs_plugin_add_search (GsPlugin *plugin,
                          gchar **values,
                          GsAppList *list,
                          GCancellable *cancellable,
                          GError **error)
    {
      guint i;
      for (i = 0; values[i] != NULL; i++) {
        if (g_strcmp0 (values[i], "fotoshop") == 0) {
          g_autoptr(GsApp) app = gs_app_new ("gimp.desktop");
          gs_app_add_quirk (app, AS_APP_QUIRK_MATCH_ANY_PREFIX);
          gs_app_list_add (list, app);
        }
      }
      return TRUE;
    }
    

    We can then easily build and install the plugin using:

    gcc -shared -o libgs_plugin_example.so gs-plugin-example.c -fPIC \
     `pkg-config --libs --cflags gnome-software` \
     -DI_KNOW_THE_GNOME_SOFTWARE_API_IS_SUBJECT_TO_CHANGE &&
     sudo cp libgs_plugin_example.so `pkg-config gnome-software --variable=plugindir`
    

    Screenshot from 2016-05-19 10-39-53

    I’m going to be cleaning up the exported API and adding some more developer documentation before I release the next tarball, but if this is useful to you please let me know and I’ll do some more blog posts explaining more how the internal architecture of gnome-software works, and how you can do different things with plugins.

    G’MIC 1.7.1: When the flowers are blooming, image filters abound!

    Disclaimer: This article is a duplicate of this post, originally published on the Pixls.us website, by the same authors.

    Then we shall all burn together by Philipp Haegi.

    A new version 1.7.1 “Spring 2016” of G’MIC (GREYC’s Magic for Image Computing),
    the open-source framework for image processing, has been released recently (26 April 2016). This is a great opportunity to summarize some of the latest advances and features over the last 5 months.

    G’MIC: A brief overview

    G’MIC is an open-source project started in August 2008. It has been developed in the IMAGE team of the GREYC laboratory from the CNRS (one of the major French public research institutes). This team is made up of researchers and teachers specializing in the algorithms and mathematics of image processing. G’MIC is released under the free software licence CeCILL (GPL-compatible) for various platforms (Linux, Mac and Windows). It provides a set of various user interfaces for the manipulation of generic image data, that is, images or image sequences of multispectral data, in 2D or 3D, with high-bit precision (up to 32-bit floats per channel). Of course, it manages “classical” color images as well.

    logo_gmic

    Logo and (new) mascot of the G’MIC project, the open-source framework for image processing.

    Note that the project just got a redesign of its mascot Gmicky, drawn by David Revoy, a French illustrator well-known to free graphics lovers for being responsible for the great libre webcomic Pepper&Carrot. G’MIC is probably best known for its GIMP plug-in, first released in 2009. Today, this popular GIMP extension proposes more than 460 customizable filters and effects to apply on your images.

    gmic_gimp171_s

    Overview of the G’MIC plug-in for GIMP.

    But G’MIC is not a plug-in for GIMP only. It also offers a command-line interface, which can be used in addition to the CLI tools from ImageMagick or GraphicsMagick (this is undoubtedly the most powerful and flexible interface of the framework). G’MIC also has a web service, G’MIC Online, to apply effects on your images directly from a web browser. Other G’MIC-based interfaces also exist (ZArt, a plug-in for Krita, filters for Photoflow…). All these interfaces are based on the generic C++ libraries CImg and libgmic which are portable, thread-safe and multi-threaded (through the use of OpenMP). Today, G’MIC has more than 900 functions to process images, all fully configurable, for a library of only approximately 150 kloc of source code. Its features cover a wide spectrum of the image processing field, with algorithms for geometric and color manipulations, image filtering (denoising/sharpening with spectral, variational or patch-based approaches…), motion estimation and registration, drawing of graphic primitives (up to 3D vector objects), edge detection, object segmentation, artistic rendering, etc. This is a versatile tool, useful to visualize and explore complex image data, as well as to elaborate custom image processing pipelines (see these slides to get more information about the motivations and goals of the G’MIC project).

    A selection of some new filters and effects

    Here we look at the descriptions of some of the most significant filters recently added. We illustrate their usage from the G’MIC plug-in for GIMP. All of these filters are of course available from other interfaces as well (in particular within the CLI tool gmic).

    Painterly rendering of photographs

    The filter Artistic / Brushify tries to transform an image into a painting. Here, the idea is to simulate the process of painting with brushes on a white canvas. One provides a template image and the algorithm first analyzes the image geometry (local contrasts and orientations of the contours), then attempts to reproduce the image with a single brush that will be locally rotated and scaled according to the contour geometry. By simulating enough brushstrokes, one gets a “painted” version of the template image, which is more or less close to the original one, depending on the brush shape, its size, the number of allowed orientations, etc. All these settings are customizable by the user as parameters of the algorithm, so this filter can render a wide variety of painting effects.

    gmic_brushify

    Overview of the filter “Brushify” in the G’MIC plug-in for GIMP. The brush that will be used by the algorithm is visible on the top left.

    The animation below illustrates the diversity of results one can get with this filter, applied to the same input picture of a lion. Various brush shapes and geometries have been supplied to the algorithm. Brushify is computationally expensive, so its implementation is parallelized (each core draws several brushstrokes simultaneously).

    brushify2

    A few examples of renderings obtained with “Brushify” from the same template image, but with different brushes and parameters.

    Note that it’s particularly fun to invoke this filter from the command line interface (using the option -brushify available in gmic) to process a sequence of video frames (see this example of a “brushified” video).

    Reconstructing missing data from sparse samples

    G’MIC gets a new algorithm to reconstruct missing data in images. This is a classical problem in image processing, often named “Image Inpainting”, and G’MIC already had a lot of useful filters to solve this problem. Here, the newly added interpolation method assumes only a sparse set of image data is known, for instance a few scattered pixels over the image (instead of continuous chunks of image data). The analysis and the reconstruction of the global image geometry are then particularly tough.

    The new option -solidify in G’MIC allows the reconstruction of dense image data from such a sparse sampling, based on a multi-scale diffusion PDE’s-based technique. The figure below illustrates the ability of the algorithm with an example of image reconstruction. We start from an input image of a waterdrop, and we keep only 2.7% of the image data (a very little amount of data!). The algorithm is able to reconstruct a whole image that looks like the input, even if all the small details have not been fully reconstructed (of course!). The more samples we have, the finer details we can recover.

    waterdrop2

    Reconstruction of an image from a sparse sampling.

    As this reconstruction technique is quite generic, several new G’MIC filters take advantage of it:

    • Filter Repair / Solidify applies the algorithm in a direct manner, by reconstructing transparent areas from the interpolation of opaque regions. The animation below shows how this filter can be used to create an artistic blur on the image borders.
    gmic_sol

    Overview of the “Solidify” filter, in the G’MIC plug-in for GIMP.

    From an artistic point of view, there are many possibilities offered by this filter. For instance, it becomes really easy to generate color gradients with complex shapes, as shown with the two examples below (also in this video that details the whole process).

    gmic_solidify2

    Using the “Solidify” filter of G’MIC to easily create color gradients with complex shapes (input images on the left, filter results on the right).

    • Filter Artistic / Smooth abstract uses the same idea as the one with the waterdrop image: it purposely sub-samples the image in a sparse way, by choosing keypoints mainly on the image edges, then uses the reconstruction algorithm to get the image back. With a low number of samples, the filter can only render a piecewise smooth image, i.e. a smooth abstraction of the input image.
    smooth_abstract

    Overview of the “Smooth abstract” filter in the G’MIC plug-in for GIMP.

    • Filter Rendering / Gradient [random] is able to synthesize random colored backgrounds. Here again, the filter initializes a set of color keypoints randomly chosen over the image, then interpolates them with the new reconstruction algorithm. We end up with a psychedelic background composed of randomly oriented color gradients.
    gradient_random

    Overview of the “Gradient [random]” filter in the G’MIC plug-in for GIMP.

    • Simulation of analog films: the new reconstruction algorithm also allowed a major improvement for all the analog film emulation filters that have been present in G’MIC for years. The section Film emulation/ proposes a wide variety of filters for this purpose. Their goal is to apply color transformations to simulate the look of a picture shot by an analog camera with a certain kind of film. Below, you can see for instance a few of the 300 colorimetric transformations that are available in G’MIC.
    gmic_clut1

    A few of the 300+ color transformations available in G’MIC.

    From an algorithmic point of view, such a color mapping is extremely simple to implement: for each of the 300+ presets, G’MIC actually has a HaldCLUT, that is, a function defining for each possible color (R,G,B) of the original image a new color (R’,G’,B’) to set instead. As this function is not necessarily analytic, a HaldCLUT is stored in a discrete manner as a lookup table that gives the result of the mapping for all possible colors of the RGB cube (that is 2^24 = 16777216 values if we work with 8-bit precision per color component). This HaldCLUT-based color mapping is illustrated below for all values of the RGB color cube.

    gmic_clut0

    Principle of an HaldCLUT-based colorimetric transformation.

    This is a large amount of data: even by subsampling the RGB space (e.g. with 6 bits per component) and compressing the corresponding HaldCLUT file, you end up with approximately 200 to 300 kB for each mapping file. Multiply this number by 300+ (the number of available mappings in G’MIC), and you get a total of 85MB of data to store all these color transformations. Definitely not convenient to spread and package!
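
    As a rough sanity check on those numbers: a HaldCLUT subsampled to 6 bits per component stores (2^6)^3 = 262144 RGB triplets, i.e. about 262144 x 3 bytes ≈ 768 kB before compression, which squares with the 200-300 kB per compressed file quoted above; multiplying by the 300+ presets then indeed gives on the order of 300 x 280 kB ≈ 85 MB in total.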

    The idea was then to develop a new lossy compression technique focused on HaldCLUT files, that is, volumetric discretised vector-valued functions which are piecewise smooth by nature. And that is what has been done in G’MIC, thanks to the new sparse reconstruction algorithm. Indeed, the reconstruction technique also works with 3D image data (such as a HaldCLUT!), so one simply has to extract a sufficient number of significant keypoints in the RGB cube and interpolate them afterwards to allow the reconstruction of a whole HaldCLUT (taking care to have a reconstruction error small enough to be sure that the color mapping we get with the compressed HaldCLUT is indistinguishable from the non-compressed one).

    gmic_clut2

    How the decompression of an HaldCLUT now works in G’MIC, from a set of colored keypoints located in the RGB cube.

    Thus, G’MIC no longer needs to store all the color data of a HaldCLUT, but only a sparse sampling of it (i.e. a sequence of { rgb_keypoint, new_rgb_color } pairs). Depending on the geometric complexity of the HaldCLUT to encode, more or fewer keypoints are necessary (roughly from 30 to 2000). As a result, storing the 300+ HaldCLUTs in G’MIC now requires only about 850 KiB of data (instead of 85 MiB), a compression gain of 99%! That makes the whole HaldCLUT data storable in a single file that is easy to ship with the G’MIC package. A user can now apply all the G’MIC color transformations while being offline (previously, each HaldCLUT had to be downloaded separately from the G’MIC server when requested).
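
    For the curious, here is a rough Python/SciPy sketch of the general idea of rebuilding dense data from scattered color samples. It uses a simple Delaunay-based linear interpolation in 2D, not the multi-scale diffusion-PDE technique that G’MIC’s -solidify command actually implements, so treat it only as an illustration:

      import numpy as np
      from scipy.interpolate import griddata

      def reconstruct_from_samples(shape, points, colors):
          """Fill an (H, W, 3) image from K scattered samples.

          points : (K, 2) array of (row, col) positions
          colors : (K, 3) array of RGB values at those positions
          """
          h, w = shape
          rows, cols = np.mgrid[0:h, 0:w]
          out = np.empty((h, w, 3))
          for c in range(3):
              lin = griddata(points, colors[:, c], (rows, cols), method="linear")
              # Linear interpolation leaves NaNs outside the convex hull of the
              # samples; fall back to nearest-neighbour values there.
              near = griddata(points, colors[:, c], (rows, cols), method="nearest")
              out[..., c] = np.where(np.isnan(lin), near, lin)
          return out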

    This new reconstruction algorithm from sparse samples looks really promising, and no doubt it will be used in other filters in the future.

    Make textures tileable

    Filter Arrays & tiles / Make seamless [patch-based] tries to transform an input texture to make it tileable, so that it can be duplicated as tiles along the horizontal and vertical axes without visible seams on the borders of adjacent tiles. Note that this can be extremely hard to achieve if the input texture has little self-similarity or strong spatial changes of luminosity. That is the case, for instance, with the “Salmon” texture shown below as four adjacent tiles (2×2 configuration), with a lighting that goes from dark (on the left) to bright (on the right). Here, the algorithm modifies the texture so that the tiling shows no seams, while preserving the aspect of the original texture as much as possible (only the texture borders are modified).

    seamless1

    Overview of the “Make Seamless” filter in the G’MIC plug-in for GIMP.

    We can imagine some great uses of this filter, for instance in video games, where texture tiling is common to render large virtual worlds.

    seamless2

    Result of the “Make seamless” filter of G’MIC to make a texture tileable.
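
    As a side note, a very crude way to get a tileable image is the classic “offset and blend” trick sketched below in Python/NumPy. It is nowhere near G’MIC’s patch-based algorithm, but it illustrates what “seamless” means here: the output borders line up when the image is repeated.

      import numpy as np

      def make_seamless_naive(img):
          """Blend a float RGB image (H, W, 3) with a half-shifted copy of itself."""
          h, w = img.shape[:2]
          shifted = np.roll(img, (h // 2, w // 2), axis=(0, 1))
          y = np.arange(h)[:, None]
          x = np.arange(w)[None, :]
          # Weight ~1 at the center, ~0 at the borders, so the borders come from
          # the shifted copy, whose edges are continuous across tile boundaries.
          wy = np.minimum(y, h - 1 - y) / (h / 2)
          wx = np.minimum(x, w - 1 - x) / (w / 2)
          weight = np.clip(wy * wx, 0.0, 1.0)[..., None]
          return weight * img + (1.0 - weight) * shifted

    Tiling the result, e.g. with np.tile(out, (2, 2, 1)), should then show no visible seams (at the cost of ghosting that the patch-based filter avoids).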

    Image decomposition into several levels of details

    A “new” filter Details / Split details [wavelets] has been added to decompose an image into several levels of detail. It is based on the so-called “à trous” wavelet decomposition. For those who already know the popular Wavelet Decompose plug-in for GIMP, there won’t be much novelty here, as it is essentially the same kind of decomposition technique. Having it directly in G’MIC is still great news: it now offers a preview of the different scales that will be computed, and the implementation is parallelized to take advantage of multiple cores.

    gmic_wavelets

    Overview of the wavelet-based image decomposition filter, in the G’MIC plug-in for GIMP.

    The filter outputs several layers, such that each layer contains the details of the image at a given scale, and all the layers blended together give the original image back. One can thus work on those output layers separately and modify the image details only at a given scale. There are a lot of applications for this kind of image decomposition, one of the most spectacular being the ability to retouch skin in portraits: skin flaws are indeed often present in the layers with middle-sized scales, while the natural skin texture (the pores) lives in the fine details. By selectively removing the flaws while keeping the pores, the skin keeps a natural aspect after the retouch (see this wonderful link for a detailed tutorial about skin retouching techniques with GIMP).

    skin

    Using the wavelet decomposition filter in G’MIC for removing visible skin flaws on a portrait.
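
    The decomposition itself is easy to approximate. Below is a small Python/SciPy sketch that splits a grayscale image into detail layers plus a smooth residual using Gaussian blurs of doubling radius; the real “à trous” scheme uses a B3-spline kernel with inserted holes, but the additive structure is the same:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def split_details(img, scales=4):
          """Return `scales` detail layers plus a residual; their sum is the input."""
          layers = []
          current = img.astype(float)
          for i in range(scales):
              blurred = gaussian_filter(current, sigma=2 ** i)
              layers.append(current - blurred)   # details at this scale
              current = blurred
          layers.append(current)                  # low-frequency residual
          return layers

      # Sanity check: summing all layers reconstructs the original image.
      # assert np.allclose(sum(split_details(img)), img)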

    Image denoising based on “Patch-PCA”

    G’MIC is also well known for offering a wide range of algorithms for image denoising and smoothing (currently more than a dozen). And it just got one more! Filter Repair / Smooth [patch-pca] provides a new image denoising algorithm that is both effective and computationally intensive (despite its multi-threaded implementation, you should probably avoid it on a machine with fewer than 8 cores…). In return, it sometimes does magic at suppressing noise while preserving small details.

    patchpca

    Result of the new patch-based denoising algorithm added to G’MIC.
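
    To give a flavour of what “patch-PCA” means, here is a deliberately simplified Python/NumPy sketch: it gathers all overlapping patches of a small grayscale image, runs a single global PCA (via SVD), keeps only the leading components, and averages the reconstructed patches. G’MIC’s filter is far more elaborate (it works on local groups of similar patches), so treat this only as an illustration of the idea:

      import numpy as np

      def denoise_patch_pca(img, patch=8, keep=0.95):
          """Very rough global patch-PCA denoising for a small grayscale image."""
          h, w = img.shape
          patches = np.array([img[y:y + patch, x:x + patch].ravel()
                              for y in range(h - patch + 1)
                              for x in range(w - patch + 1)], dtype=float)
          mean = patches.mean(axis=0)
          u, s, vt = np.linalg.svd(patches - mean, full_matrices=False)
          # Keep only the leading components carrying `keep` of the total energy.
          k = int(np.searchsorted(np.cumsum(s ** 2) / np.sum(s ** 2), keep)) + 1
          denoised = (u[:, :k] * s[:k]) @ vt[:k] + mean
          # Paste the patches back, averaging the overlapping areas.
          out = np.zeros_like(img, dtype=float)
          count = np.zeros_like(img, dtype=float)
          i = 0
          for y in range(h - patch + 1):
              for x in range(w - patch + 1):
                  out[y:y + patch, x:x + patch] += denoised[i].reshape(patch, patch)
                  count[y:y + patch, x:x + patch] += 1
                  i += 1
          return out / count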

    The “Droste” effect

    The Droste effect (also known as “mise en abyme” in art) is the effect of a picture appearing within itself recursively. To achieve this, a new filter Deformations / Continuous droste has been added to G’MIC. It is actually a complete rewrite of the popular Droste filter from Mathmap, which has existed for years. Mathmap was a very popular plug-in for GIMP, but it does not seem to be maintained anymore, and the Droste effect was one of its most iconic and complex filters. Martin “Souphead”, a former Mathmap user, then took the bull by the horns and converted the complex code of this filter into a G’MIC script, resulting in a parallelized implementation of the filter.

    droste0

    Overview of the converted “Droste” filter, in the G’MIC plug-in for GIMP.

    This filter allows for all sorts of artistic delusions. For instance, it becomes trivial to create the result below in a few steps: make a selection around the clock, move it onto a transparent background, run the Droste filter, et voilà!

    droste2

    A simple example of what the G’MIC “Droste” filter can do.
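
    The naive, discrete version of the effect is easy to script yourself. The sketch below (Python with Pillow) just pastes ever-smaller copies of an image into a rectangle inside itself; it is not the continuous log-spiral mapping the G’MIC filter implements, but it conveys the recursive idea. The file name and box coordinates in the example are of course only placeholders.

      from PIL import Image

      def naive_droste(img, box, levels=6):
          """Paste shrinking copies of `img` into box = (x0, y0, x1, y1)."""
          out = img.copy()
          x0, y0, x1, y1 = box
          for _ in range(levels):
              # Each pass shrinks the current result (which already contains the
              # previous copies) and pastes it back into the target rectangle.
              small = out.resize((x1 - x0, y1 - y0))
              out.paste(small, (x0, y0))
          return out

      # Example: naive_droste(Image.open("clock.png"), box=(120, 80, 360, 240))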

    Equirectangular to nadir-zenith transformation

    The filter Deformations / Equirectangular to nadir-zenith is another filter converted from Mathmap to G’MIC. It is specifically used for the processing of panoramas: it reconstructs both the Zenith and the Nadir regions of a panorama so that they can be easily modified (for instance to reconstruct missing parts), before being reprojected back into the input panorama.

    zenith1

    Overview of the “Deformations / Equirectangular to nadir-zenith” filter in the G’MIC plug-in for GIMP.

    Morgan Hardwood has written quite a detailed tutorial on pixls.us about the reconstruction of missing parts in the Zenith/Nadir of an equirectangular panorama. Check it out!

    Other various improvements

    Finally, here are other highlights about the G’MIC project:

    • Filter Rendering / Kitaoka Spin Illusion is another Mathmap filter converted to G’MIC by Martin “Souphead”. It generates a certain kind of optical illusion, as shown below (close your eyes if you are epileptic!).
    spin2

    Result of the “Kitaoka Spin Illusion” filter.

    • Filter Colors / Color blindness transforms the colors of an image to simulate different types of color blindness. This can be very helpful for checking the accessibility of a web site or a graphical document for colorblind people. The color transformations used here are the same as those defined on Coblis, a website that lets you apply this kind of simulation online. The G’MIC filter gives strictly identical results, but it eases the batch processing of several images at once (a small sketch of this kind of matrix-based simulation follows the figure below).
    gmic_cb

    Overview of the colorblindness simulation filter, in the G’MIC plug-in for GIMP.
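
    As an illustration, such a simulation boils down to multiplying each RGB pixel by a 3×3 matrix. The Python/NumPy sketch below uses one commonly quoted deuteranopia matrix; the exact coefficients used by Coblis and G’MIC may differ, so take the values as an assumption rather than the filter’s actual numbers.

      import numpy as np

      # Approximate deuteranopia simulation matrix (applied to RGB in [0, 1]).
      DEUTERANOPIA = np.array([[0.625, 0.375, 0.0],
                               [0.700, 0.300, 0.0],
                               [0.000, 0.300, 0.7]])

      def simulate_deuteranopia(img):
          """img: float RGB image of shape (H, W, 3), values in [0, 1]."""
          return np.clip(img @ DEUTERANOPIA.T, 0.0, 1.0)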

    • For a few years now, G’MIC has had its own mathematical expression parser, a really convenient module for performing complex calculations when applying image filters. This core feature gains new functionality: the ability to manage variables that can be complex, vector or matrix-valued, as well as the creation of user-defined mathematical functions. For instance, the classical rendering of the Mandelbrot fractal set (done by estimating the divergence of a sequence of complex numbers) can be implemented like this, directly from the command line:
      $ gmic 512,512,1,1,"c = 2.4*[x/w,y/h] - [1.8,1.2]; z = [0,0]; for (iter = 0, cabs(z)<=2 && ++iter<256, z = z**z + c); 6*iter" -map 7,2
    gmic_mand

    Using the G’MIC math evaluator to implement the rendering of the Mandelbrot set, directly from the command line!

    This clearly broadens the math evaluator’s abilities, as you are not limited to scalar variables anymore. You can now create complex filters which are able to solve linear systems or compute eigenvalues/eigenvectors, and this for each pixel of an input image. It’s a bit like having a micro-(micro!)-Octave inside G’MIC. Note that the Brushify filter described earlier uses these new features extensively. It’s also interesting to know that the G’MIC math expression evaluator has its own JIT compiler to achieve fast evaluation of expressions applied to thousands of image values simultaneously.

    • Another great contribution has been proposed by Tobias Fleischer, with the creation of a new C API to invoke the functions of the libgmic library (the library containing all the G’MIC features, previously available through a C++ API only). As the C ABI is standardized (unlike C++), this basically means G’MIC can be interfaced more easily with languages other than C++. In the future, we can imagine the development of G’MIC APIs for languages such as Python, for instance. Tobias is currently using this new C API to develop G’MIC-based plug-ins compatible with the OpenFX standard. Those plug-ins should be usable indifferently in video editing software such as After Effects, Sony Vegas Pro or Natron. This is still an ongoing work, though.
    gmic_natron

    Overview of some G’MIC-based OpenFX plug-ins, running under Natron.

    gmic_blender2

    Overview of a dedicated G’MIC script running within the Blender VSE.

    • You can also find G’MIC filters in the open-source nonlinear video editor Flowblade, thanks to the hard work of Janne Liljeblad (Flowblade project leader). Here again, the goal is to allow the application of G’MIC effects and filters directly on image sequences, mainly for artistic purposes (as shown in this video or this one).
    gmic_flowblade

    Overview of a G’MIC filter applied under Flowblade, a nonlinear video editor.

    What’s next ?

    As you can see, the G’MIC project is doing well, with active development and cool new features added month after month. You can find and use interfaces to G’MIC in more and more open-source software, such as GIMP, Krita, Blender, Photoflow, Flowblade, Veejay, EKD and, in the near future, Natron (at least we hope so!).

    At the same time, we can see more and more external resources available for G’MIC: tutorials, blog articles (here, here, here, …), or demonstration videos (here, here, here, here, …). This shows the project becoming more useful to users of open-source software for graphics and photography.

    The development of version 1.7.2 has already hit the ground running, so stay tuned and visit the official G’MIC forum on pixls.us to get more info about the project’s development and get answers to your questions. Meanwhile, feel the power of free software for image processing!

    May 18, 2016

    Krita 3.0 Release Candidate 1 Released

    We’re getting closer and closer to releasing Krita 3.0, the first version of Krita that includes animation tools, instant preview and which is based on Qt5! Today’s release candidate offers many fixes and improvements over the previous beta releases. The Animation and Instant Preview features were funded by last year’s successful Kickstarter, and right now we’re running our third Kickstarter campaign: this year’s main topics are creating a great text and vector toolset. After one week, we’re already half-way!

    support-krita-2016-3

    The biggest new feature is no doubt support for hand-drawn animation. This summer, Jouni Pentikäinen will continue improving the animation tools, but it’s already a solid toolset. Here’s a video tutorial where Wolthera shows how she created the animated headers for this year’s Kickstarter stretch goals:

    And here is another demonstration by Wolthera showing off the Instant Preview feature, which makes it possible to use big brushes on big canvases. It may take a bit more memory, but it gives a huge speed boost:

    Apart from Instant Preview, Animation and Qt5 support, Krita 3.0 will have a number of Kickstarter stretch goals, like improved layer handling, improved shortcuts, the tangent normal brush, a great colorspace selector, guides, a grids and guides docker, snapping to grids and guides, an improved shortcut palette, a gradient map filter and much, much, much more. And we’ll be sure to fix more issues before we present the final release.

    So check out the review prepared by Nathan Lovato, while we’re preparing the full release announcement:

    Release Candidate 1 Improvements

    Compared to the last beta, we’ve got the following improvements:

    • Shortcuts now also work if the cursor is not hovering over the canvas
    • Translations are more complete
    • The export to PDF dialog shows the page preview
    • The advanced color selector is faster
    • The vector gradient tool performs better
    • Fill layers are saved and loaded correctly
    • Improvements to Instant Preview
    • Fix crashes when setting font properties in the text tool
    • Fix handling the mirror axis handle
    • Use OpenMP in G’MIC on Windows and Linux, which makes most filters much faster
    • Fixes to the file dialog
    • The Spriter export plugin was rewritten
    • Fix a number of crashes
    • Fix the scaling of the toolbox icons
    • Add new icons for the pan and zoom tools
    • Make it possible to enable HiDPI mode by setting the environment variable KRITA_HIDPI to ON.
    • Fix the fade, distance and time sensors in the brush editor
    • Make it possible to open color palettes again
    • Add a shortcut for toggling onion skinning
    • Fix loading of the onion skin button states
    • Add a lock for the brush drawing angle
    • Handle positioning popups and dialogs on multi-monitor setups correctly

    And a load of smaller things!

    Downloads

    Windows Shell Extension package by Alvin Wong. Just install it and Windows Explorer will start showing preview and meta information for Krita files. (Disregard any warnings by virus checkers, because this package is built with the NSIS installer maker, some virus checkers always think it’s infected, it’s not.)

    Windows: Unzip and run the bin/krita.exe executable! These downloads do not interfere with your existing installation. The configuration file location has been moved from %APPDATA%\Local\kritarc to %APPDATA%\Local\krita\kritarc.

    The OSX disk image still has the known issue that if OpenGL is enabled, the brush outline cursor, grids, guides and so on are not visible. We’re working on that, but don’t expect the canvas to have been rewritten before 3.0 is released. Disable OpenGL in the preferences dialog to see the cursor outline, grids, guides and so on.

    The Linux appimage: After downloading, make the appimage executable and run it. No installation is needed. For CentOS 6 and Ubuntu 12.04, a separate appimage is provided with G’MIC built without OpenMP (which makes it much slower).

    As usual, you can use these builds without affecting your 2.9 installation.

    Source: you can find the source here:


    May 13, 2016

    Blutella, a Bluetooth speaker receiver

    Quite some time ago, I was asked for a way to use the AV amplifier (which has a fair bunch of speakers connected to it) in our living-room that didn't require turning on the TV to choose a source.

    I decided to try and solve this problem myself, as an exercise rather than a cost saving measure (there are good-quality Bluetooth receivers available for between 15 and 20€).

    Introducing Blutella



    I found this pot of Nutella in my travels (in Europe, smaller quantities are usually in a jar that looks like a mustard glass, with straight sides) and thought it would be a perfect receptacle for a CHIP, to allow streaming via Bluetooth to the amp. I wanted to make a nice how-to for you, dear reader, but best laid plans...

    First, the materials:
    • a CHIP
    • jar of Nutella, and "Burnt umber" acrylic paint
    • micro-USB to USB-A and jack 3.5mm to RCA cables
    • Some white Sugru, for a nice finish around the cables
    • bit of foam, a Stanley knife, a CD marker

    That's around 10€ in parts (cables always seem to be expensive), not including our salvaged Nutella jar, and the CHIP itself (9$ + shipping).

    You'll start by painting the whole of the jar, on the inside, with the acrylic paint. Allow a couple of days to dry, it'll be quite thick.

    So, the plan that went awry. It turns out that the CHIP, with the cables plugged in, doesn't fit inside this 140g jar of Nutella. I also didn't make the holes in exactly the right place. The CHIP is tiny, but not small enough to rotate inside the jar without hitting the sides, and the groove to screw the cap on also has only one position.

    Anyway, I pierced two holes in the lid for the audio jack and the USB charging cable, stuffed the CHIP inside, and forced the lid on so it clipped on the jar's groove.

    I had nice photos with foam I cut to hold the CHIP in place, but the finish isn't quite up to my standards. I guess that means I can attempt this again with a bigger jar ;)

    The software

    After flashing the CHIP with Debian, I logged in, and launched a script which I put together to avoid either long how-tos, or errors when I tried to reproduce the setup after a firmware update and reset.

    The script for setting things up is in the CHIP-bluetooth-speaker repository. There are a few bugs due to drivers, and lack of integration, but this blog is the wrong place to track them, so check out the issues list.

    Apart from those driver problems, I found the integration between PulseAudio and BlueZ pretty impressive, though I wish there was a way for the speaker, when turned on again, to reconnect to the phone I last streamed from, as Bluetooth speakers and headsets do, removing one step from playing back audio.

    New 3.0 development builds! (With a cool little new feature as well)

    Our kickstarter campaign has been running four days now, and we’re only 2000 euros short of being at 50% funded! Of course it’s Kickstarter, so it’s 100% or nothing, so we’ve still got work to do!

    In the meantime, Dmitry published an article on Geektimes, and one of the comments had a tantalizing suggestion about locking the brush angle. When Wolthera, David Revoy and Raghukamath, resident artists on the Krita chat channel, saw the mockup, they all cried: we want that!

    Since we could implement it without adding new strings, Dmitry took half a day off from bug fixing and added it! And David Revoy let himself be inspired by Cézanne and produced this introduction video:

    And then, among other things, we fixed the application icon on Windows, fixed issues with translations on Windows, fixed issues with the color picker, finished the Spriter scml export plugin, worked around some bugs in Qt that made popups and dialogs show on the wrong monitor, made sure author and title info gets saved to PNG images, fixed display artefacts when using Instant Preview, fixed the direction of the fade, distance and time brush engine sensors, fixed reading the random offset parameter in brush engines, improved custom shortcut handling and fixed some crashes. Oh, and we fixed the Krita Lime repository builds for Ubuntu 16.04, so you can replace the ancient 2.9.7 build Ubuntu provides with a shiny 2.9.11

    Krita 3.0 is getting stabler all the time; a new beta will be released next week, but we feel it’s good enough that we’ve added Bleeding Edge download links to the download page, too! For your convenience, here are the links to the latest builds:

    Windows Shell Extension package by Alvin Wong. Just install it and Windows Explorer will start showing preview and meta information for Krita files. (Disregard any warnings by virus checkers, because this package is built with the NSIS installer maker, some virus checkers always think it’s infected, it’s not.)

    Windows: Unzip and run the bin/krita.exe executable!

    The OSX disk image still has the known issue that if OpenGL is enabled, the brush outline cursor, grids, guides and so on are not visible. We’re working on that, but don’t expect the canvas to have been rewritten before 3.0 is released.

    The Linux appimage: After downloading, make the appimage executable and run it. No installation is needed. For CentOS 6 and Ubuntu 12.04, a separate appimage is provided with G’MIC built without OpenMP (which makes it much slower).

    (As usual, you can use these builds without affecting your 2.9 installation.)

    Let’s finish up with cute Kiki!

    kiki

    May 09, 2016

    Blog backlog, Post 2, xdg-app bundles


    I recently worked on creating an xdg-app bundle for GNOME Videos, aka Totem, so it would be built along with other GNOME applications, every night, and made available via the GNOME xdg-app repositories.

    There's some functionality that's not working yet though:
    • No support for optical discs
    • The MPRIS plugin doesn't work as we're missing dbus-python (I'm not sure that the plugin will survive anyway, it's more suited to audio players, don't worry though, it's not going to be removed until we have made changes to the sound system in GNOME)
    • No libva/VDPAU hardware acceleration (which would require plugins, and possibly device access of some sort)
    However, I created a bundle that extends the freedesktop runtime, that contains gst-libav. We'll need to figure out a way to distribute it in a way that doesn't cause problems for US hosts.

    As we also have a recurring problem in Fedora with rpmfusion being out of date, and I sometimes need a third-party movie player to test things out, I put together an mpv manifest; mpv is the only MPlayer-like player that ships a .desktop file and shows a GUI when launched without any command-line arguments.

    Finally, I put together a RetroArch bundle for research into a future project, which uncovered the lack of joystick/joypad support in the xdg-app sandbox.

    Hopefully, those few manifests will be useful to other application developers wanting to distribute their applications themselves. There are some other bundles being worked on, and that can be used as examples, linked to in the Wiki.

    Let’s make Text and Vectors Awesome: 2016 Kickstarter

    Even while we’re still working on fixing the last bunch of bugs for what promises to become a great 3.0 release, we’re taking the next step! It’s time for the 2016 Krita Kickstarter!
    Last year, our backers funded a big performance improvement in the form of the Instant Preview feature and wickedly cool animation support, right in the core of Krita. And a bunch of stretch goals, some of which are already implemented in 3.0, some of which will come in Krita 3.1.

    This year, we’re focusing on two big topics: the text tool and the vector layers. Plus, there are a lot of exciting stretch goals for you to vote on!

    Krita’s text tool used to be shared with the rest of KOffice, later Calligra. It’s a complete word processor in a box, with bibliography, semantic markup, tables, columns and more! But not much fine typographic control and besides… It has always been a bad fit, it has never worked well!

    Now is the time to join us and make it possible to create an awesome text tool, one that is really suitable to what you need text for in Krita: real typographic and artistic control, support for various languages, for translations, for scripts from all over the world. One integrated text tool that is easy to use, puts you in control and can be extended over the years with new features.

    texteditor-mock

    The second topic is vector graphics. It’s related to the text tool, since both are vector layer features. Currently, our vector graphics are defined in the OpenDocument Graphics format, which is fine for office applications, but not great for artwork. There’s already a start for supporting the SVG standard instead, and now’s the time to finish the job! And once we’re SVG to the core, we can start improving the usability of the vector tools themselves, which also suffer from having been designed to work in a word processor, spreadsheet or presentation application. Now that Krita is no longer part of a suite of office applications, we can really focus on making all the tools suitable for artists! Let’s make working with vector art great!

    FlyingKonqui-animtim

    And of course, there are a bunch of stretch goals, ranging from small ones to a really big stretch goal, Python scripting. Check out the kickstarter page for a full list!

    support-krita-2016-3

    One of the fun parts of backing a Kickstarter project is the rewards. For a Krita Kickstarter, these are mostly small, fun things to remember a great campaign by. But we’re trying to do something special this year! After the Kickstarter is funded, we will commission Krita artists from all over the world to create art for us that we will use in various rewards!

    Interview with Toby Willsmer

    rip-the-gasworks

    Could you tell us something about yourself?

    Sure, I am originally from the UK but now live in New Zealand. At 44 I have been drawing and illustrating for over 20 years but currently only for myself. I have a love of comics and graphic novels which is pretty much the style I have inherited over the years. By day I’m a Front End Developer and by night I like to let my mind run riot and then draw it.

    Do you paint professionally, as a hobby artist, or both?

    At the moment it’s a life long hobby for me although every now and then I’ll take on the odd commission for some one who wants to own one of my style pieces. I have a long term graphic novel that I’ve been working on for a few years now, maybe when that is done that will be the professional turning point?

    What genre(s) do you work in?

    I mostly illustrate in a comic book style, pretty much all my digital paintings are figurative in some sort of way.

    Whose work inspires you most — who are your role models as an artist?

    That’s an easy one for me, it has to be Simon Bisley and Frank Frazetta. Simon Bisley’s work is legendary in the comic/graphic novel world, he really pushes the boundaries and is a complete master of his medium. As for Frank Frazetta’s work, need I say more?

    How and when did you get to try digital painting for the first time?

    The first time I did anything digital art related in a computer would be in 1991 whilst at college. They had a paint program that used the mouse and keyboard, very basic but at that time it was amazing that you could draw pictures in a computer. I used a graphics tablet to try drawing with for the first time in around 2002 but I guess the first time I properly did a complete digital painting using a tablet would have been in 2007. I saw a small 8 inch tablet in my local supermarket (yep, they sold a small range of home office equipment) and bought it to try it out. I’ve never looked back since.

    What makes you choose digital over traditional painting?

    I still love traditional painting and still do it every now and then but with digital, the scope for colours, details, speed and of course good ol’ ctrl Z means you can really go for it. That and it’s a lot less messy! I mean having a room full of canvases and somewhere to actually paint in large scale is great but just not possible these days. Once I discovered digital on a level that meant I could create what was in my head at a speed that I wanted to, then the transition was easy for me.

    How did you find out about Krita?

    I used Painter and Photoshop for Windows for years, although I always felt a little let down by them. Then I changed over to the open source movement (Linux) a couple of years ago. This meant having to find another program to paint with. I went looking in Google and read through forums for an alternative that was dedicated to digital painting with a strong emphasis on keeping it as close to traditional painting as possible. Krita was one that kept popping up and had an enthusiastic following which I really liked.

    What was your first impression?

    Shortly after I installed it I remember thinking ‘this seems to be kinda like painting a traditional picture but on steroids’. It was just so easy to use for the first time and I could see that it would suit my style very quickly.

    What do you love about Krita?

    I guess if it has to be one thing, it’s got to be the brush engines. They are by far the best I have used in painting software. Very adaptable, easy to edit and make new ones, a real joy to mess around with. Oh and the transform tool… Oh and the right click docker… Oh and the…

    What do you think needs improvement in Krita? Is there anything that really annoys you?

    There is always room for improvement and I guess everyone uses Krita in different ways. I only use Krita to paint, so for me I would like to see more improvements in the brush engines to really nail how they work across large brushes, to multiple headed brushes.

    One of the main things that annoys me is the brush lag when they are large but I see that’s up for being fixed for V3. Nothing really bothers me that much whilst using it.

    What sets Krita apart from the other tools that you use?

    You can really get a feel of live painting when you use it. It’s almost like you expect to have paint on your fingers when you are finished.

    If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

    terminator800

    This one. It changes, but at the moment this Terminator graphic novel style cover is my favourite, as I did it as a black and white ink drawing in Krita first, then coloured it a year later.

    What techniques and brushes did you use in it?

    I still use the same techniques as I do when I paint with brushes and tubes of paint but of course the process is much faster. I started with a pencil sketch then inked it into a black and white finished piece, then coloured it later on.
    I used a 2B pencil brush and the Ink 25 brush for the initial black and white, then mostly 3 different wet bristle brushes for almost all of the main part, titles and background, tweaking them a little in the process. Then some splatter brushes in the background to keep it a little messy. I keep to a minimum of layers, only using one each for the background, title and main part, and sometimes just one in total, depending on how it’s going.

    I have a set of about 15 brushes that I have tagged into a set as my defaults, most of them have been tweaked in some sort of way.

    Where can people see more of your work?

    The usual suspects on social sites. I mostly post finished pieces here
    https://www.facebook.com/tobywillsmerillustrator

    and I tend to post more random stuff here: doodles/sketches, WIPs and the token pictures of my dog.
    https://www.instagram.com/tobywillsmer/

    Anything else you’d like to share?

    I hope people enjoyed reading this and will enjoy the illustrations I keep putting out. It will be interesting to see how Krita evolves over the next few years and to see how new people finding it will adapt and use it to create and push the digital art format. I for one am looking forward to the V3 release.

    May 08, 2016

    Setting "Emacs" key theme in gtk3 (and Firefox 46)

    I recently let Firefox upgrade itself to 46.0.1, and suddenly I couldn't type anything any more. The emacs/readline editing bindings, which I use probably thousands of times a day, no longer worked. So every time I typed a Ctrl-H to delete the previous character, or Ctrl-B to move back one character, a sidebar popped up. When I typed Ctrl-W to delete the last word, it closed the tab. Ctrl-U, to erase the contents of the urlbar, opened a new View Source tab, while Ctrl-N, to go to the next line, opened a new window. Argh!

    (I know that people who don't use these bindings are rolling their eyes and wondering "What's the big deal?" But if you're a touch typist, once you've gotten used to being able to edit text without moving your hands from the home position, it's hard to imagine why everyone else seems content with key bindings that require you to move your hands and eyes way over to keys like Backspace or Home/End that aren't even in the same position on every keyboard. I map CapsLock to Ctrl for the same reason, since my hands are too small to hit the PC-positioned Ctrl key without moving my whole hand. Ctrl was to the left of the "A" key on nearly all computer keyboards until IBM's 1986 "101 Enhanced Keyboard", and it made a lot more sense than IBM's redesign since few people use Caps Lock very often.)

    I found a bug filed on the broken bindings, and lots of people commenting online, but it wasn't until I found out that Firefox 46 had switched to GTK3 that I understood what had actually happened. And adding gtk3 to my web searches finally put me on the track to finding the solution, after trying several other supposed fixes that weren't.

    Here's what actually worked: edit ~/.config/gtk-3.0/settings.ini and add, inside the [Settings] section, this line:

    gtk-key-theme-name = Emacs
    

    I think that's all that was needed. But in case that doesn't do it, here's something I had already tried, unsuccessfully, and it's possible that you actually need it in addition to the settings.ini change (I don't know how to undo magic Gnome settings so I can't test it):

    gsettings set org.gnome.desktop.interface gtk-key-theme "Emacs"
    

    May 06, 2016

    darktable 2.0.4 released

    we're proud to announce the fourth bugfix release for the 2.0 series of darktable, 2.0.4!

    the github release is here: https://github.com/darktable-org/darktable/releases/tag/release-2.0.4.

    as always, please don't use the autogenerated tarball provided by github, but only our tar.xz. the checksum is:

    $ sha256sum darktable-2.0.4.tar.xz
    80e448622ff060bca1d64bf6151c27de34dea8fe6b7ddb708e1e3526a5961e62  darktable-2.0.4.tar.xz
    $ sha256sum darktable-2.0.4.dmg 
    1e6306f623c3743fabe88312d34376feae94480eb5a38858f21751da04ac4550  darktable-2.0.4.dmg

    and the changelog as compared to 2.0.3 can be found below.

    New Features

    • Support grayscale input profiles
    • Add a BRG profile for testing purposes

    Bugfixes

    • Fix the GUI with GTK 3.20
    • Fix the color profiles we ship
    • Fix two deflicker (exposure iop, mode = automatic) issues
    • Fix trashing of files on OSX
    • Fix Rights field in Lua

    Base Support

    • Nikon D5
    • Sony ILCA-68

    White Balance Presets

    •  Pentax K-S1
    • Sony ILCA-68

    Noise Profiles

    • Canon PowerShot G15
    • Fujifilm X70
    • Olympus PEN-F
    • Panasonic DMC-GF7

    Translation Added

    • Slovenian

    Translations Updates

    • Catalan
    • Dutch
    • German
    • Hebrew
    • Slovak
    • Spanish

    May 05, 2016

    SVG Working Group Editor’s Meeting Report — London — 2016

    First, let me thank all the people that donated to Inkscape’s SVG Standards Work fund as well as to the Inkscape general fund that made my attendance possible.

    A subset of the SVG working group met in London after the LGM meeting to get down to the nitty gritty of getting the SVG 2 specification ready to move to the “Candidate Recommendation” (CR) stage. Three of the core group members (Nikos, Amelia, and myself) were joined on some of the days by three other group members who do not normally participate in the weekly teleconferences. This was a great chance to get some new eyes looking at the spec.

    Most of the time was spent in reading the specification together. We managed to get through about half the chapters, including the most problematic ones. When we found problems, we either made changes on the fly if possible or filed issues if not. We recently switched to GitHub to keep track of issues, which seems to be working well. You can see outstanding issues at our issue tracker. (Feel completely free to add your own comments!)

    Minutes of the meetings can be found at:

    As this was a meeting focused on getting the spec out the door, our discussions were pretty mundane. Nevertheless, let me give you a flavor of the kinds of things we addressed. It was brought up in the issue tracker that the specification is unclear on how text should be rendered if it follows a <textPath> element. It never occurred to me (and probably to most people) that you could have in an SVG file the following:

    <text x="50" y="150">Before<textPath xlink:href="#path">On Path</textPath>After</text>
    

    For an implementer, it is fairly straightforward to figure out where to position the “Before” (use the ‘x’ and ‘y’ attributes) and the “On Path” (use the path), but where should the “After” be rendered? Firefox won’t render it at all. Chrome will render the “After” starting at the end position of the ‘h’ in “On Path”. After some discussion we decided that the only really logical place to render the “After” was at the end of the path. This is the only point that is well defined (the ‘h’ can move around depending on the font used to render the text).

    Defining a fill area using <div> and floats.

    How the above text element is rendered according to the decision of the group at the London meeting. The starting point for text after the <textPath> element is at the end of the path (red dot).
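    For anyone who wants to poke at this case in their own renderer, a minimal standalone test file might look like the following (the path data, canvas size and font size are arbitrary placeholders):

    <svg xmlns="http://www.w3.org/2000/svg"
         xmlns:xlink="http://www.w3.org/1999/xlink"
         width="400" height="220">
      <path id="path" d="M 60,150 C 120,60 240,60 300,150"
            fill="none" stroke="lightgray"/>
      <text x="50" y="150" font-size="20">Before<textPath xlink:href="#path">On Path</textPath>After</text>
    </svg>
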

    We will have another editor’s meeting in June in Amsterdam where hopefully we’ll finish the editing so we can move the spec to CR. We’ll then need to turn our attention to writing tests. Please consider making a donation to support my travel to this meeting at the link at the start of the post! Thanks.

    Blog backlog, Post 1, Emoji

    Short version


    dnf copr enable hadess/emoji
    dnf update cairo
    dnf install eosrei-emojione-fonts



    Long version

    A little while ago, I was reading this article, called "Emoji: how do you get from U+1F355 to 🍕?", which said, and I reluctantly quote: "[...] and I don’t know what Linux does, but it’s probably black and white and who cares [...]".

    Well. I care. And you probably do as well if your pizza slice above is black and white.

    So I set out to check on the status of Behdad Esfahbod's (or just "Behdad", as we know him) patches to add colour font support to cairo, which he presented at GUADEC in Gothenburg. They add support for the "bitmap in font" approach, as Android does, and as FreeType supports.

    It kind of worked, and Matthias Clasen reworked the patches a few times, completing the support. This is probably not the exact code that will eventually land in cairo, but it's a good enough base for people interested in contributing to build on.

    After that, we needed something to display using that feature. We ended up using the same font recommended in this article, the Emoji One font.


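    If you want a quick way to see whether colour glyphs are actually being picked up on your system, pango-view (which renders through cairo by default) makes a rough test once the patched cairo and the font are installed. The font family name below is a guess; check fc-list for what your package actually provides:

    $ pango-view --font="Emoji One 32" --text="🍕 U+1F355"
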
    There's still plenty to be done to support emojis, even after the cairo support is merged. We'd need a way to input emojis (maybe Lalo Martins is listening), and support in a lot of toolkits other than GNOME (Firefox only supports the SVG-in-OTF format; WebKit, Chrome, and LibreOffice don't seem to know about colour fonts either).

    You can find more information about design interests in GNOME around Emoji on the Wiki.

    Update: Behdad's presentation was in Gothenburg, not Strasbourg. You can also see the video on YouTube.