July 30, 2014

A logo & icon for DevAssistant

logo-vert

This is a simple story about a logo design process for an open source project in case it might be informative or entertaining to you. :)

A little over a month ago, Tomas Redej contacted me to request a logo for DevAssistant. DevAssistant is a UI aimed at making developers’ lives easier by automating a lot of the menial tasks required to start up a software project – setting up the environment, starting services, installing dependencies, etc. His team was gearing up for a new release and really wanted a logo to help publicize the release. They came to me for help as colleagues familiar with some of the logo work I’ve done.

jumbotron-bg

When I first received Tomas’ request, I reviewed DevAssistant’s website and had some questions:

  • Are there any parent or sibling projects to this one that have logos we’d need this to match up with?
  • Is an icon needed that coordinates with the logo as well?
  • There is existing artwork on the website (shown above) – should the logo coordinate with that? Is that design something you’re committed to?
  • Are there any competing projects / products (even on other platforms) that do something similar? (Just as a ‘competitive’ evaluation of their branding.)

He had some answers :) :

  • There aren’t currently any parent or sibling projects with logos, so from that perspective we had a blank slate.
  • They definitely needed an icon, preferably in all the required sizes for the desktop GUI.
  • Tomas impressively had made the pre-existing artwork himself, but considered it a placeholder.
  • The related projects/products he suggested are: Software Collections, JBoss Forge, and Enide.

From the competition I saw a lot of clean lines, sharp angles, blues and greens, some bold splashes here and there. Software Collections has a logotype without a mark; JBoss Forge has a mark with an anvil (a construction tool of sorts); Enide doesn’t have a logo per se but is part of Node.js which has a very stylized logotype where letters are made out of hexagons.

I liked how Tomas’ placeholder artwork used shades of blue, and thought about how the triangles could be shaped to make up the ‘D’ of ‘Dev’ and the ‘A’ of ‘Assistant’ (similarly to how ‘node’ is spelled out with hexagons for each letter in the node.js logotype). I played around a little bit with the notion of ‘d’ and ‘a’ triangles and sketched some ideas out:

devassistant-logo-sketches

I grabbed an icon sheet template from the GNOME design icon repo and drew this out in Inkscape. This, actually, was pretty foolish of me since I hadn’t sent Tomas my sketches at this point and I didn’t even have a solid concept in terms of the mark’s meaning beyond being stylized ‘d’ and ‘a’ – it could have been a waste of time – but thankfully his team liked the design so it didn’t end up being a waste at all. :)

devassistant-logoidea-1

Then I thought a little about meaning here. (Maybe this is backwards. Sometimes I start with meaning / concept, sometimes I start with a visual and try to build meaning into it. I did the latter this time; sue me!) I was thinking about how JBoss Forge used a construction tool in its logo (logo copyright JBoss & Red Hat):

forge

And I thought about how Glade uses a carpenter’s square (another construction tool!) in its icon… hmmm… carpenter’s squares are essentially triangles… ! :) (Glade logo from the GNOME icon theme, LGPLv3+):

Glade_new_logo

I could think of a few other developer-centric tools that used other artifacts of construction – rulers, hard hats, hammers, wrenches, etc. – for their logo/icon design. It seemed to be the right family of metaphor anyway, so I started thinking the ‘D’ and ‘A’ triangles could be carpenter’s squares.

What I started out with didn’t yet have the ruler markings, or the transparency, and was a little hacky in the SVG… but it could have those markings. With Tomas’ go-ahead, I made the triangles into carpenter’s squares and created all of the various sizes needed for the icon:

devassistant-sheet

So we had a set of icons that could work! I exported them out to PNGs and tarred them up for Tomas and went to work on the logo.

Now why didn’t I start with the logo? Well, I decided to start with the icon because the icon had the most constraints on it – there are certain requirements in terms of the sizes a desktop icon has to read at, and I wanted it to fit in with the style of other GNOME icons… so I figured, start where the most constraints are, and it’s easier to adapt what you come up with there in the arena where you have fewer constraints. This might have been a different story if the logo had more constraints – e.g., if there was a family of app brands it had to fit into.

So logos are a bit different from icons in that people like to print them on things in many different sizes, and when you pay for printed objects (especially screen-printed T-shirts) you pay per color, and it can be difficult to do effects like drop shadows and gradients. (Not impossible, but certainly more of a pain. :) ) The approach I took with the logo, then, was to simplify the design and flatten the colors down compared to the icon.

Anyhow, here’s the first set of ideas I sent to Tomas for the logomark & logotype:

logo-comps-1

From my email to him explaining the mockups:

Okay! Attached is a comp of two logo variations. I have it plain and flat in A & B (A is vertical, and B is a horizontal version of the same thing.) C & D are the same except I added a little faint mirror image frame to the blue D and A triangles – I was just playing around and it made me think of scaffolding which might be a nice analogy. The square scaffolding shape the logomark makes could also be used to create a texture/pattern for the website and associated graphics.

The font is an OFL font called Spinnaker – I’ve attached it and the OFL that it came with. The reason I really liked this font in particular compared to some of the others I evaluated is that the ‘A’ is very pointed and sharp like the triangles in the logo mark, and the ratio of space between the overall size of some of the lowercase letters (e.g., ‘a’ and ‘e’) to their enclosed spaces seemed similar to the ratio of the size of the triangles in the logomark and the enclosed space in the center of the logomark. I think it’s also a friendly-looking font – I would think an assistant to somebody would have a friendly personality to them.

Anyway, feel free to be brutal and let me know what you think, and we can go with this or take another direction if you’d prefer.

Tomas’ team unanimously favored the scaffolding versions (C&D), but were hoping the mirror image could be a bit darker for more contrast. So I did some versions with the mirror image at different darknesses:

scaffold-shade

I believe they picked B or C, and… we have a logo.

Overall, this was a very smooth, painless logo design process for a very easy-going and cordial “customer.” :)

July 29, 2014

Prefeitura de Belo Horizonte

This is a project we did for a competition for the new city hall of Belo Horizonte, Brazil (http://portalpbh.pbh.gov.br/pbh/ecp/comunidade.do?app=concursocentroadministrativo; the link shows the winning entries). It didn't win, but we are pretty happy with the project anyway. The full presentation boards are at the bottom of this article, as well as the blender model. Below is...

July 24, 2014

Predicting planetary visibility with PyEphem

Part 1: Basic Planetary Visibility

All through the years I was writing the planet observing column for the San Jose Astronomical Association, I was annoyed at the lack of places to go to find out about upcoming events like conjunctions, when two or more planets are close together in the sky. It's easy to find out about conjunctions in the next month, but not so easy to find sites that will tell you several months in advance, like you need if you're writing for a print publication (even a club newsletter).

For some reason I never thought about trying to calculate it myself. I just assumed it would be hard, and wanted a source that could spoon-feed me the predictions.

The best source I know of is the RASC Observer's Handbook, which I faithfully bought every year and checked each month so I could enter that month's events by hand. Except for January and February, when I didn't have the next year's handbook yet by the time my column went to press and I was on my own. I have to confess, I was happy to get away from that aspect of the column when I moved.

In my new town, I've been helping the local nature center with their website. They had some great pages already, like a What's Blooming Now? page that keeps track of which flowers are blooming now and only shows the current ones. I've been helping them extend it by adding features like showing only flowers of a particular color, separating the data into CSV databases so it's easier to add new flowers or butterflies, and so forth. Eventually we hope to build similar databases of birds, reptiles and amphibians.

And recently someone suggested that their astronomy page could use some help. Indeed it could -- it hadn't been updated in about five years. So we got to work looking for a source of upcoming astronomy events we could use as a data source for the page, and we found sources for a few things, like moon phases and eclipses, but not much.

Someone asked about planetary conjunctions, and remembering how I'd always struggled to find that data, especially in months when I didn't have the RASC handbook yet, I got to wondering about calculating it myself. Obviously it's possible to calculate when a planet will be visible, or whether two planets are close to each other in the sky. And I've done some programming with PyEphem before, and found it fairly easy to use. How hard could it be?

Note: this article covers only the basic problem of predicting when a planet will be visible in the evening. A followup article will discuss the harder problem of conjunctions.

Calculating planet visibility with PyEphem

The first step was figuring out when planets were up. That was straightforward. Make a list of the easily visible planets (remember, this is for a nature center, so people using the page aren't expected to have telescopes):

import ephem

planets = [
    ephem.Moon(),
    ephem.Mercury(),
    ephem.Venus(),
    ephem.Mars(),
    ephem.Jupiter(),
    ephem.Saturn()
    ]

Then we need an observer with the right latitude, longitude and elevation. Elevation is apparently in meters, though they never bother to mention that in the PyEphem documentation:

observer = ephem.Observer()
observer.name = "Los Alamos"
observer.lon = '-106.2978'
observer.lat = '35.8911'
observer.elevation = 2286  # meters, though the docs don't actually say

Then we loop over the date range for which we want predictions. For a given date d, we're going to need to know the time of sunset, because we want to know which planets will still be up after nightfall.

observer.date = d
sun = ephem.Sun()
sunset = observer.previous_setting(sun)

Then we need to loop over planets and figure out which ones are visible. It seems like a reasonable first approach to declare that any planet that's visible after sunset and before midnight is worth mentioning.

Now, PyEphem can tell you directly the rising and setting times of a planet on a given day. But I found it simplified the code if I just checked the planet's altitude at sunset and again at midnight. If either one of them is "high enough", then the planet is visible that night. (Fortunately, here in the mid latitudes we don't have to worry that a planet will rise after sunset and then set again before midnight. If we were closer to the arctic or antarctic circles, that would be a concern in some seasons.)

import math

min_alt = 10. * math.pi / 180.   # minimum altitude: 10 degrees, in radians
for planet in planets:
    observer.date = sunset
    planet.compute(observer)
    if planet.alt > min_alt:
        print planet.name, "is already up at sunset"

Easy enough for sunset. But how do we set the date to midnight on that same night? That turns out to be a bit tricky with PyEphem's date class. Here's what I came up with:

    midnight = list(observer.date.tuple())
    midnight[3:6] = [7, 0, 0]
    observer.date = ephem.date(tuple(midnight))
    planet.compute(observer)
    if planet.alt > min_alt:
        print planet.name, "will rise before midnight"

What's that 7 there? That's Greenwich Mean Time when it's midnight in our time zone. It's hardwired because this is for a web site meant for locals. Obviously, for a more general program, you should get the time zone from the computer and add accordingly, and you should also be smarter about daylight savings time and such. The PyEphem documentation, fortunately, gives you tips on how to deal with time zones. (In practice, though, the rise and set times of planets on a given day don't change much with time zone.)
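
For a more general program, a helper like this could compute that hour instead of hardwiring it. This is just a sketch of mine, not from the original code, and it uses the zoneinfo module from Python 3.9+ rather than anything that existed at the time:

from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+; pytz offered the same thing back then

def utc_hour_of_local_midnight(tzname="America/Denver"):
    """Return the UTC hour corresponding to the coming local midnight.

    Handles daylight savings: for Los Alamos this returns 6 during MDT
    and 7 during MST (the value hardwired above).
    """
    tz = ZoneInfo(tzname)
    midnight = (datetime.now(tz) + timedelta(days=1)).replace(
        hour=0, minute=0, second=0, microsecond=0)
    return midnight.astimezone(ZoneInfo("UTC")).hour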

And now you have your predictions of which planets will be visible on a given date. The rest is just a matter of writing it out into your chosen database format.
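
If the chosen format happens to be CSV, for example, that last step might look like this minimal sketch (the predictions list and its fields are hypothetical, just to show the shape of it):

import csv

# 'predictions' is assumed to be a list of (date, planet, note) tuples
# accumulated in the loop above.
with open("planet-visibility.csv", "w") as fp:
    writer = csv.writer(fp)
    writer.writerow(["date", "planet", "note"])
    for date, planet, note in predictions:
        writer.writerow([date, planet, note])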

In the next article, I'll cover planetary and lunar conjunctions -- which were superficially very simple, but turned out to have some tricks that made the programming harder than I expected.

July 23, 2014

Watch out for DRI3 regressions

DRI3 has plenty of necessary fixes for X.org and Wayland, but it's still young in its integration. It's been integrated in the upcoming Fedora 21, and recently in Arch as well.

If WebKitGTK+ applications hang or become unusably slow when an HTML5 video is supposed to be playing, you might be hitting this bug.

If Totem crashes on startup, it's likely this problem, reported against cogl for now.

Feel free to add a comment if you see other bugs related to DRI3, or have more information about those.

Update: Wayland is already perfect, and doesn't use DRI3. The "DRI2" structures in Mesa are just that, structures. With Wayland, the DRI2 protocol isn't actually used.

Here’s a low-barrier way to help improve FLOSS apps – AppStream metadata: Round 1

UPDATE: This program is full now!

We are so excited that we’ve got the number of volunteers we needed to assign all of the developer-related packages we identified for this round! THANK YOU! Any further applications will be added to a wait list (in case any of the assignees need to drop any of their assigned packages.) Depending on how things go, we may open up another round in a couple of weeks or so, so we’ll keep you posted!

Thanks again!!

– Mo, Ryan, and Hughsie


appstream-logo

Do you love free and open source software? Would you like to help make it better, but don’t have the technical skills to know where you can jump in and help out? Here is a fantastic opportunity!

The Problem

There is a cross-desktop, cross-distro Freedesktop.org project called AppStream. In a nutshell, AppStream is an effort to standardize metadata about free and open source applications. Rather than every distro having its own separately-written description for Inkscape, for example, we’d have a shared, high-quality description of Inkscape that would be available to users of all distros. Why is this kind of data important? It helps free desktop users discover applications that might meet their needs – for example, via searching software center applications (such as GNOME Software and Apper).

Screenshot of GNOME Software showing app metadata in action!

Running this project in a collaborative way is also a great way for us to combine efforts and come up with great quality content for everyone in the FLOSS community.

Contributors from Fedora and other distros have been working together to build the infrastructure to make this project work. But, we don’t yet have even close to full metadata coverage of the thousands of FLOSS applications we ship. Without metadata for all of the applications, users could be missing out on great applications or may opt out of installing an app that would work great for them because they don’t understand what the app does or how it could meet their needs.

The Plan

Ryan Lerch, among other contributors, has been working very hard for many weeks now generating a lot of the needed metadata, but as of today we only have roughly 25% coverage for the desktop packages in Fedora. We’d love to see that number increase significantly for Fedora 21 and beyond, but we need your help to accomplish that!

Ryan, Richard Hughes, and I recently talked about the ongoing effort. Progress is slower than we’d like, and we have fewer contributors than we’d like – but that makes it a great opportunity for new contributors, because of the low barrier to entry and the big impact the work has!

So along that line, we thought of an idea for an ongoing program that we’d like to pilot: basically, we’ll chunk the long list of applications that need metadata into thematic lists – for example, graphics applications, development applications, social media applications, and so on. We’ll break each of those lists into chunks of, say, 10 apps each, and volunteers can pick up a chunk and submit metadata for just those 10 apps.

The specific metadata we are looking for in this pilot is a brief summary of what the application is and a description of what the application does. You do not need to be a coder to help out; you’ll need to be able and willing to research the applications in your chunk, draft an openly-licensed paragraph (we’ll provide specific guidelines), and submit it via a web form on GitHub. That’s all you need to do.

This blog post will kick off our pilot (“round 1”) of this effort, and we’ll be focusing on applications geared towards developers.

Your mission

If you choose to participate in this program, your mission will be to research and write up both brief summaries about and long-form descriptions for each of ~10 free and open source applications.

You might want to check out the upstream sites for each application, see if any distros downstream have descriptions for the app, maybe install and try the app out for yourself, or ask current users of the app about it and its strengths and weaknesses. The final text you submit, however, will need to be original writing created by you.

Specifications

Summary field for application

The summary field is a short, one-line description of what the application enables users to do:

  • It should be around 5 – 12 words long, and a single sentence with no ending punctuation.
  • It should start with action verbs that describe what it allows the user to do, for example, “Create and edit Scalable Vector Graphics images” from the Inkscape summary field.
  • It shouldn’t contain extraneous information such as “Linux,” “open source,” “GNOME,” “gtk,” “kde,” “qt,” etc. It should focus on what the application enables the user to do, and not the technical or implementation details of the app itself.

Examples

Here are some examples of good AppStream summary metadata:

  • “Add or remove software installed on the system” (gpk-application / 8 words)
  • “Create and edit Scalable Vector Graphics images” (Inkscape / 7 words)
  • “Avoid the robots and make them crash into each other” (GNOME Robots / 10 words)
  • “View and manage system resources” (GNOME System Monitor / 5 words)
  • “Organize recipes, create shopping lists, calculate nutritional information, and more.” (Gourmet / 10 words)

Description field for application

The description field is a longer-form description of what the application does and how it works. It can be between one and three short paragraphs, around 75–100 words long.

Examples

Here are some examples of good AppStream description metadata:

  • GNOME System Monitor / 76 words:
    “System Monitor is a process viewer and system monitor with an attractive, easy-to-use interface.

    “System Monitor can help you find out what applications are using the processor or the memory of your computer, can manage the running applications, force stop processes not responding, and change the state or priority of existing processes.

    “The resource graphs feature shows you a quick overview of what is going on with your computer displaying recent network, memory and processor usage.”

  • Gourmet / 94 words:
    “Gourmet Recipe Manager is a recipe-organizer that allows you to collect, search, organize, and browse your recipes. Gourmet can also generate shopping lists and calculate nutritional information.

    “A simple index view allows you to look at all your recipes as a list and quickly search through them by ingredient, title, category, cuisine, rating, or instructions.

    “Individual recipes open in their own windows, just like recipe cards drawn out of a recipe box. From the recipe card view, you can instantly multiply or divide a recipe, and Gourmet will adjust all ingredient amounts for you.”

  • GNOME Robots / 102 words:
    “It is the distant future – the year 2000. Evil robots are trying to kill you. Avoid the robots or face certain death.

    “Fortunately, the robots are extremely stupid and will always move directly towards you. Trick them into colliding into each other, resulting in their destruction, or into the junk piles that result. You can defend yourself by moving the junk piles, or escape to safety with your handy teleportation device.

    “Your supply of safe teleports is limited, and once you run out, teleportation could land you right next to a robot, who will kill you. Survive for as long as possible!”

Content license

These summaries and descriptions are valuable content, and in order to be able to use them, you’ll need to be willing to license them under a license such that the AppStream project and greater free and open source software community can use them.

We are requesting that all submissions be licensed under the Creative Commons’ CC0 license.

What’s in it for you?

Folks who contribute metadata to this effort through this program will be recognized in the upstream appdata credits as official contributors to the project and will also be awarded a special Fedora Badges badge for contributing appdata!

appstream

When this pilot round is complete, we’ll also publish a Fedora Magazine article featuring all of the contributors – including you!

Oh, and of course – you’ll be making it easier for all free and open source software users (not just Fedora!) to find great FLOSS software and make their lives better! :)

Sign me up! How do I get started?

help-1

  1. First, if you don’t have one already, create an account at GitHub.
  2. In order to claim your badge and to interact with our wiki, you’ll need a Fedora account. Create a Fedora account now if you don’t already have one.
  3. Drop an email to appstream at lists dot fedoraproject [.] org with your GitHub username and your Fedora account username so we can register you as a contributor and assign you your applications to write metadata for!
  4. For each application you’ll need to write metadata for, we’ve generated an XML document in the Fedora AppStream GitHub repo. We will link you up to each of these when we give you your assignment.
  5. For each application, research the app via upstream websites, reviews, talking to users, and trying out the app for yourself, then write up the summary and description fields to the specifications given above.
  6. To submit your metadata, log into GitHub and visit the XML file for the given application we gave you in our assignment email. Take a look at this example appstream metadata file for an application called Insight. You’ll notice in the upper right corner there is an ‘Edit’ button – click on this, edit the ‘Summary’ and ‘Description’ fields, edit the copyright statement towards the very top of the file with your information, and then submit them using the form at the bottom.

Once we’ve received all of your submissions, we’ll update the credits file and award you your badge. :)

If you end up committing to a batch of applications and find you don’t have the time to finish, we ask that you let us know so we can assign the apps to someone else. We’re asking that you take two weeks to complete the work – if you need more time, no problem, just let us know. We just want to make sure we can reopen assigned apps for others to join in and help out with.

Let’s do this!

Ready to go? Drop us a line!

GUADEC 2014 Map

Want a custom map for GUADEC 2014?

Here’s a map I made that shows the venue, the suggested hotels, transit ports (airport/train station), vegetarian & veggie-friendly restaurants, and a few sights that look interesting.

I made this with Google Maps Engine, exported it to KML, and also converted it to GeoJSON and GPX.

If you want an offline map on an Android phone, I suggest opening the KML file with Maps.Me (a proprietary OpenStreetMap-based app, but nice) or the GPX in OsmAnd (open source and powerful, but really clunky).

You can also use the Google Maps Engine version with Google Maps Engine on your Android phone, but it doesn’t really support offline mode all that well, so it’s frustratingly unreliable at best. (But it does have pretty icons!)

See you at GUADEC!

July 21, 2014

Development activity is moving to Github

Octicons_octoface_128

In just under a week’s time, on Sunday 27th July 2014, I’ll be moving MyPaint’s old Gitorious git repositories over to the new GitHub ones fully, and closing down the old location. For a while now we’ve been maintaining the codelines in parallel to give people some time to switch over and get used to the new site; it’s time to formally switch over now.

If you haven’t yet changed your remotes over on existing clones, now would be a very good time to do that!

The bug tracker is moving from Gna! to Github’s issues tracker too – albeit rather slowly. This is less a matter of just pushing code to a new place and telling people about the move; rather we have to triage bugs as we go, and the energy and will to do that has been somewhat lacking of late. Bug triage isn’t fun, but it needs to be done.

(Github’s tools are lovely, and we’re already benefiting from having more eyeballs focussed on the projects. libmypaint has started using Travis and Appveyor for CI, the MyPaint application’s docs will benefit tons from being more wiki-like to edit, and the issue tracker is just frankly better documented and nicer for pasting in screencaps and exception dumps)

FreeCAD release 0.14

This is certainly a bit overdue, since the official launch already happened more than two weeks ago, but at last, here it goes: the 0.14 version of FreeCAD has been released! It happened a long, long time after 0.13, about a year and a half, but we've decided not to let that happen again next time,...

July 19, 2014

Stellarium 0.13.0 has been released!

After 9 months of development, the Stellarium development team is proud to announce the release of version 0.13.0 of Stellarium.

This release brings some interesting new features:
- New modulated core.
- Refactored shadows and introduced normal mapping.
- Sporadic meteors; meteors now have colors.
- Comet tail rendering.
- New translatable strings and new textures.
- New plugin: Equation of Time - provides a solution for the Equation of Time.
- New plugin: Field of View - provides shortcuts for quickly changing the field of view.
- New plugin: Navigational Stars - marks the 58 navigational stars in the sky.
- New plugin: Pointer Coordinates - shows the coordinates of the mouse pointer.
- New plugin: Meteor Showers - provides visualization of meteor showers.
- New version of the Satellites plugin: introduces star-like satellites and bug fixes.
- New version of the Exoplanets plugin: displays potentially habitable exoplanets; performance improvements and code refactoring.
- New version of the Angle Measure plugin: displays the position angle.
- New version of the Quasars plugin: performance improvements; added a marker_color parameter.
- New version of the Pulsars plugin: performance improvements; displays pulsars with glitches; configurable marker colors for the different types of pulsars.
- New versions of the Compass Marks, Oculars, Historical Supernovae, Observability Analysis and Bright Novae plugins: bug fixes, code refactoring and improvements.

There have also been a large number of bug fixes and serious performance improvements.

We have updated the configuration file and the Solar System file, so if you have an existing Stellarium installation, we highly recommend resetting the settings when you install the new version (you can choose the required options in the installer).

A huge thanks to our community whose contributions help to make Stellarium better!

July 17, 2014

Time-lapse photography: a simple Arduino-driven camera intervalometer

[Arduino intervalometer] While testing my automated critter camera, I was getting lots of false positives caused by clouds gathering and growing and then evaporating away. False positives are annoying, but I discovered that it's fun watching the clouds grow and change in all those photos ... which got me thinking about time-lapse photography.

First, a disclaimer: it's easy and cheap to just buy an intervalometer. Search for timer remote control or intervalometer and you'll find plenty of options for around $20-30. In fact, I ordered one. But, hey, it's not here yet, and I'm impatient. And I've always wanted to try controlling a camera from an Arduino. This seemed like the perfect excuse.

Why an Arduino rather than a Raspberry Pi or BeagleBone? Just because it's simpler and cheaper, and this project doesn't need much compute power. But everything here should be applicable to any microcontroller.

My Canon Rebel Xsi has a fairly simple wired remote control plug: a standard 2.5mm stereo phone plug. I say "standard" as though you can just walk into Radio Shack and buy one, but in fact it turned out to be surprisingly difficult, even when I was in Silicon Valley, to find them. Fortunately, I had found some, several years ago, and had cables already wired up waiting for an experiment.

The outside connector ("sleeve") of the plug is ground. Connecting ground to the middle ("ring") conductor makes the camera focus, like pressing the shutter button halfway; connecting ground to the center ("tip") conductor makes it take a picture. I have a wired cable release that I use for astronomy and spent a few minutes with an ohmmeter verifying what did what, but if you don't happen to have a cable release and a multimeter there are plenty of Canon remote control pinout diagrams on the web.

Now we need a way for the controller to connect one pin of the remote to another on command. There are ways to simulate that with transistors -- my Arduino-controlled robotic shark project did that. However, the shark was about a $40 toy, while my DSLR cost quite a bit more than that. While I did find several people on the web saying they'd used transistors with a DSLR with no ill effects, I found a lot more who were nervous about trying it. I decided I was one of the nervous ones.

The alternative to transistors is to use something like a relay. In a relay, voltage applied across one pair of contacts -- the signal from the controller -- creates a magnetic field that closes a switch and joins another pair of contacts -- the wires going to the camera's remote.

But there's a problem with relays: that magnetic field, when it collapses, can send a pulse of current back up the wire to the controller, possibly damaging it.

There's another alternative, though. An opto-isolator works like a relay but without the magnetic pulse problem. Instead of a magnetic field, it uses an LED (internally, inside the chip where you can't see it) and a photo sensor. I bought some opto-isolators a while back and had been looking for an excuse to try one. Actually two: I needed one for the focus pin and one for the shutter pin.

How do you choose which opto-isolator to use out of the gazillion options available in a components catalog? I don't know, but when I bought a selection of them a few years ago, it included a 4N25, 4N26 and 4N27, which seem to be popular and well documented, as well as a few other models that are so unpopular I couldn't even find a datasheet for them. So I went with the 4N25.

Wiring an opto-isolator is easy. You do need a resistor across the inputs (presumably because it's an LED). 380Ω is apparently a good value for the 4N25, but it's not critical. I didn't have any 380Ω resistors but I had a bunch of 330Ω so that's what I used. The inputs (the signals from the Arduino) go between pins 1 and 2, with a resistor; the outputs (the wires to the camera remote plug) go between pins 4 and 5, as shown in the diagram on this Arduino and Opto-isolators discussion, except that I didn't use any pull-up resistor on the output.
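
(A quick sanity check on that resistor, with my numbers rather than the article's: the Arduino output is 5V, and the 4N25's internal LED drops roughly 1.2V, so a 330Ω resistor gives (5 − 1.2) / 330 ≈ 11.5 mA through the LED, comfortably within its continuous-current rating.)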

Then you just need a simple Arduino program to drive the inputs. Apparently the camera wants to see a focus half-press before it gets the input to trigger the shutter, so I put in a slight delay there, and another delay while I "hold the shutter button down" before releasing both of them.

Here's some Arduino code to shoot a photo every ten seconds:

int focusPin = 6;             // pin wired to the focus opto-isolator
int shutterPin = 7;           // pin wired to the shutter opto-isolator

int focusDelay = 50;          // ms to hold focus before firing the shutter
int shutterOpen = 100;        // ms to "hold the shutter button down"
int betweenPictures = 10000;  // ms between photos: one every ten seconds

void setup()
{
    pinMode(focusPin, OUTPUT);
    pinMode(shutterPin, OUTPUT);
}

void snapPhoto()
{
    digitalWrite(focusPin, HIGH);
    delay(focusDelay);
    digitalWrite(shutterPin, HIGH);
    delay(shutterOpen);
    digitalWrite(shutterPin, LOW);
    digitalWrite(focusPin, LOW);
}

void loop()
{
    delay(betweenPictures);
    snapPhoto();
}

Naturally, since then we haven't had any dramatic clouds, and the lightning storms have all been late at night after I went to bed. (I don't want to leave my nice camera out unattended in a rainstorm.) But my intervalometer seemed to work fine in short tests. Eventually I'll make some actual time-lapse movies ... but that will be a separate article.

July 16, 2014

Wavelet Decompose (Again)

Yes, more fun things you can do with Wavelet Scales.

If you’ve been reading this blog for a bit (or just read through any of my previous postprocessing tutorials), then you should be familiar with Wavelet Decompose. I use them all the time for skin retouching as well as other things. I find that being able to think of your images in terms of detail scales opens up a new way of approaching problems (and some interesting solutions).

A short discussion on the GIMP Users G+ community led member +Marty Keil to suggest a tutorial on using wavelets for other things (particularly sharpening). Since I tend to use wavelet scales often in my processing (including sharpening), I figured I would sit down and enumerate some ways to use them:

  • Skin smoothing
  • Sharpening
  • Stain removal

Wavelets? What?

For our purposes (image manipulation), wavelet decomposition allows us to consider the image as multiple levels of detail components, that when combined will yield the full image. That is, we can take an image and separate it out into multiple layers, with each layer representing a discrete level of detail.

To illustrate, let’s have a look at my rather fetching model:


It was kindly pointed out to me that the use of the Lena image might perpetuate the problems with the objectification of women. So I swapped out the Lena image with a model that doesn't carry those connotations.

Running Wavelet Decompose on the image yields these 6 layers. These are arranged in increasing order of detail magnitude (scales 1-5 plus a residual layer).


Notice that each of the scales contains a particular set of details, starting with the finest details and becoming larger until you reach the residual scale. The residual scale doesn’t contain any fine details; instead, it consists mostly of color and tonal information.

This is very handy if you need to isolate particular features for modifications. Simply find the scale (or two) that contain the feature and modify it there without worrying as much about other details at the same location.

The Wavelet Decompose plug-in actually sets each of these layers’ modes (except the Residual) to “Grain Merge”. This allows each layer to contribute its details to the final result (which will look identical to the original starting layer with no modifications). The “Grain Merge” layer blend mode means that pixels at 50% value (RGB 127,127,127) will not affect the final result. This also means that if we paint on one of the scale layers with gray, it will effectively erase those details from the final image (keep this in mind for later).
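
If you’re curious what this looks like outside of GIMP, here is a minimal NumPy sketch of the same idea. It’s an approximation of mine, using differences of Gaussian blurs to stand in for the plug-in’s wavelet scales, but it shows why the layers sum back to the original exactly:

import numpy as np
from scipy.ndimage import gaussian_filter

def wavelet_decompose(img, levels=5):
    # img: 2-D (single channel) float array scaled 0..1
    scales = []
    current = img.astype(np.float64)
    for i in range(levels):
        blurred = gaussian_filter(current, sigma=2 ** i)
        scales.append(current - blurred)  # detail unique to this scale
        current = blurred
    residual = current  # only broad color/tonal information remains
    # residual + sum(scales) == img, which is exactly what the stack of
    # "Grain Merge" layers reconstructs (0.5 gray = no detail).
    return scales, residual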

Skin Smoothing (Redux)

I previously talked about using Wavelet Decompose for image retouching:


The first link was my original post on how I use wavelets to smooth skin tones. The second and third are examples of applying those principles to portraits. The last two articles are complete walkthroughs of a postprocessing workflow, complete with full-resolution examples to download if you want to try it and follow along.

I guess my point is that I’m re-treading a well worn path here, but I have actually modified part of my workflow so it’s not for naught (sorry, I couldn’t resist).

Getting Started

So let’s have a look at using wavelets for skin retouching again. We’ll use my old friend, Mairi for this.

Pat David Mairi Headshot Base Image Wavelet Decompose Frequency Separation
Mairi

When approaching skin retouching like this, I feel it’s important to pay attention to how light interacts with the skin. The way that light will penetrate the epidermis and illuminate under the surface is important. Couple that with the different types of skin structures, and you get a complex surface to consider.

For instance, there are very fine details in skin such as faint wrinkles and pores. These often contribute to the perceived texture of skin overall. There is also the color and toning of the skin under the surface as well. These all contribute to what we will perceive.

Let’s have a look at a 100% crop of her forehead.

Pat David Mairi Headshot Forehead Closeup Wavelet Decompose Frequency Separation

If I decompose this image into wavelet scales, I can amplify the details at each level by isolating them over the original. So, turning off all the layers except the original and the first few wavelet scales will amplify the fine details:

Pat David Mairi Headshot Forehead Closeup Wavelet Decompose Frequency Separation
Wavelet scales 1,2,3 over the original image.

You may notice that these fine wavelet scales seem to sharpen up the image. Yes, but we’re not talking about them right now. Stick with me - we’ll look at them a little later.

On the same idea, if I leave the original and the two biggest scales visible, I’ll get a nicely exaggerated view of the sub-surface imperfections:

Pat David Mairi Headshot Forehead Closeup Wavelet Decompose Frequency Separation
Wavelet scales 4,5 over the original image.

What we see here are uneven skin tones not caused by surface imperfections, but by deeper tones in the skin. It is this un-evenness that I often try to subdue, and that I think contributes to a more pleasing overall skin tone.

To illustrate, here I have used a bilateral blur on the largest detail scale (Wavelet scale 5) only. Consider the rather marked improvement over the original from working on just this single detail scale. Notice also that all of the finer details remain, keeping the skin texture looking real.

Pat David Mairi Headshot Forehead Closeup Wavelet Decompose Frequency Separation
Smoothing only the largest detail scale (Wavelet scale 5) results
(mouseover to compare to original)

Smoothing Skin Tones

With those results in mind, I can illustrate how I will generally approach this type of skin retouching on a face. I usually start by considering specific sections of a face. I try to isolate my work along common facial contours to avoid anything strange happening across features (like smile lines or noses).

Pat David Mairi Headshot Retouch Regions Wavelet Decompose Frequency Separation

I also like to work in these regions as shown because the amount of smoothing that may be needed is not always consistently the same. The forehead may require more than the cheeks, and both may require less than the nose for instance. This allows me to tailor the retouching I do for each region separately in order to arrive at a more consistent result across the entire face.

I’ll use the free-select tool to create a selection of my region, usually with the “Feather edges” option turned on with a large-ish radius (around 30-45 pixels usually). This lets my edits blend a little smoother into the untouched portions of the image.

These days I’ve adjusted my workflow to minimize how much I actually retouch. I’ll usually look at the residual layer first to check the color tones across an area. If they are too spotty or blotchy, I’ll use a bilateral blur to even them out. There is no bilateral blur built into GIMP directly, so on the suggestion of David Tschumperlé (G'MIC) I’ve started using G'MIC with:

Filters → G'MIC...
Repair → Smooth [bilateral]

Once I’m happy with the results on the residual layer (or it doesn’t need any work), I’ll look at the largest detail scale (usually Wavelet scale 5). Lately, this has been the scale level that usually produces the greatest impact quickly. I’ll usually use a Spatial variance of 10, and a Value variance of 7 (with 2 iterations) on the bilateral blur filter. Of course, adjust these as necessary to suit your image and taste.
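
Outside of GIMP, the same operation is easy to script. Here’s a sketch of mine using OpenCV’s bilateral filter; its sigma parameters aren’t scaled identically to G'MIC’s variances, so treat the numbers as starting points rather than exact equivalents:

import cv2

img = cv2.imread("wavelet-scale-5.png")  # hypothetical filename
# sigmaColor controls the value falloff, sigmaSpace the spatial falloff;
# a non-positive d lets OpenCV derive the neighborhood from sigmaSpace.
smoothed = cv2.bilateralFilter(img, -1, 7, 10)
cv2.imwrite("wavelet-scale-5-smoothed.png", smoothed)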

Here is the result of following those steps on the image of Mairi (less than 5 minutes of work):

Pat David Mairi Headshot Forehead Closeup Wavelet Decompose Frequency Separation Residual
Bilateral smoothing on Residual and Wavelet scale 5 only
(mouseover to compare to original)

This was only touching the Residual and Wavelet scale 5 with a bilateral blur and nothing else. As you can see, this method provides a very easy way to get to a great base to begin further work on (spot healing as needed, etc.).

Sharpening

I had actually mentioned this in each of my previous workflow tutorials, but it’s worth repeating here. I tend to use the lowest couple of wavelet scales to sharpen my images when I’m done. This is really just a manual version of using the Wavelet Sharpen plugin.

The first couple of detail scales will contain the highest frequency details. I’ve found that using them to sharpen an image up works fantastic. Here, for example, is our photo of Mairi from above after retouching, but now I use a copy of Wavelet scales 1 & 2 over the image to sharpen those details:

Pat David Mairi Headshot Forehead Closeup Wavelet Decompose Frequency Separation Sharpen
Wavelet scale 1 & 2 copied over the result to sharpen.
(mouseover to compare)

I’ve purposefully left both of the detail scales on full opacity to demonstrate the effect. I feel this is a far better method for sharpening compared to regular sharpen (I’ve never gotten good results using it) or even to Unsharp Mask (USM). USM can tend to halo around high contrast areas depending on the settings.

I would also adjust the opacity of the scales to adjust how much they would sharpen. If I wanted to avoid sharpening the background for instance, I would also either mask it out or just paint gray on the detail scale to erase the data in that area.
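
In terms of the NumPy decomposition sketch from earlier, this kind of sharpening amounts to nothing more than adding the finest scales in a second time, with opacity as a weight (again my sketch, not the plugin’s code):

def wavelet_sharpen(img, scales, opacity=1.0):
    # The image already contains every scale once; re-adding scales 1 & 2
    # doubles the finest details. 'opacity' plays the role of the layer
    # opacity slider.
    return img + opacity * (scales[0] + scales[1])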

It doesn’t need to stop at just fine-detail sharpening, though. The nature of the wavelet decomposition is that you also get other scale data that can be useful for enhancing contrast on larger details. For instance, if I wanted to enhance the local contrast in the sweater of my image, I could use one of the larger scales over the image again and use a layer mask to control the areas that are affected.

To illustrate, here I have also copied scales 3, 4, and 5 over my image. I’ve applied layer masks to the layers to only allow them to affect the sweater. Using these scales allows a nice local contrast to be applied, adding a bit of “pop” to the sweater texture without increasing contrast on the model’s face or hair.

Pat David Mairi Headshot Wavelet Decompose Frequency Separation Sharpen Enhance
Using coarser detail scales to add some local contrast to the texture of the sweater
(mouseover to compare to previous)

Normally, if I didn’t have a need to work on wavelet scales, I would just use the Wavelet Sharpen plugin to add a touch of sharpening as needed. If I do find it useful (for whatever reason) to work on detail scales already, then I normally just use the scales directly for manually sharpening the image. Occasionally I’ll create the wavelet scales just to have access to the coarse detail levels to bump local contrast to taste, too.

Once you start thinking in terms of detail scales, it’s hard to not get sucked in to finding all sort of uses for them that can be very, very handy.

Stain Removal

What if the thing we want to adjust is not sub-dermal skin retouching, but rather something more like a stain on a child’s clothing? As far as wavelets are concerned, it’s the same thing. So let’s look at something like this:

Pat David Wavelet Frequency Separation Stain
30% of the food made it in!

So there’s a small stain on the shirt. We can fix this easy, right?!

Let’s zoom 100% into the area we are interested in fixing:

Pat David Wavelet Frequency Separation Stain Zoom

If we run a Wavelet decomposition on this image, we can see that the areas that we are interested in are mostly confined to the coarser scales + residual (mostly scales 4, 5, and residual):

Pat David Wavelet Frequency Separation Stain Zoom

More importantly, the very fine details that give texture to her shirt, like the weave of the cotton and stitching of the letters, are nicely isolated on the finer detail scales. We won’t really have to touch the finer scales to fix the stains - so it’s trivially easy to keep the texture in place.

As a comparison, imagine having to use a clone or heal tool to accomplish this. You would have a very hard time getting the cloth weave to match up correctly, thus creating a visual break that would make the repair more obvious.

I start on the residual scale, and work on getting the broad color information fixed. I like to use a combination of the Clone tool, and the Heal tool to do this. Paying attention to the color areas I want to keep, I’ll use the Clone tool to bring in the correct tone with a soft-edged brush. I’ll then use the Heal tool to blend it a bit better into the surrounding textures.

For example, here is the work I did on the Residual scale to remove the stain color information:

Pat David Wavelet Frequency Separation Stain Zoom Residual Fix
Clone/Heal of the Wavelet Residual layer
(mouseover to compare to original)

Yes, I know it’s not a pretty patch, but just a quick pass to illustrate what the results can look like. Here is what the above changes to the Wavelet residual layer produces:

Pat David Wavelet Frequency Separation Stain Zoom Residual Fix
Composite image with retouching only on the Wavelet Residual layer
(mouseover to compare to original)

Not bad for a couple of minutes work on a single wavelet layer. I follow the same method on the next two wavelet scales 4 & 5. Clone similar areas into place and Heal to blend into the surrounding texture. After a few minutes, I arrive at this result:

Pat David Wavelet Frequency Separation Stain Zoom Repair Fix
Result of retouching Wavelet residual, 4, and 5 layers only
(mouseover to compare to original)

Perfect? No. It’s not. It was less than 5 minutes of work total. I could spend another 5 minutes or so and get a pretty darn good looking result, I think. The point is more about how easy it is once the image is considered with respect to levels of detail. Look where the color is and you’ll notice that the fabric texture remains essentially unchanged.

As the father of a three-year-old, believe me when I say that this technique has proved invaluable to me over the past few years...

Conclusion

I know I talk quite a bit about wavelet decomposition for retouching. There is just a wonderful bunch of tasks that become much easier when considering an image as a sum of discrete detail parts. It’s just another great tool to keep in mind as you work on your images.

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, or make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t it’s not going to affect me writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.


July 15, 2014

Fanart by Anastasia Majzhegisheva – 11

Anastasia keeps playing with Morevna’s backstory, and this time she brings a short manga/comic strip.

2014-07-14-manga-morevna

July 14, 2014

Notes from Calligra Sprint. Part 2: Memory fragmentation in Krita fixed

During the second day of the Calligra sprint in Deventer we split into two small groups. Friedrich, Thorsten, Jigar and Jaroslaw were discussing global Calligra issues, while Boud and I concentrated on the performance of Krita and its memory consumption.

We tried to find out why Krita is not fast enough for painting with big brushes on huge images. For our tests we created a two-layer image of 8k by 8k pixels (which is 3×256 MiB: 2 layers + projection) and started to paint with a 1k by 1k pixel brush. Just to compare, SAI Painting Tool simply forbids creating images larger than 5k by 5k pixels and brushes larger than 500 pixels. And during these tests we found out a really interesting thing...

I guess everyone has at least once read about custom memory management in C++. All those custom new/delete operators and pool allocators usually seem so "geekish" and for "really special purposes only". To tell you the truth, I thought I would never need to use them in my life, because the standard library allocators "should be enough for everyone". Well, until curious things started to happen...

Well, the first sign of the problems appeared quite long ago. People started to complain that, according to system monitor tools (like 'top'), Krita ate quite a lot of memory. We could never reproduce it. What's more, 'massif' and our internal tile counters always showed that we had no memory leaks: we used exactly the number of tiles we needed to store an image of a particular size.

But while making these 8k-image tests, we started to notice that although the number of tiles doesn't grow, the memory reported by 'top' grows quite significantly. Instead of occupying the usual 1.3 GiB such an image would need (layer data plus about 400 MiB for brushes and textures), the reported memory grew up to 3 GiB and higher, until the OOM Killer woke up and killed Krita. This gave us clear evidence that we had a fragmentation problem.

Indeed, during every stroke we have to create about 15000(!) 16 KiB objects (tiles). It is quite probable that after a couple of strokes the memory becomes rather fragmented. So we decided to try boost::pool for the allocation of these chunks... and it worked! Instead of growing, the memory footprint stabilized at 1.3 GiB. And that is not even counting the fact that boost::pool doesn't return freed memory to the system until destruction or explicit purging [0].

Now this new memory management code is already in master! According to some synthetic tests, painting should become a bit faster. Not to mention the much smaller memory usage.

Conclusion:

If you see unusually high memory consumption in your application, and the results measured by massif differ significantly from what you see in 'top', you probably have a fragmentation problem. To prove it, try not returning memory back to the system, but reusing it instead. The consumption might fall significantly, especially if you allocate memory in different threads.



[0] - You can release unused memory by explicitly calling release_memory(), but 1) the pool must be ordered, which hurts performance; 2) the release_memory() operation takes about 20-30 seconds(!), so it is of no use to us.



July 13, 2014

Notes from Calligra Sprint in Deventer. Part 1: Translation-friendly code

Last weekend we had a really nice sprint in Deventer, which was hosted by Irina and Boudewijn (thank you very much!). We spent two days on discussions, planning, coding and profiling our software, which produced many fruitful results.

On Saturday we were mostly talking and discussing our current problems, like porting Calligra to Qt5 and splitting libraries more sanely (e.g. we shouldn't demand that mobile applications compile and link against QWidget-based libraries). Although these problems are quite important, I will not describe them now (other people will blog about them very soon). Instead I'm going to tell you about a different problem we also discussed — translations.

The point is, when using the i18n() macro it is quite easy to make mistakes which will make a translator's life a disaster, so we decided to put together a set of rules of thumb which developers should follow to avoid creating such issues. Here are these five short rules:

  1. Avoid passing a localized string into a i18n macro
  2. Add context to your strings
  3. Undo commands must have (qtundo-format) context
  4. Use capitalization properly
  5. Beware of sticky strings

Next we will talk about each of the rules in detail:

1. Avoid passing a localized string into a i18n macro

They might not be compatible in case, gender or anything else you have no idea about.

// Such code is incorrect in 99% of the cases
QString str = i18n("foo bar");
i18n("Some nice string %1", str);


Example 1

// WRONG:
wrongString = i18n("Delete %1", XXX ? i18n("Layer") : i18n("Mask"))

// CORRECT:
correctString = XXX ? i18n("Delete Layer") : i18n("Delete Mask")

Such string concatenation is correct in English, but it is completely inappropriate in many languages, in which a noun can change its form depending on the case. The problem is that in the macro i18n("Mask") the word "Mask" is used in the nominative case (it is a subject), but in the expression "Delete Mask" it is in the accusative case (it is an object). In Russian, for example, the two strings will be different, and the translator will not be able to solve the issue easily.

Example 2

// WRONG:
wrongString = i18n("Last %1", XXX ? i18n("Monday") : i18n("Friday"))

// CORRECT:
correctString = XXX ? i18n("Last Monday") : i18n("Last Friday")

This case is more complicated. Both words "Monday" and "Friday" are used in the nominative case, so they will not change their form. But "Monday" and "Friday" have different genders in Russian, so the adjective "Last" must change its form depending on the word that follows. Therefore we need separate strings for the two terms.

The tricky thing here is that we have 7 days in a week, so ideally we should have 7 separate strings for "Last ...", 7 more strings for "Next ..." and so on.

Example 3 — Using registry values

// WRONG:
KisFilter *filter = filterRegistry->getFilter(id);
i18n("Apply %1", filter->name())

// CORRECT: is there a correct way at all?
KisFilter *filter = filterRegistry->getFilter(id);
i18n("Apply: \"%1\"", filter->name())

Just imagine how many objects can be stored inside the registry. It can be a dozen, a hundred or a thousand of objects. We cannot control the case, gender and form of each object in the list (can we?). The easiest approach here is to put the object name in quotes and "cite" that literally. This will hide the problem in most of the languages.

2. Add context to your strings

Prefer adding context to your strings rather than expecting translators to read your thoughts.

Here is an example of three strings for the blur filter. They illustrate the three most important translation contexts:

i18nc("@title:window", "Blur Filter")

Window titles are usually nouns (and translated as nouns). There is no limit on the size of the string.

i18nc("@action:button", "Apply Blur Filter")

Button actions are usually verbs. The length of the string is also not very important.

i18nc("@action:inmenu", "Blur")

Menu actions are also verbs, but the length of the string should be as short as possible.

3. Undo commands must have (qtundo-format) context

Adding this context tells the translators to use “Magic String” functionality. Such strings are special and are not reusable anywhere else.

In Krita and Calligra this context is now added automatically, because we use C++ type-checking mechanism to limit the strings passed to an undo command:

KUndo2Command(const KUndo2MagicString &text, KUndo2Command *parent);
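
In practice this means building command names with the kundo2_i18n() helper that Krita and Calligra use for this, rather than plain i18n(); a minimal sketch (the command name is illustrative):

// kundo2_i18n() yields a KUndo2MagicString carrying the (qtundo-format)
// context; a plain i18n() QString would not compile here.
KUndo2Command *cmd = new KUndo2Command(kundo2_i18n("Move Layer"));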

4. Use capitalization properly

See KDE policy for details.

5. Beware of sticky strings

When the same string without a context is reused in different places (and especially in different files), double-check whether it is appropriate.

E.g. i18n("Duplicate") can be either a brush engine name (a noun) or a menu action for cloning a layer (a verb). Obviously, not all languages use the same form of a word for both the noun and verb meanings. Such strings must be split by assigning them different contexts.
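
The usual fix looks something like this (the context strings here are illustrative):

// Same English word, but two independent messages for translators:
i18nc("@item:inlistbox Brush engine name", "Duplicate");      // noun
i18nc("@action:inmenu Clone the current layer", "Duplicate"); // verb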

Alexander Potashev has created a special Python script that iterates through all the strings in a .po file and reports all the sticky strings in a convenient format.

Conclusion

Of course, all these rules are only recommendations. They all have exceptions and limitations, but following them in the most trivial cases will make translators' lives much easier.

In the next part of my notes from the sprint I will write about how Boud and I hunted down memory fragmentation problems in Krita on Sunday... :)

July 12, 2014

Trapped our first pack rat

[White throated woodrat in a trap] One great thing about living in the country: the wildlife. I love watching animals and trying to photograph them.

One down side of living in the country: the wildlife.

Mice in the house! Pack rats in the shed and the crawlspace! We found out pretty quickly that we needed to learn about traps.

We looked at traps at the local hardware store. Dave assumed we'd get simple snap-traps, but I wanted to try other options first. I'd prefer to avoid killing if I don't have to, especially killing in what sounds like a painful way.

They only had one live mousetrap. It was a flimsy plastic thing, and we were both skeptical that it would work. We made a deal: we'd try two of them for a week or two, and when (not if) they didn't work, then we'd get some snap-traps.

We baited the traps with peanut butter and left them in the areas where we'd seen mice. On the second morning, one of the traps had been sprung, and sure enough, there was a mouse inside! Or at least a bit of fur, bunched up at the far inside end of the trap.

We drove it out to open country across the highway, away from houses. I opened the trap, and ... nothing. I looked in -- yep, there was still a furball in there. Had we somehow killed it, even in this seemingly humane trap?

I pointed the open end down and shook the trap. Nothing came out. I shook harder, looked again, shook some more. And suddenly the mouse burst out of the plastic box and went HOP-HOP-HOPping across the grass away from us, bounding like a tiny kangaroo over tufts of grass, leaving us both giggling madly. The entertainment alone was worth the price of the traps.

Since then we've seen no evidence of mice inside, and neither of the traps has been sprung again. So our upstairs and downstairs mice must have been the same mouse.

But meanwhile, we still had a pack rat problem (actually, probably, white-throated woodrats, the creature that's called a pack rat locally). Finding no traps for sale at the hardware store, we went to Craigslist, where we found a retired wildlife biologist just down the road selling three live Havahart rat traps. (They also had some raccoon-sized traps, but the only raccoon we've seen has stayed out in the yard.)

We bought the traps, adjusted one a bit where its trigger mechanism was bent, baited them with peanut butter and set them in likely locations. About four days later, we had our first captive little brown furball. Much smaller than some of the woodrats we've seen; probably just a youngster.

[White throated woodrat bounding away] We drove quite a bit farther than we had for the mouse. Woodrats can apparently range over a fairly wide area, and we didn't want to let it go near houses. We hiked a little way out on a trail, put the trap down and opened both doors. The woodrat looked up, walked to one open end of the trap, decided that looked too scary; walked to the other open end, decided that looked too scary too; and retreated back to the middle of the trap.

We had to tilt and shake the trap a bit, but eventually the woodrat gathered up its courage, chose a side, darted out and HOP-HOP-HOPped away into the bunchgrass, just like the mouse had.

No reference I've found says anything about woodrats hopping, but the mouse did that too. I guess hopping is just what you do when you're a rodent suddenly set free.

I was only able to snap one picture before it disappeared. It's not in focus, but at least I managed to catch it with both hind legs off the ground.

Call to translators

We plan to release Stellarium 0.13.0 around July 20.

There are new strings to translate in this release because we have several new plugins and features, and a refactored GUI. If you can assist with translation into any of the 132 languages which Stellarium supports, please go to Launchpad Translations and help us out: https://translations.launchpad.net/stellarium

Thank you!

July 11, 2014

This Land Is Mine is yours

Due to horrific recent events, This Land Is Mine has gone viral again.

Here’s a reminder that you don’t need permission to copy, share, broadcast, post, embed, subtitle, etc. Copying is an act of love, please copy and share. Yes means yes.

As for the music, it is Fair Use: This Land Is Mine is a PARODY of “The Exodus Song.” That music was sort of the soundtrack of American Zionism in the 1960s and ’70s. It was supposed to express Jewish entitlement to Israel. By putting the song in the mouth of every warring party, I’m critiquing the original song.

 



July 09, 2014

Invert the colors of qcad3 icons

QCad is an open-source 2D CAD program that I've been kind of fond of for a while. It runs on Windows, Mac and Linux; its version 2 has been the base of LibreCAD, and version 3, which is a couple of months old already, is a huge evolution over version 2. Their developers have always struggled between the...

July 08, 2014

Big and contrasty mouse cursors

[Big mouse cursor from Comix theme] My new home office, with the big picture windows and the light streaming in, comes with one downside: it's harder to see my screen.

A sensible person would, no doubt, keep the shades drawn when working, or move the office to a nice dim interior room without any windows. But I am not sensible and I love my view of the mountains, the gorge and the birds at the feeders. So accommodations must be made.

The biggest problem is finding the mouse cursor. When I first sit down at my machine, I move my mouse wildly around looking for any motion on the screen. But the default cursors, in X and in most windows, are little subtle black things. They don't show up at all. Sometimes it takes half a minute to figure out where the mouse pointer is.

(This wasn't helped by a recent bug in Debian Sid where the USB mouse would disappear entirely, and need to be unplugged from USB and plugged back in before the computer would see it. I never did find a solution to that, and for now I've downgraded from Sid to Debian testing to make my mouse work. I hope they fix the bug in Sid eventually, rather than porting whatever "improvement" caused the bug to more stable versions. Dealing with that bug trained me so that when I can't see the mouse cursor, I always wonder whether I'm just not seeing it, or whether it really isn't there because the kernel or X has lost track of the mouse again.)

What I really wanted was bigger mouse cursor icons in bright colors that are visible against any background. This is possible, but it isn't documented at all. I did manage to get much better cursors, though different windows use different systems.

So I wrote up what I learned. It ended up too long for a blog post, so I put it on a separate page: X Cursor Themes for big and contrasty mouse cursors.

It turned out to be fairly complicated. You can replace the existing cursor font, or install new cursor "themes" that many (but not all) apps will honor. You can change theme name and size (if you choose a scalable theme), and some apps will honor that. You have to specify theme and size separately for GTK apps versus other apps. I don't know what KDE/Qt apps do.

I still have a lot of unanswered questions. In particular, I was unable to specify a themed cursor for xterm windows, and for non text areas in emacs and firefox, and I'd love to know how to do that.

But at least for now, I have a great big contrasty blue mouse cursor that I can easily see, even when I have the shades on the big windows open and the light streaming in.

Important AppData milestone

Today we reached an important milestone. Over 25% of applications in Fedora now ship AppData files. The actual numbers look like this:

  • Applications with descriptions: 262/1037 (25.3%)
  • Applications with keywords: 112/1037 (10.8%)
  • Applications with screenshots: 235/1037 (22.7%)
  • Applications in GNOME with AppData: 91/134 (67.9%)
  • Applications in KDE with AppData: 5/67 (7.5%)
  • Applications in XFCE with AppData: 2/20 (10.0%)
  • Application addons with MetaInfo: 30

We’ve gone up a couple of percentage points in the last few weeks, mostly with the help of Ryan Lerch, who’s actually been writing AppData files and taking screenshots for upstream projects. He’s been concentrating on the developer tools for the last week or so, as this is one of the key groups of people we’re targeting for Fedora 21.

One of the things that AppData files allow us to do is be smarter about suggesting “Picks” on the overview page. For 3.10 and 3.12 we had a fairly short static list that we chose from at random. For 3.14 we’ve got a new algorithm that tries to find software similar to the apps you already have installed, and suggests those as well. So if I have Anjuta and Devhelp installed, it might suggest D-Feet or Glade.

July 04, 2014

Detecting wildlife with a PIR sensor (or not)

[PIR sensor] In my last crittercam installment, the NoIR night-vision crittercam, I was having trouble with false positives, where the camera would trigger repeatedly after dawn as leaves moved in the wind and the morning shadows marched across the camera's field of view. I wondered if a passive infra-red (PIR) sensor would be the answer.

I got one, and the answer is: no. It was very easy to hook up, and didn't cost much, so it was a worthwhile experiment; but it gets nearly as many false positives as camera-based motion detection. It isn't as sensitive to wind, but as the ground and the foliage heat up at dawn, the moving shadows are just as much a problem as they were with image-based motion detection.

Still, I might be able to combine the two, so I figure it's worth writing up.

Reading inputs from the HC-SR501 PIR sensor

[PIR sensor pins]

The PIR sensor I chose was the common HC-SR501 module. It has three pins -- Vcc, ground, and signal -- and two potentiometer adjustments.

It's easy to hook up to a Raspberry Pi because it can take 5 volts in on its Vcc pin, but its signal is 3.3v (a digital signal -- either motion is detected or it isn't), so you don't have to fool with voltage dividers or other means to get a 5v signal down to the 3v the Pi can handle. I used GPIO pin 7 for signal, because it's right on the corner of the Pi's GPIO header and easy to find.

There are two ways to track a digital signal like this. Either you can poll the pin in an infinite loop:

import time
import RPi.GPIO as GPIO

pir_pin = 7        # GPIO pin wired to the PIR sensor's signal line
sleeptime = 1      # seconds to wait between polls

GPIO.setmode(GPIO.BCM)
GPIO.setup(pir_pin, GPIO.IN)

while True:
    # The signal pin reads high while the sensor sees motion
    if GPIO.input(pir_pin):
        print "Motion detected!"
    time.sleep(sleeptime)

or you can use interrupts: tell the Pi to call a function whenever it sees a low-to-high transition on a pin:

import time
import RPi.GPIO as GPIO

pir_pin = 7        # GPIO pin wired to the PIR sensor's signal line
sleeptime = 300    # the main loop only has to wake up occasionally

def motion_detected(pir_pin):
    # Called by RPi.GPIO whenever the pin goes from low to high
    print "Motion Detected!"

GPIO.setmode(GPIO.BCM)
GPIO.setup(pir_pin, GPIO.IN)

GPIO.add_event_detect(pir_pin, GPIO.RISING, callback=motion_detected)

while True:
    print "Sleeping for %d sec" % sleeptime
    time.sleep(sleeptime)

Obviously the second method is more efficient. But I already had a loop set up checking the camera output and comparing it against previous output, so I tried that method first, adding support to my motion_detect.py script. I set up the camera pointing at the wall and, as root, ran the script, telling it to use a PIR sensor on pin 7 and giving the local and remote directories to store photos:

# python motion_detect.py -p 7 /tmp ~pi/shared/snapshots/

and whenever I walked in front of the camera, it triggered and took a photo. That was easy!

Reliability problems with add_event_detect

So easy that I decided to switch to the more efficient interrupt-driven model. Writing the code was easy, but I found it triggered more often: if I walked in front of the camera (and stayed the requisite 7 seconds or so that it takes raspistill to get around to taking a photo), when I walked back to my desk, I would find two photos, one showing my feet and the other showing nothing. It seemed like it was triggering when I got there, but also when I left the scene.

A bit of web searching indicates this is fairly common: that with RPi.GPIO a lot of people see triggers on both rising and falling edges -- e.g. when the PIR sensor starts seeing motion, and when it stops seeing motion and goes back to its neutral state -- when they've asked for just GPIO.RISING. Reports for this go back to 2011.

On the other hand, it's also possible that instead of seeing a GPIO falling edge, what was happening was that I was getting multiple calls to my function while I was standing there, even though the RPi hadn't finished processing the first image yet. To guard against that, I put a line at the beginning of my callback function that disabled further callbacks, then I re-enabled them at the end of the function after the Pi had finished copying the photo to the remote filesystem. That reduced the false triggers, but didn't eliminate them entirely.
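
For the record, the guard looked roughly like this (a sketch, not the exact script):

def motion_detected(pir_pin):
    # Ignore further edges until this event has been fully handled
    GPIO.remove_event_detect(pir_pin)
    print "Motion Detected!"
    # ... take the photo and copy it to the remote filesystem ...
    GPIO.add_event_detect(pir_pin, GPIO.RISING, callback=motion_detected)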

Oh, well. The sun was getting low by this point, so I stopped fiddling with the code and put the camera out in the yard with a pile of birdseed and peanut suet nuggets in front of it. I powered it on, sshed to the Pi, ran the motion_detect script, came back inside, and ran tail -f on the output file.

I had dinner and worked on other things, occasionally checking the output -- nothing! Finally I sshed to the Pi and ran ps aux and discovered the script was no longer running.

I started it again, this time keeping my connection to the Pi active so I could see when the script died. Then I went outside to check the hardware. Most of the peanut suet nuggets were gone -- animals had definitely been by. I waved my hands in front of the camera a few times to make sure it got some triggers.

Came back inside -- to discover that Python had gotten a segmentation fault. It turns out that nifty GPIO.add_event_detect() code isn't all that reliable, and can cause Python to crash and dump core. I ran it a few more times and sure enough, it crashed pretty quickly every time. Apparently GPIO.add_event_detect needs a bit more debugging, and isn't safe to use in a program that has to run unattended.

Back to polling

Bummer! Fortunately, I had saved the polling version of my program, so I hastily copied that back to the Pi and started things up again. I triggered it a few times with my hand, and everything worked fine. In fact, it ran all night and through the morning, with no problems except the excessive number of false positives, already mentioned.

[piñon mouse] False positives weren't a problem at all during the night. I'm fairly sure the problem happens when the sun starts hitting the ground. Then there's a hot spot that marches along the ground, changing position in a way that's all too obvious to the infra-red sensor.

I may try cross-checking between the PIR sensor and image changes from the camera. But I'm not optimistic about that working: they both get the most false positives at the same times, at dawn and dusk when the shadow angle is changing rapidly. I suspect I'll have to find a smarter solution, doing some image processing on the images as well as cross-checking with the PIR sensor.

I've been uploading photos from my various tests here: Tests of the Raspberry Pi Night Vision Crittercam. And as always, the code is on github: scripts/motioncam with some basic documentation on my site: motion-detect.py: a motion sensitive camera for Raspberry Pi or other Linux machines. (I can't use github for the documentation because I can't seem to find a way to get github to display html as anything other than source code.)

July 02, 2014

Anaconda Crash Recovery

Whoah! Another anaconda post! Yes! You should know that the anaconda developers are working hard at fixing bugs, improving features, and adding enhancements all the time, blog posts about it or not. :)

Today Chris and I talked about how the UI might work for anaconda crash recovery. So here’s the thing: Anaconda is completely driven by kickstart. Every button, selection, or thing you type out in the UI gets translated into kickstart instructions in memory. So, why not save that kickstart out to disk when anaconda crashes? Then, any configuration and customization you’ve done would be saved. You could then load up anaconda afterwards with the kickstart and it would pre-fill in all of your work so you could continue where you left off!
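
For a sense of what would get saved, a kickstart file is just a flat list of installation commands; a tiny illustrative fragment (not anaconda's actual output) might look like:

# Each UI choice becomes a plain-text kickstart command:
lang en_US.UTF-8
keyboard us
timezone America/New_York --utc
rootpw --lock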

However! Anaconda is a special environment, of course. We can’t just save to disk. I mean, okay, we could, but then we couldn’t use that disk as an install target after restarting the installer post-crash, because we’d have to mount it to read the kickstart file off of it! Eh. So it’s a bit complicated. Chris and I thought it’d be best to keep this simple (at least to start) and allow for saving the kickstart to an external disk to avoid this kind of hairy issue.

Chris and I talked about how it would be cool if the crash screen could just say, “insert a USB disk if you’d like to save your progress,” and we could auto-detect when the disk was inserted, save, and report back to the user that we saved. However, blivet (the storage library used by anaconda) doesn’t yet have support for autodetecting devices. So what I thought we could do instead is have a “Save kickstart” button, and that button would kick off the process of searching for the new disk, reporting to the user if they still needed to insert one or if there was some issue with the disk. Finally, once the kickstart is saved out, it could report a status that it was successfully saved.

Another design consideration I talked over with bcl for a bit – it would be nice to keep this saving process as simple as possible. Can we avoid having a file chooser? Can we just save to the root of the inserted disk and leave it at that? That would save users a lot of mental effort.

The primary use case for this functionality is crash recovery. It crashes, we offer to save your work. One additional case is that you’re quitting the installer and want to save your work – this case is rarer, but maybe it would be worth offering to save during normal quit too.

So here are my first cuts at trying to mock something out here. Please fire away and poke holes!

So this is what you’d first see when anaconda crashes:
00-CrashDialog

You insert the disk and then you hit the “Save kickstart” button, and it tries to look for the disk:
01-MountingDisk

Success – it saved out without issue.
02A-Success

Oooopsie! You got too excited and hit “Save kickstart” without inserting the disk.
02B-NoDiskFound

Maybe your USB stick is bricked? Something went wrong. Maybe the file system’s messed up? Better try another stick:
02C-MountingError

Hope this makes sense. My Inkscape SVG source is available if you’d like to tweak or play around with this!

Comments / feedback / ideas welcomed in the comments or on the anaconda-devel list.

Blurry Screenshots in GNOME Software?

Are you a pixel perfect kind of maintainer? Frustrated by slight blurriness in screenshots when using GNOME Software?

If you have one screenshot, capture a PNG of size 752×423. If you have more than one screenshot use a size of 624×351.

If you use any other 16:9 aspect ratio resolution, we’ll scale your screenshot when we display it. If you use some crazy non-16:9 aspect ratio, we’ll add padding and possibly scale it as well, which is going to look pretty bad. That said, any screenshot is better than no screenshot, so please don’t start removing <screenshot> tags.
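
For reference, screenshots are declared in the AppData file itself; a minimal entry might look like the sketch below (the URL is illustrative, and exact schema details can vary with the AppStream version):

<screenshots>
  <screenshot type="default">http://example.com/myapp-752x423.png</screenshot>
</screenshots>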

June development results

Last month we worked on improving the user interface of Synfig, and now we are happy to share the results of our work...

July 01, 2014

KDE aux RMLL 2014

Post in French, English translation below…

Dans quelques jours débuteront les 15e Rencontres Mondiales du Logiciel Libre à Montpellier, du 5 au 11 juillet.
Ces rencontres débuteront par un week-end grand public dans le Village du Libre, dans lequel nous aurons un stand de démonstration des logiciels de la communauté KDE.

Ensuite durant toute la semaine se tiendront des conférences sur différents thèmes, la programmation complète se trouve ici. J’aurai le plaisir de présenter une conférence sur les nouveautés récentes concernant les logiciels libres pour l’animation 2D, programmée le jeudi à 10h30, et suivie par un atelier de création libre sur le logiciel de dessin Krita de 14h à 17h.

Passez nous voir au stand KDE ou profiter des conférences et ateliers si vous êtes dans le coin!

En passant, un petit rappel pour deux campagnes importantes de financement participatif:
-Le Kickstarter pour booster le développement de la prochaine version de Krita vient de passer le premier palier d’objectif ! Il reste maintenant 9 jours pour atteindre le second palier, qui nous permettrait d’embaucher Sven avec Dmitry pour les 6 prochains mois.

-La campagne pour financer le Randa Meeting 2014, réunion permettant aux contributeurs de projets phares de la communauté KDE de concentrer leurs efforts. Celle-ci se termine dans 8 jours.

Pensez donc si ce n’est déjà fait à soutenir ces projets ;)

show_RMLL_small

meeting_RMLL_small

banner_RMLL_hz_566x73

In a few days will begin the 15th “Rencontres Mondiales du Logiciel Libre” in Montpellier, from the 5th to the 11th of July. This event will begin with a weekend for the general public at the “Village du Libre”, where we will have a KDE stand to show the cool software from our community.

Then for the whole week there will be conferences on several topics; the full schedule is here. I’ll have the pleasure of presenting a talk about recent news on free software for 2D animation on Thursday at 10:30 am, followed by a workshop on free creation with the Krita painting software from 2 to 5 pm.

Come say hello at the KDE stand or enjoy the conferences and workshop if you’re around!

On a side note, a little reminder about two crowdfunding campaigns:
-The Kickstarter to boost Krita development just reached its first milestone today! We now have 9 days left to reach the next milestone, which will allow us to hire Sven together with Dmitry for the next 6 months.

-The Randa Meeting 2014 campaign, this meeting will allow contributors from key KDE projects to gather and get even more productive than usual.

So think about helping those projects if you haven’t already ;)

Success!

With 518 backers and 15,157 euros, we've passed the target goal and we're 100% funded. That means that Dmitry can work on Krita for the next six months, adding a dozen hot new features and improvements to Krita. We're not done with the kickstarter, though, there are still eight days to go! And any extra funding will go straight into Krita development as well. If we reach the 30,000 euro level, we'll be able to fund Sven Langkamp as well, and that will double the number of features we can work on for Krita 2.9.


And then there's the super-stretch goal... We already have a basic package for OSX, but it needs some really heavy development. It currently only runs on OSX 10.9 Mavericks, krita only seees 1GB of memory, there are OpenGL issues, there GUI issues, there are missing dependencies, missing brush engines. Lots of work to be done. But we've proven now that this goal is attainable, so please help us get there!

It would be really cool to be able to release the next version of Krita for Linux, Windows and OSX, wouldn't it :-)

And now it's also possible to select your reward and use PayPal -- which Kickstarter still doesn't offer.

Reward Selection

June 30, 2014

WebODF v0.5.0 released: Highlights

Today, after a long period of hard work and preparation, having deemed the existing WebODF codebase stable enough for everyday use and for integration into other projects, we have tagged the v0.5.0 release and published an announcement on the project website.

Some of the features that this article will talk about have already made their way into various other projects a long time ago, most notably ownCloud Documents and ViewerJS. Such features will have been mentioned before in other posts, but this one talks about what is new since the last release.

The products that have been released as ‘supported’ are:

  • The WebODF library
  • A TextEditor component
  • Firefox extension

Just to recap, WebODF is a JavaScript library that lets you display and edit ODF files in the browser. There is no conversion of ODF to HTML. Since ODF is an XML-based format, you can directly render it in a browser, styled with CSS. This way, no information is lost in translation. Unlike other text editors, WebODF leaves your file structure completely intact.

The Editor Components

WebODF has had, for a long time, an Editor application. Until now this was not a feature ‘supported’ for the general public, but was simply available in the master branch of the git repo. We worked over the months with ownCloud to understand how such an editor would be integrated within a larger product, and then, based on our own experimentation for a couple of awesome new to-be-announced products, designed an API for it.

As a result, the new “Wodo” Editor Components are a family of APIs that let you embed an editor into your own application. The demo editor is a reference implementation that uses the Wodo.TextEditor component.

There are two major components in WebODF right now:

  1. Wodo.TextEditor provides for straightforward local-user text editing, by providing methods for opening and saving documents. The example implementation runs 100% client-side: you can open a local file directly in the editor without uploading it anywhere, edit it, and save it right back to the filesystem. No extra permissions required.
  2. Wodo.CollabTextEditor lets you specify a session backend that communicates with a server and relays operations. If your application wants collaborative editing, you would use this Editor API. The use-cases and implementation details being significantly more complex than the Wodo.TextEditor component, this is not a ‘supported’ part of the v0.5.0 release, but will, I’m sure, be in the next release(s) very soon. We are still figuring out the best possible API it could provide, while not tying it to any specific flavor of backend. There is a collabeditor example in WebODF master, which can work with an ownCloud-like HTTP request polling backend.

These provide options to configure the editor to switch on/off certain features.
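
As a rough sketch of what embedding the local-editing component looks like (the method names and options below are assumptions based on the description above, not a verified API reference):

// Hypothetical usage of the Wodo.TextEditor component:
Wodo.createTextEditor("editorContainer", {}, function (err, editor) {
    if (err) { window.console.log(err); return; }
    // Load a document into the editor; saving works along the same lines.
    editor.openDocumentFromUrl("welcome.odt", function () {});
});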

Of course, we wholeheartedly recommend that people play with both components, build great things, and give us lots of feedback and/or Pull Requests. :)

New features

Notable new features that WebODF now has include:

  • SVG Selections. It is impossible to have multiple selections in the same window in most modern browsers. This is an important requirement for collaborative editing, i.e., the ability to see other people’s selections in their respective authorship colors. For this, we had to implement our own text selection mechanism, without totally relying on browser-provided APIs.
    Selections are now smartly computed using dimensions of elements in a given text range, and are drawn as SVG polygon overlays, affording numerous ways to style them using CSS, including in author colors. :)
  • Touch support:
    • Pinch-to-zoom was a feature requested by ownCloud, and is now implemented in WebODF. This was fairly non-trivial to do, considering that no help from touch browsers’ native pinch/zoom/pan implementations could be taken because that would only operate on the whole window. With this release, the document canvas will transform with your pinch events.
    • Another important highlight is the implementation of touch selections, necessitated by the fact that native touch selections provided by the mobile versions of Safari, Firefox, and Chrome all behave differently and do not work well enough for tasks which require precision, like document editing. This is activated by long-pressing with a finger on a word, following which the word gets a selection with draggable handles at each end.
Touch selections

Drawing a selection on an iPad

  • More collaborative features. We added OT (Operation Transformations) for more new editing operations, and filled in all the gaps in the current OT Matrix. This means that previously there were some cases when certain pairs of simultaneous edits by different clients would lead to unpredictable outcomes and/or invalid convergence. This is now fixed, and all enabled operations transform correctly against each other (verified by lots of new unit tests). Newly enabled editing features in collaborative mode now include paragraph alignment and indent/outdent.

  • Input Method Editor (IME). Thanks to the persistent efforts of peitschie of QSR International, WebODF got IME support. Since WebODF text editing does not use any native text fields with the assistance of the browser, but listens for keystrokes and converts them into operations, it was necessary to implement support for it using JavaScript using Composition Events. This means that you can now do this:

Chinese – Pinyin (IBUS)

and type in your own language (IBUS is great at transliteration!)

Typing in Hindi

  • Benchmarking. Again thanks to peitschie, WebODF now has benchmarks for various important/frequent edit types.

  • Edit Controllers. Unlike the previous release, when the editor had to specifically generate various operations to perform edits, WebODF now provides certain classes called Controllers. A Controller provides methods to perform certain kinds of edit ‘actions’ that may be decomposed into a sequence of smaller ‘atomic’ collaborative operations. For example, the TextController interface provides a removeCurrentSelection method. If the selection is across several paragraphs, this method will decompose the edit into a complex sequence of 3 kinds of operations: RemoveText, MergeParagraph, and SetParagraphStyle. Having larger edits described by smaller operations is a great design, because then you only have to write OT for very simple operations, and complex edit actions all collaboratively resolve themselves to the same state on each client. The added benefit is that users of the library have a simpler API to deal with.

On that note…

We now have some very powerful operations available in WebODF. As a consequence, it should now be possible for new developers to rapidly implement new editing features, because the most significant OT infrastructure is already in place. Adding support for text/background coloring, subscript/superscript, etc. should simply be a matter of writing the relevant toolbar widgets. :) I expect to see some rapid growth in user-facing features from this point onwards.

A Qt Editor

Thanks to the new Components and Controllers APIs, it is now possible to write native editor applications that embed WebODF as a canvas, and provide the editor UI as native Qt widgets. And work on this has started! The NLnet Foundation has funded work on writing just such an editor that works with Blink, an amazing open source SIP communication client that is cross-platform and provides video/audio conferencing and chat.

To fulfill that, Arjen Hiemstra at KO has started work on a native editor using Qt widgets, that embeds WebODF and works with Blink! Operations will be relayed using XMPP.

Teaser:
blink-prototype

Other future tasks include:

  • Migrating the editor from Dojo widgets to the Closure Library, to allow more flexibility with styling and integration into larger applications.
  • Image manipulation operations.
  • OT for annotations and hyperlinks.
  • A split-screen collaborative editing demo for easy testing.
  • Pagination support.
  • Operations to manipulate tables.
  • Liberating users from Google’s claws… er, cloud. :)

If you like a challenge and would like to make a difference, have a go at WebODF. :)


Krita Kickstarter


I know that I primarily write about photography here, but sometimes something comes along that’s too important to pass up talking about.

Krita just happens to be one of those things. Krita is digital painting and sketching software, by artists for artists. While I love GIMP and have seen some incredible work by talented artists using it for painting and sketching, sometimes it’s better to use a dedicated tool for the job. This is where Krita really shines.

The reason I’m writing about Krita today is that they are looking for community support to accelerate development through their Kickstarter campaign.


That is where you come in. It doesn’t take much to make a difference in great software, and every little bit helps. If you can skip a fancy coffee, pastry, or one drink while out this month, consider using the money you saved to help a great project instead!

There are only 9 days left in their Kickstarter, and they are less than €800 away from their goal of €15,000!


Metamorphosis by Enrico Guarnieri

Of course, the team makes it hard to keep up with them. They seem to be rapidly implementing goals in their Kickstarter before they even get funding. For instance, their “super-stretch” goal was to get an OSX implementation of Krita running. Then this shows up in my feed this morning. A prototype already!

I am in constant awe of the talent and results from digital artists, and this is a great tool to help them produce amazing works. As a photographer I am deeply indebted to those who helped support GIMP development over the years, and if we all pull together maybe we can be the ones whom future Krita users thank for helping them get access to a great program...

Skip a few fancy coffee drinks, possibly inspire a future artist? Count me in!




Still here? Ok, how about some cool video tutorials by Ramón Miranda to help support the campaign?

If you still need more, check out his YouTube channel.

Ok, that's enough out of me.

Go Donate!

Last week in Krita — weeks 25 & 26

These last two weeks have been very exciting, with the Kickstarter campaign getting closer and closer to the pledge objective. At the time of writing we have just crossed 13k! And with the wave of new users, drawn by the great word-spreading labor of collaborators and enthusiasts, we have been very busy bringing in new functions and building beta versions for you.

And now there's also the first public build for OSX:

http://www.valdyas.org/~boud/krita_osx_2.8.79.0.tgz

It is just a prototype, with stuff missing and rather detailed instructions on getting it to run... But if you've always wanted to have Krita on OSX, this is your chance to help us make it happen!

Before getting into the new hot stuff in the code, I can’t go without mentioning the useful videos from Ramon Miranda. Aiming to spread knowledge of Krita’s features and capabilities as painting software among those hearing about it for the first time, he has created a short series of video tips: short video introductions to many functions and fundamentals. Even for the initiated these are a good resource; I wasn’t aware of some of the functions shown in the videos. All the tips and info are in the Kickstarter post and on Ramon’s YouTube channel.

Week 25 & 26 progress

Among the notable changes and developments, we can cite Boudewijn’s efforts to create a build environment that will eventually allow the creation of an alpha version for OSX users. Still in the experimental phase, the current state shows steady progress: it’s now possible to open the program and do some painting. Of course this is far from being a version to distribute, but if we remember the humble Windows beginnings, this is a great sign. Go Krita!

In other news, Somsubhra, developer of the Krita Animation spin, has added, aside from many bug fixes and tweaks, a first rough animation player. I wanted to make a short video for you, but the build is still very fragile, and on my system it crashed after creating the document. You can see the player in action in a video made by Somsubhra:

[YouTube video: VEHJ-JIunII]

This week’s new features:

  • Implemented “Delayed Stroke” feature for brush smoothing. (Dmitry Kazakov)
  • Edit Selection Mask. (Dmitry Kazakov)
  • Add import/export for r8 and r16 heightmaps, extensions .r8 and .r16. (Boudewijn Rempt)
  • Add ability to zoom and sync for resource item choosers (Ctrl + Wheel). (Sven Langkamp)
  • Brush stabilizer by Juan Luis Boya García. (Juan Luis Boya García)
  • Allow activation of the Isolated Mode with Alt+click on a layer. (Dmitry Kazakov)

This week’s main Bug fixes

  • Make ABR brush loading code more robust. (Boudewijn Rempt)
  • FIX #319279: Drop the full brush image after loading it to save memory. (Boudewijn Rempt)
  • Enable the vector shape in Krita. This makes it possible to show embedded SVG images. (Boudewijn Rempt)
  • CCBUG #333451: Add basic svg support to the vector shape. (Boudewijn Rempt)
  • FIX #335041: Fix crash when installing a bundle. (Boudewijn Rempt)
  • FIX #33592: Fix saving the lock status. (Boudewijn Rempt)
  • Fix crash when trying to paint in scratchpad. (Dmitry Kazakov)
  • Don’t crash if deleting the last layer. (Boudewijn Rempt)
  • FIX #336470: Fix Lens Blur filter artifacts when used as an Adjustment Layer. (Dmitry Kazakov)
  • FIX #334538: Fix anisotropy in Color Smudge brush engine. (Dmitry Kazakov)
  • FIX #336478: Fix convert of clone layers into paint layers and selection masks. (Dmitry Kazakov)
  • CCBUG #285420: Multilayer selection: implement drag & drop of multiple layers. (Boudewijn Rempt)
  • FIX #336476: Fix edge duplication in Clone tool. (Dmitry Kazakov)
  • FIX #336473: Fixed crash when color picking from a group layer. (Dmitry Kazakov)
  • FIX #336115: Fixed painting and color picking on global selections. (Dmitry Kazakov)
  • Fixed moving of the global selection with Move Tool. (Dmitry Kazakov)
  • CCBUG #285420: Add an action to merge the selected layers. (Boudewijn Rempt)
  • CCBUG #285420: Layerbox multi-selection. Make it possible to delete multiple layers. (Boudewijn Rempt)
  • FIX #336804: (Boudewijn Rempt)
  • FIX #336803: (Boudewijn Rempt)
  • FIX #330479: Fix memory leak in KoLcmsColorTransformation. (Boudewijn Rempt)
  • And many code optimizations, memory leak patching, spelling and translation updates and other fixes.

Delay stroke and brush stabilizer

A new way of creating smooth, controlled lines. The new Stabilizer smoothing mode works using both the distance and the speed of the stroke. It has 3 important options, which can be described as follows:

  • Distance: The smaller the distance, the weaker the stabilization force.
  • Delay: When activated, this adds a halo around the cursor defining a “dead zone”: no stroke is made while the cursor stays inside it. Very useful when you need to create a controlled line with explicit angles in it. The pixel value defines the size of the halo.
  • Finish line: If switched off, rendering stops where the line was when the pen was lifted. Otherwise, it draws the missing gap between the last rendered position and the cursor’s final location.

Multiselection

Developers have been working to re-implement working on multiple layers. This time they have made it possible to select more than one layer for reorganizing the stack, and for merge and delete actions. After selecting multiple layers you can:

  • Drag and drop from one location to another
  • Drag and drop layers inside a group
  • Click the erase layer button to remove all selected layers
  • Go to “Layer -> Merge selected layers” to merge.

This first implementation allows a much faster workflow when dealing with many layers. However, it is still necessary to use groups for some actions, like transform, on multiple layers.

NEW: Layer -> Merge selected layers

Edit selection mask (global selection)

To activate it, go to the Selection menu and turn on “Show Global Selection Mask”.

When activated, all global selections will appear in the layer stack just as local selections do. You can deactivate the selection, hide it, or edit it using any available tool, like transform, brushes or filters.

At the moment it is not possible to preview the effect of every tool on the selection, but you can convert the selection to a paint layer to make finer adjustments.

NEW: Selection -> Show Global Selection Mask

Alt + click to isolate a layer

Added a new action to toggle isolated layer mode.

NEW: [Alt] + [Click] over a layer in the layer docker.

This action instantly shows the selected layer, hiding all others. It returns to normal mode once another layer is selected, but while isolated layer mode is on it’s possible to paint, transform and adjust while seeing only the isolated layer.

June 26, 2014

A Raspberry Pi Night Vision Camera

[Mouse caught on IR camera]

When I built my Raspberry Pi motion camera (http://shallowsky.com/blog/hardware/raspberry-pi-motion-camera.html, and part 2), I always had the NoIR camera in the back of my mind. The NoIR is a version of the Pi camera module with the infra-red blocking filter removed, so you can shoot IR photos at night without disturbing nocturnal wildlife (or alerting nocturnal burglars, if that's your target).

After I got the daylight version of the camera working, I ordered a NoIR camera module and plugged it in to my RPi. I snapped some daylight photos with raspistill and verified that it was connected and working; then I waited for nightfall.

In the dark, I set up the camera and put my cup of hot chocolate in front of it. Nothing. I hadn't realized that although CCD cameras are sensitive in the near IR, the wavelengths only slightly longer than visible light, they aren't sensitive anywhere near the IR wavelengths that hot objects emit. For that, you need a special thermal camera. For a near-IR CCD camera like the Pi NoIR, you need an IR light source.

Knowing nothing about IR light sources, I did a search and came up with something called a "Infrared IR 12 Led Illuminator Board Plate for CCTV Security CCD Camera" for about $5. It seemed similar to the light sources used on a few pages I'd found for home-made night vision cameras, so I ordered it. Then I waited, because I stupidly didn't notice until a week and a half later that it was coming from China and wouldn't arrive for three weeks. Always check the shipping time when ordering hardware!

When it finally arrived, it had a tiny 2-pin connector that I couldn't match locally. In the end I bought a package of female-female SchmartBoard jumpers at Radio Shack which were small enough to make decent contact on the light's tiny-gauge power and ground pins. I soldered up a connector that would let me use a universal power supply, taking a guess that it wanted 12 volts (most of the cheap LED rings for CCD cameras seem to be 12V, though this one came with no documentation at all). I was ready to test.

Testing the IR light

[IR light and NoIR Pi camera]

One problem with buying a cheap IR light with no documentation: how do you tell whether your power supply is working, when the light it emits is completely invisible?

The only way to find out was to check on the Pi. I didn't want to have to run back and forth between the dark room where the camera was set up and the desktop where I was viewing raspistill images. So I started a video stream on the RPi:

$ raspivid -o - -t 9999999 -w 800 -h 600 | cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/}' :demux=h264

Then, on the desktop, I ran vlc and opened the network stream:

rtsp://pi:8554/

(I have a "pi" entry in /etc/hosts, but using an IP address also works.)

Now I could fiddle with hardware in the dark room while looking through the doorway at the video output on my monitor.

It took some fiddling to get a good connection on that tiny connector ... but eventually I got a black-and-white view of my darkened room, just as I'd expect under IR illumination. I poked some holes in the milk carton and used twist-ties to secure the light source next to the NoIR camera.

Lights, camera, action

Next problem: mute all the blinkenlights, so my camera wouldn't look like a Christmas tree and scare off the nocturnal critters.

The Pi itself has a relatively dim red run light, and it's inside the milk carton so I wasn't too worried about it. But the Pi camera has quite a bright red light that goes on whenever the camera is being used. Even through the thick milk carton bottom, it was glaring and obvious. Fortunately, you can disable the Pi camera light: edit /boot/config.txt and add this line:

disable_camera_led=1

My USB wi-fi dongle has a blue light that flickers as it gets traffic. Not super bright, but attention-grabbing. I addressed that issue with a triple thickness of duct tape.

The IR LEDs -- remember those invisible, impossible-to-test LEDs? Well, it turns out that in darkness, they emit a faint but still easily visible glow. Obviously there's nothing I can do about that -- I can't cover the camera's only light source! But it's quite dim, so with any luck it's not spooking away too many animals.

Results, and problems

For most of my daytime testing I'd used a threshold of 30 -- meaning a pixel was considered to have changed if its value differed by more than 30 from the previous photo. That didn't work at all in IR: changes are much more subtle since we're seeing essentially a black-and-white image, and I had to divide by three and use a sensitivity of 10 or 11 if I wanted the camera to trigger at all.
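
In pixel-difference terms, the test boils down to counting pixels whose brightness changed by more than a threshold; here is a rough sketch of the idea using PIL (illustrative only, not the actual motion_detect.py):

import PIL.Image
import PIL.ImageChops

def frame_changed(prev, cur, threshold=10, min_changed_pct=1.0):
    # Count pixels whose grayscale value differs by more than 'threshold'
    diff = PIL.ImageChops.difference(prev.convert("L"), cur.convert("L"))
    hist = diff.histogram()                  # 256 bins for an "L" image
    changed = sum(hist[threshold + 1:])
    total = cur.size[0] * cur.size[1]
    return changed * 100.0 / total >= min_changed_pct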

With that change, I did capture some nocturnal visitors, and some early morning ones too. Note the funny colors on the daylight shots: that's why cameras generally have IR-blocking filters if they're not specifically intended for night shots.

[mouse] [rabbit] [rock squirrel] [house finch]

Here are more photos, and larger versions of those: Images from my night-vision camera tests.

But I'm not happy with the setup. For one thing, it has far too many false positives. Maybe one out of ten or fifteen images actually has an animal in it; the rest just triggered because the wind made the leaves blow, or because a shadow moved or the color of the light changed. A simple count of differing pixels is clearly not enough for this task.

Of course, the software could be smarter about things: it could try to identify large blobs that had changed, rather than small changes (blowing leaves) all over the image. I already know SimpleCV runs fine on the Raspberry Pi, and I could try using it to do object detection.

But there's another problem with detection purely through camera images: the Pi is incredibly slow to capture an image. It takes around 20 seconds per cycle; some of that is waiting for the network but I think most of it is the Pi talking to the camera. With quick-moving animals, the animal may well be gone by the time the system has noticed a change. I've caught several images of animal tails disappearing out of the frame, including a quail who visited yesterday morning. Adding smarts like SimpleCV will only make that problem worse.

So I'm going to try another solution: hooking up an infra-red motion detector. I'm already working on setting up tests for that, and should have a report soon. Meanwhile, pure image-based motion detection has been an interesting experiment.

June 25, 2014

Firewalls and per-network sharing

Firewalls

Fedora has had problems for a long while with the default firewall rules. They would make a lot of things not work (media and file sharing of various sorts, usually, whether as a client or a server) and users would usually disable the firewall altogether, or work around it through micro-management of opened ports.

We went through multiple discussions over the years trying to break the security folks' resolve on what should be allowed to be exposed on the local network (sometimes trying to get rid of the firewall). Or rather we tried to agree on a setup that would be implementable for desktop developers and usable for users, while still providing the amount of security and dependability that the security folks wanted.

The last round of discussions was more productive, and I posted the end plan on the Fedora Desktop mailing-list.

By Fedora 21, Fedora will have a firewall that's completely open for the user's applications (with better tracking of what applications do what once we have application sandboxing). This reflects how the firewall was used on the systems that the Fedora Workstation version targets. System services will still be blocked by default, except a select few such as ssh or mDNS, which might need some tightening.

But this change means that you'd be sharing your music through DLNA on the café's Wi-Fi, right? Well, this is what the next change is here to avoid.

Per-network Sharing

To avoid showing your music in the café, or exposing your holiday photographs at work, we needed a way to restrict sharing to wireless networks where you'd already shared this data, and to provide a way to stop sharing in the future, should you change your mind.

Allan Day mocked up such controls in our Sharing panel, which I diligently implemented. Personal File Sharing (through gnome-user-share and WebDAV), Media Sharing (through rygel and DLNA) and Screen Sharing (through vino and VNC) all implement the same per-network sharing mechanism.

Make sure that your versions of gnome-settings-daemon (which implements the starting/stopping of services based on the network) and gnome-control-center match for this all to work. You'll also need the latest version of all 3 of the aforementioned sharing utilities.

(and it also works with wired network profiles :)



June 24, 2014

Fedora.next Branding Update

So we’ve gone through a lot of iterations of the Fedora.next logo design based on your feedback; here’s the full list of designs and mockups that Ryan, Sarup, and I have posted:

That’s a lot of work, a lot of feedback, and a lot of iteration. The dust has settled over the past 2 weeks and I think from the feedback we’ve gotten that there is a pretty clear winner that we should move forward with:

Let’s walk through some points about this here:

  • F/G and H should, I think, both be valid logo treatments. F/G is good for contexts in which it’s clear we’re talking about Fedora (e.g., a Fedora website with a big Fedora logo in the corner), and H is good for contexts in which we need to promote Fedora as well (e.g., a conference T-shirt with other distro logos on it).
  • Single-color versions of F/G & H are of course completely fine to use as well.
  • F/G are exactly the same except the texture underneath is shifted and scaled a bit. I think it should be okay to play with the texture and change it up. We can talk about this, though.
  • Feedback seemed a bit divided about the cloud mark – it was about 50/50, folks liking it full height on all three bars vs. liking it with some of the bars shorter so it looked like a stylized cloud. I think we should go with the full-height version since it’s a stronger mark (it’s bolder and stands out more) and these are clearly all abstract marks, anyway.
  • Several folks suggested trying to replace the circles in version H with the Fedora speech bubble. I did play around with this, and Ryan and I both agreed that the speech bubble shape complicates things – it makes the marks inside look off-center when they are centered, and it also creates some awkward situations when the entire logo has to interact with other elements on a page or screen, so we thought it’d be better to keep things simple and stick with a simpler shape like a circle.
  • We’ll definitely build some official textures using the pattern in F/G and make them available so you can use them! Ryan has a very cool Inkscape technique for creating these so I’m still hoping to make a screencast showing how to do it.
  • Did I forget a particular point you brought up and would like some more discussion about? Let me know.

We’ll definitely need some logo usage guidelines written up and we’ll have to create a supplemental logo pack that can be dispensed via the logo at fedoraproject.org logo queue. Those things aren’t quite ready yet – if you want to help with that, let us know at the Fedora design team list or here in the comments.

Anyway, thanks for watching and participating in this process. It’s always a lot of fun to work on designs in the open with everyone like this :)

June 23, 2014

2.5D Parallax Animated Photo Tutorial (using Free Software)

I had been fiddling with creating these 2.5D parallax animated photos for quite a few years now, but there had recently been a neat post by Joe Fellows that brought it into the light again.

The reason I had originally played with the idea is part of a long, sad story involving my wedding and an out-of-focus camcorder that resulted in my not having any usable video of my wedding (in 2008). I did have all of the photographs, though. So as a present to my wife, I was going to re-create the wedding with these animated photos (I’m 99% sure she doesn't ever read my blog - so if anyone knows her don’t say anything! I can still make it a surprise!).

The rest of my GIMP tutorials can be found here:
Getting Around in GIMP

So I had been dabbling with creating these in my spare time over a few years, and it was really neat to see the work done by Joe Fellows for the World Wildlife Fund. Here is that video:


He followed that up with a great video walking through how he does it:



I'm writing here today to walk through the methods I had been using for a while to create the same effect, but entirely with Free/Open Source Software...




Open Source Software Method

Using nothing but Free/Open Source Software, I was able to produce the same effect here:



Joe uses Adobe software to create his animations (Photoshop & After Effects). I neither have, nor want, Photoshop or After Effects.

What I do have is GIMP and Blender!

Blender Logo + GIMP Logo = Heart Icon

What I also don’t have (but would like) is access to the World Wildlife Fund photo archive. Large, good photographs make a huge difference in the final results you’ll see.

What I do have access to are some great old photographs of Ziegfeld Follies Girls. For the purposes of this tutorial we’ll use this one:

Pat David Ziegfeld Follies Woman Reclining
Click here to download the full size image.

This is a long post.
It’s long because I’ve written hopefully detailed enough steps that a completely new user of Blender can pick it up and get something working. For more experienced users, I'm sorry for the length.

As a consolation prize, I’ve linked to my final .blend file just below if anyone wants to download it and see what I finally ended up with at the end of the tutorial. Enjoy!

Here’s an outline of my steps if it helps...

  1. Pick a good image: find something with good fore/middleground and background separation (and clean edges).
  2. Think in planes: pay attention to how you can cut up the image into planes.
  3. Into GIMP
    1. Isolate woman as new layer: mask out everything except the subject you want.
    2. Rebuild background as separate layer (automatically or manually): rebuild the background to exclude your subject.
    3. Export isolated woman and background layer: export each layer as its own image (keep alpha transparency).
  4. Into Blender
    1. Enable “Import Images as Planes” Addon: enable this ridiculously helpful Addon.
    2. Import Images as Planes: import your image as separate planes using the Addon.
    3. Basic environment setup: some Blender basics, and set viewport shade mode to “Texture”.
    4. Add some depth: push background image/plane away from camera and scale to give depth.
    5. Animate woman image mesh: subdivide subject plane a bunch, then add Shape Keys and modify.
    6. Animate camera: animate camera position throughout timeline as wanted.
    7. Animate mesh: set keyframes for Shape Keys through timeline.
    8. Render

File downloads:
Download the .blend file [Google Drive]
These files are being made available under a Creative Commons Attribution, Non-Commercial, Share Alike license (CC BY-NC-SA).
You're free to use them, modify them, and share them as long as you attribute me, Pat David, as the originator of the file.

Consider the Source Material

What you probably want to look for if you are just starting with these are images with a good separation between a fore/middle ground subject and the background. This will make your first attempts a bit easier until you get the hang of what’s going on. Even better if there are mostly sharp edges differentiating your subject from the background (to help during masking/cutting).

You’ll also want an image bigger than your rendering size (for instance, mine are usually rendered at 1280×720). This is because you want to avoid blowing up your pixels when rendering if possible. This will make more sense later, but for now just try to use source material that’s larger than your intended render size.

Thinking in Planes

The trick to pre-visualizing these is to consider slicing up your image into separate planes. For instance, with our working image, I can see immediately that it’s relatively simple. There is the background plane, and one with the woman/box:

Pat David Ziegfeld Follies Woman Reclining Plane Example
Simple 2-plane visualization of the image.

This is actually all I did in my version of this in the video. This is nice because for the most part the edges are all relatively clean as well (making the job of masking an easier one).

One of my previous tests showed this idea of planes a little more clearly:

Yes, that’s my daughter at the grave of H.P. Lovecraft with a flower.

So we’ve visualized a simple plan - isolate the woman and platform from the background. Great!

Into GIMP!

So I will simply open the base image in GIMP, do my masking, and export each of the individual image planes. Remember, when we’re done, we want to have 2 images, the background and the woman/platform (with alpha transparency):

Pat David Ziegfeld Follies Woman Reclining Background Clean
What my final cleaned up backplate should look like.
(click here to download the full size)

Pat David Ziegfeld Follies Woman Reclining Clean Transparent
My isolated woman/platform image.
(click here to download the full size)

Get Ready to Work

Once in GIMP I will usually duplicate the base image layer a couple of times (this way I have the original untouched image at the bottom of the layer stack in case I need it or screw up too badly). The top-most layer is the one I will be masking the woman from. The middle layer will become my new background plate.

Isolating the Woman

To isolate the woman, I’ll need to add a Layer Mask to the top-most layer (if you aren’t familiar with Layer Masks, then go back and read my previous post on them to brush up).

I initialize my layer mask to White (full opacity). Now, anywhere I paint black on my layer mask will become transparent on this layer. I also usually turn off the visibility of all the other layers when I am working (so I can see what I’m doing - otherwise the layers beneath would show through and I wouldn’t know where I was working). This is what my layer dialog looks like at this point:


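If you prefer to script this setup, the same layer preparation can be done from GIMP’s Python-Fu console. This isn’t part of my normal workflow, just a sketch assuming GIMP 2.8’s Python-Fu API; the layer names are illustrative.

# Python-Fu sketch (GIMP 2.8, assumed API): duplicate the base layer
# twice and add a white (full-opacity) mask to the top copy.
from gimpfu import *   # already implied in the Python-Fu console

image = gimp.image_list()[0]        # most recently opened image
base = image.layers[-1]             # bottom layer: untouched original

background = base.copy()            # middle layer: future clean backplate
background.name = "Background"
image.add_layer(background, 0)

woman = base.copy()                 # top layer: gets the mask
woman.name = "Woman"
image.add_layer(woman, 0)

mask = woman.create_mask(ADD_WHITE_MASK)  # white = fully opaque
woman.add_mask(mask)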
Masking the Woman

Some of these headings are beginning to sound like book titles (possibly romance?) “Isolating the Woman”, “Masking the Woman”...

There are a few different ways you can proceed at this point to isolate the woman. Really it depends on what you’re most comfortable with. One way is to use Paths to trace out a path around the subject. Another way is to paint directly on the Layer Mask.

All of them suck.

Sorry. There is no fast and easy method of doing this well. This is also one of the most important elements to getting a good result, so don’t cheap out now. Take your time and pull a nice clean mask, whatever method you choose.

For this tutorial, we can just paint directly onto our Layer Mask. Check to make sure the Layer Mask is active (it will have a white border around it that you won't be able to see because the mask is white) in the Layer palette, and make sure your foreground color is set to Black. Then it’s just a matter of choosing a paintbrush you like and starting to paint around your subject.

I tend to use a simple round brush with a 75% hardness. I'll usually start painting, then take advantage of the Shift key modifier to draw straight lines along my edges. For finer details I'll drop down to a really small brush, and stay a bit larger for easier areas.

To illustrate, here’s a 3X speedrun of me pulling a quick mask of our image:



To erase the regions that are left, I'll usually use the Fuzzy Select Tool, grow the selection by a few pixels, and then Bucket Fill that region with black to make it transparent (you can see me doing it at about 2:13 in the video).

Now I have a layer with the woman isolated from the background. I can just select that layer and export it to a .PNG file to retain transparency.

File → Export

Name the file with a .png extension, and make sure that the “Save color values from transparent pixels” is checked to save the alpha transparency.
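(For the script-minded: the export can be scripted too. A small sketch, assuming GIMP 2.8’s procedure database; the path is illustrative. Applying the mask first is one way to bake the transparency into the layer before saving.)

# Python-Fu sketch: apply the layer mask, then export the layer as PNG
# (PNG keeps the alpha channel; gimp_file_save picks the format from
# the file extension).
image = gimp.image_list()[0]
layer = image.active_layer
pdb.gimp_layer_remove_mask(layer, MASK_APPLY)   # bake the mask into alpha
pdb.gimp_file_save(image, layer, "/tmp/woman.png", "woman.png")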

Rebuilding the Background

Now that you have the isolated woman as an image, it’s time to remove her and the platform from the background image to get a clean backplate. There are a couple of ways to go about this: the automated way or the manual way.

Automated Background Rebuild

The automated way is to use an Inpainting algorithm to do the work for you. I had previously written about using the new G'MIC patch-based Inpainting algorithm, and it does a pretty decent job on this image. If you want to try this method you should first read up about using it here (and have G'MIC installed of course).

To use it in this case was simple. I had already masked out the woman with a layer mask, so all I had to do was Right-Click on the layer mask, and choose “Mask to Selection” from the menu.


Then just turn on the visibility of my “Background” layer (and toggle the visibility of the isolated woman layer off) and activate my “Background” layer by clicking on it.

Then I would grow the selection by a few pixels:

Select → Grow

I grew it by about 4 pixels, then sharpened the selection to remove anti-aliasing:

Select → Sharpen

Finally, I make sure my foreground color is pure red (255, 0, 0), and bucket fill that selection. Now I can just run the G'MIC Inpainting [patch-based] filter against it to Inpaint the region:

Filters → G'MIC
Repair → Inpaint [patch-based]

Let it run for a while (it’s intensive), and in the end my layer now looks like this:


Not bad at all, and certainly usable for our purposes!

If I don’t want to use it as is, it’s certainly a better starting point for doing some selective healing with the Heal Tool to clean it up.

Manual Background Rebuild

The manual way is exactly as it sounds. We basically want to erase the woman and platform from the image to produce a clean background plate. For this I would normally just use a large radius Clone Tool for mostly filling in areas, and then the Heal Tool for cleaning it up to look smoother and more natural.


It doesn't have to be 100% perfect, remember. It only needs to look good just behind the edges of your foreground subjects (assuming the parallax isn’t too extreme). Not to mention one of the nice things about this workflow is that it’s relatively trivial later to make modifications and push them into Blender.

Rinse & Repeat

For this tutorial we are now done. We’ve got a clean backplate and an isolated subject that we will be animating. If you wanted to get a little more complex just continue the process starting with the next layer closer to the camera. An example of this is the last girl in my video, where I had separated her from the background, and then her forearm from her body. In that case I had to rebuild the image of her chest that was behind her forearm to account for the animation.


Example of a three-layer separation (notice the rebuilt dress texture)

Into Blender

Now that we have our source material, it’s time to build some planes. Actually, this part is trivially easy thanks to the Import Images as Planes Blender addon.

The key to this addon is that it will automatically import an image into Blender, and assign it to a plane with the correct aspect ratio.

Enable Import Images as Planes

This addon is not enabled by default (at least in my Blender), so we just need to enable it. You can access all of the addons by first going to User Preferences in Blender:


Then into the Addons window:


I find it faster to get what I need by searching in this window for “images”:


To enable the Addon, just check the small box in the upper right corner of the Addon. Now you can go back into the 3D View.

Back in the 3D View, you can also select the default cube and lamp (Shift - Right Click), and delete them (X key). (Selected objects will have an orange outline highlighting them).

Import Images as Planes

We can now bring in the images we exported from GIMP earlier. The import option is available in:

File → Import → Images as Planes


At this point you’ll be able to navigate to the location of your images and can select them for import (Shift-Click to select multiple):


Before you do the import though, have a look at the options that are presented to you (bottom of the left panel). We need to turn on a couple of options to make things work how we want:


For the Import Options we want to Un-Check the option to “Align Planes”. This will import all of the image planes already stacked with each other in the same location.

Under Material Settings we want to Check both Shadeless and Use Alpha so our image planes will not be affected by lamps and will use the transparency that is already there. We also want to make sure that Z Transparency is pressed.

Everything else can stay at its default settings.

Go ahead and hit the “Import Images as Planes” button now.
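If you’re curious what the addon is doing for you, here’s a rough sketch of the same thing in Blender’s Python console. This assumes the 2.7x-era Blender Internal API (Shadeless, Z Transparency, and so on) and is not the addon’s actual code; the path and names are illustrative.

# Sketch: load an image, make a plane with the right aspect ratio,
# and give it a shadeless, alpha-enabled material.
import bpy

img = bpy.data.images.load("/tmp/woman.png")
w, h = img.size

bpy.ops.mesh.primitive_plane_add()
plane = bpy.context.object
plane.scale = (w / float(h), 1.0, 1.0)   # match the image aspect ratio

tex = bpy.data.textures.new("WomanTex", type='IMAGE')
tex.image = img

mat = bpy.data.materials.new("WomanMat")
mat.use_shadeless = True                 # like the "Shadeless" option
mat.use_transparency = True              # honor the PNG alpha
mat.transparency_method = 'Z_TRANSPARENCY'
mat.alpha = 0.0                          # let the texture drive the alpha
slot = mat.texture_slots.add()
slot.texture = tex
slot.use_map_alpha = True

plane.data.materials.append(mat)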

Some Basic Setup Stuff

At this point things may look less than interesting. We’re getting there. First we need to cover just a few basic things about getting around in Blender for those that might be new to it.

In the 3D window, your MouseWheel controls the zoom level, and your Middle Mouse button controls the orbit. Right-Click selects objects, and Left-Click will place the cursor. Shift-Middle Click will allow you to pan.

At this point your image planes should already be located in the center of the view. Go ahead and roll your MouseWheel to zoom into the planes a bit more. You should notice that they just look like boring gray planes:


I thought you said we were importing images?!

To see what we’re doing in 3D View, we’ll need to get Blender to show the textures. This is easily accomplished in the toolbar for this view by changing the Viewport Shading:


Now that’s more like it!

At this point I personally like to get my camera to an initial setup as well, so zoom back out and Right-Click on your camera:


We want to reset all of the default camera transformations and rotations by setting those values to 0 (zero). This will place your camera at the origin facing down.

Now change your view to Camera View (looking through the actual camera) by hitting zero (0) on your keyboard numberpad (not 0 along the top of your alpha keys).


Yes, this zero, not the other one!

You’ll be staring at a blank gray viewport at this point. All we have to do now is move the camera back (along the Z-axis), until we can see our working area. I like to use the default Blender grid as a rough approximation of my working area.

To pull the camera back, hit the G key (this will move the active object), and then press the Z key (this will constrain movement along the Z-axis). Slowly pull your mouse cursor away from the center of the screen, and you should see the camera view getting further away from your image planes. As I said, I like to use the default grid as a rough approximation, so I’ll zoom out until I am just viewing the width of the grid:


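(The same camera setup can be done in one go from Blender’s Python console; a small sketch, with an illustrative distance.)

import bpy

# Zero the camera's rotation so it faces straight down the -Z axis,
# then pull it back along +Z until the planes are in view.
cam = bpy.data.objects["Camera"]
cam.rotation_euler = (0.0, 0.0, 0.0)
cam.location = (0.0, 0.0, 10.0)   # 10 units is an arbitrary starting point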
I’ve also found that working at small scales is a little tough, so I like to scale my image planes up to roughly match my camera view/grid. We can select all the image planes in the center of our view by pressing the B key and dragging a rectangle over them.

To scale them, press the S key and move the mouse cursor away from the center again. Adjust until the images just fill the camera view:


Image planes scaled up to just fit the camera/grid

This will make the adjustments a little easier to do. Now we’re ready to start fiddling with things!

Adding Some Depth

What we have now is all of our image planes in the exact same location. What we want to do is to offset the background image further away from the camera view (and the other planes).

Right-click on your image planes. If you click multiple times you will cycle through each object under your cursor (in this case between the background/woman image planes). With your background image plane selected, hit the G key to move it, and the Z key again to constrain movement along the Z-axis. (If you find that you’ve accidentally selected the woman image plane, just hit the ESC key to escape out of the action).

This time you’ll want to move the mouse cursor towards the center of the viewport to push the background further back in depth. Here’s where I moved mine to:


We also need to scale that image plane back up so that its apparent size is similar to what it was before we pushed it back in depth. With the background image plane still selected, hit the S key and pull the mouse away from the center again to scale it up. Make it around the same size as it was before (a little bigger than the width of the camera viewport):


Keep in mind that the further back the background plane is, the more pronounced the parallax effect will be. Use a relatively light touch here to maintain a realistic sense of depth.
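The scale-up step has a simple geometric rule behind it, if you’d rather compute it than eyeball it: by similar triangles, a plane pushed from distance d to distance k·d needs to be scaled by k to keep the same apparent size. A tiny sketch:

# If a plane sits at old_dist from the camera and is pushed back to
# new_dist, scaling it by new_dist / old_dist preserves its apparent size.
def compensating_scale(old_dist, new_dist):
    return new_dist / old_dist

print(compensating_scale(10.0, 25.0))   # pushed 2.5x further -> scale by 2.5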

What’s neat at this point is that if we were not going to animate any of the image planes themselves, we would be about done. For example, if you select the camera again (Right-click on the camera viewport border) you can hit the G key and move the camera slightly. You should be able to clearly see the parallax effect of the background being behind the woman.




Animating the Image Plane

After Effects has a neat tool called “Puppet Tool” that allowed Joe to easily deform his image to appear animated. We don’t have such a tool exactly in Blender at the moment, but it’s trivial to emulate the effects on the image plane using Shape Keys.

What Shape Keys do is simple. You take a base mesh, add a Shape Key, and then deform the mesh in any way you’d like. Then you can animate the Shape Key deformation of the mesh over time. Multiple Shape Keys will blend together.

We are going to use this function to animate our woman (as opposed to some much more complex animation abilities in Blender).

Before we can deform the woman image plane, though, we need a good mesh to deform. At the moment the woman plane contains only 4 vertices in the mesh. We are going to make this much denser before we do anything else.

We want to subdivide the image plane with the woman. So Right-click to select the woman image plane. Then hit the Tab key to change into edit mode. All of the vertices should already be active (selected); they will all be highlighted if they are (if not, hit the A key to toggle selection of all vertices until they are):


What we want to do is to Subdivide the mesh until we get a desired level of density. With all of the vertices in the plane selected, hit the W key and choose Subdivide from the menu. Repeat until the mesh is sufficiently dense for you. In my case, I subdivided six times and the result looks like this:


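(Scripted equivalent, for reference: a sketch assuming the woman plane is the active object.)

import bpy

# Enter Edit mode, select everything, and subdivide six times,
# just like W -> Subdivide above.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
for _ in range(6):
    bpy.ops.mesh.subdivide()
bpy.ops.object.mode_set(mode='OBJECT')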
If you’ve got a touch of OCD in you, you might want to remove the unused vertices in the mesh. This is not necessary, but might make things a bit cleaner to look at. To remove those vertices, first hit the A key to de-select all the vertices. Then hit the C key to circle-select. You should see a circle select region where your mouse is. You can increase/decrease the size of the circle using your MouseWheel. Just click now on areas that are NOT your image to select those vertices:


Select all the vertices in a rough outline around your image, and press the X key to invoke the Delete menu. You can just choose Vertices from this menu. You should be left with a simpler mesh containing only your woman image. Hit the Tab key again to exit Edit mode.

Here is what things look like at the moment:


To clear a little space while I work, I am going to hide the Transform and Object Tools palettes from my view. They can be toggled on/off by pressing the N key and T key respectively.

I am also going to increase the size of the Properties panel on the right. This can be done by clicking and dragging on its edge (the cursor will change to a resize cursor):


We will want to change the Properties panel to show the Object Data for the woman image plane. Click on that icon to show the Object Data panel. You will see the entry for Shape Keys in this panel.

We want to add a new Shape Key to this mesh, so press the “+” button two times to add two new keys to this mesh (one key will be the basis, or default position, while the other will be for the deformation we want). After doing this, you should see this in the Shape Keys panel:


Now, the next time we are in Edit mode for this mesh, it will be assigned to this particular Shape Key. We can just start editing vertices by hand now if we want, but there’s a couple of things we can do to really make things much easier.
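(For reference, those two “+” presses are equivalent to this little Python sketch, assuming the woman plane is the active object; the key names are Blender’s defaults:)

import bpy

obj = bpy.context.object
obj.shape_key_add(name="Basis")        # default, undeformed position
key = obj.shape_key_add(name="Key 1")  # the deformation we will edit
key.value = 0.0                        # 0 = base position, 1 = fully deformed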

Proportional Editing Mode

We should turn on Proportional Editing Mode. This will make the deformation of our mesh a bit smoother by allowing our changes to affect nearby vertices as well. So in your 3D View press the Tab key again to enter edit mode.


Once in Edit mode, there is a button for accessing Proportional Editing Mode; just click on Enable to turn it on.

To test things out, you can Right-click to select a vertex in your mesh, and use the G key to move it around. You should see nearby vertices being pulled along with it. Rolling your MouseWheel up or down will increase/decrease the radius of the proportional pull. Remember, to get out of the current action you can just hit the ESC key on your keyboard to exit without making any changes.
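(The scripted equivalent of flipping that button, assuming the 2.7x API:)

import bpy

ts = bpy.context.scene.tool_settings
ts.proportional_edit = 'ENABLED'           # the "Enable" menu entry
ts.proportional_edit_falloff = 'SMOOTH'    # the default falloff curve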

If you really screw up and accidentally make a mess of your mesh, it’s easy to get back to the base mesh again. Just hit Tab to get out of Edit mode, then in the Shape Keys panel you can hit the “−” button to remove that shape key. Just don’t forget to hit “+” again to add another key back when you want to try again.

Pivot Point

Blender lets you control where the current pivot point of any modifications you make to the mesh should be. By default it will be the median point of all selected objects, which is fine. You may occasionally want to specify where the point of rotation should be manually.


The button for adjusting the pivot point is in the toolbar of the 3D View. I’ll usually only use Median Point or 3D Cursor when I'm doing these. Remember: Left-clicking the mouse in 3D View will set the cursor location. You can leave it at Median Point for now.

To Animate!

Ok, now we can actually get to the animating of the mesh. We need to decide what we’d like the mesh to look like it’s doing first, though. For this tutorial let’s do a couple of simple animations to get a feel for how the system works. I'm going to focus on changing two things.

First we will rotate the woman's head slightly down from its base position, and second we will rotate her arm down slightly as well.

Let’s start with rotating her head. I will use the circle-select in the 3D View again to select a bunch of vertices in the center of her head (no need to exactly select all the vertices all the way around):


In the 3D View, press the R key to rotate those vertices. With Proportional Editing turned on you should see not only your selected vertices, but nearby vertices also rotating. While in this operation, the mousewheel will adjust the radius of the proportional editing influence (the circle around my rotation in my screenshot shows where my radius was set):


Remember: hit the ESC key if you need to cancel out of any operation without applying anything. Go ahead and rotate the head down a bit until you like how it looks. When you get it where you’d like it, just Left-click the mouse to set the rotation. Subtle is the name of the game here. Try small movements at first!

Now let’s move on to rotating the arm a bit. Hit the A key to de-select all the vertices, and choose a bunch of vertices along the arm (again, I use the circle-select C key to select a bunch at once easily):


If you end up selecting a couple of vertices you don’t want, remember that you can Shift + Right-click to toggle adding/removing vertices to the selection set. For example, in my image above I didn't want to select any vertices that were too close to her face to avoid warping it too much. I also went ahead and made sure to select as many vertices around the arm as I could.

I also Left-clicked in the location you see in my screenshot to place the cursor roughly at her shoulder. For the arm I also changed the Pivot Point to be the 3D Cursor, because I want the arm to pivot at a natural location.

Again, hit the R key to begin rotating the arm. If you find the rotation pulls vertices from too far away and modifies them, scroll your mousewheel to decrease the radius of the proportional editing. In my example I had the radius of influence down very low to avoid warping the woman's face too much.

As before, rotate to where you like it, and Left-click the mouse when you’re happy.


Finally, you can test how the overall mesh modifications will look with your Shape Key. Hit the Tab key to get out of Edit Mode and back into Object Mode. All of your mesh modifications should snap back to what they were before you changed anything.

Don’t Panic.

What has happened is that the mesh is currently set so that the Shape Key we were modifying has a zero influence value right now:


The Value slider for the shape key is 0 right now. If you click and drag in this slider you can change the influence of this key from 0 - 1. As you change the value you should see your woman mesh deform from its base position at 0, up to its fully deformed state at 1. Neat!

Once we’re happy with our mesh modifications, we can now move on to animating the sequence to see how things look!

Animating

So what we now want to do is to animate two different things over the course of time in the video. First we want to animate the mesh deformation we just created with Shape Keys, and second we want to animate the movement of the camera through our scene.

If you have a look just below the 3D View window, you should be seeing the Timeline window:


The Timeline window at the bottom

What we are going to do is to set keyframes for our camera and mesh at the beginning and end of our animation timeline (1-250 by default).

We should already be on the first frame by default, so let’s set the camera keyframe now. In the 3D View, Right-click on the camera border to select it (it will highlight when selected). Once selected, hit the I key to bring up the keyframe menu.


You’ll see all of the options that you can keyframe here. The one we are interested in is the first, Location. Click it in the menu. This tells Blender that at frame 1 in our animation, the camera should be located at this position.

Now we can define where we’d like our camera to be at the end of the animation. So we should move the frame to 250 in the timeline window. The easiest way to do this is to hit the button to jump to the last frame in the range:


This should move the current frame to 250. Now we can just move the location of our camera slightly, and set a new keyframe for this frame. I am going to just move the camera straight up slightly:


Once positioned, hit the I key again and set a Location keyframe.

At this point, if you wanted to preview what the animation would look like you can press Alt-A to preview the animation so far (hit ESC when you’re done).

Now we want to do the same thing, but for the Shape Keys to deform over time from the base position to the deformed position we created earlier. In the Timeline window, get back to frame 1 by hitting the jump to first frame in range button:


Once back at frame 1, take a look at the Shape Keys panel again:


Make sure the value is 0, then Right-click on the slider and choose the first entry, Insert Keyframe:


Just like with the camera, now jump to the last frame in the range. Then set the value slider for the Shape Keys to 1.000. Then Right-click on the Value slider again, and insert another keyframe.

This tells Blender to start the animation with no deformation on the mesh, and at the end to transition to full deformation according to the Shape Key. Conveniently, Blender will calculate all of the vertex locations between the two for us for a smooth transition.
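For completeness, both pairs of keyframes (camera location and Shape Key value) can also be set from the Python console. A sketch assuming the key names from earlier and the woman plane as the active object; the camera move is illustrative.

import bpy

scene = bpy.context.scene
cam = bpy.data.objects["Camera"]
key = bpy.context.object.data.shape_keys.key_blocks["Key 1"]

scene.frame_set(1)                         # start: camera home, no deformation
cam.keyframe_insert(data_path="location")
key.value = 0.0
key.keyframe_insert(data_path="value")

scene.frame_set(250)                       # end: camera nudged up, full deformation
cam.location.y += 0.5                      # small, illustrative move
cam.keyframe_insert(data_path="location")
key.value = 1.0
key.keyframe_insert(data_path="value")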

As before, now try hitting Alt-A to preview the full animation.

Congratulations, you made it!

Getting a Video Out

If you’re happy with the results, then all that’s left now is to render out the video! There are a few settings we need to specify first, though. So switch over to the Render tab in Blender:


The main settings you’ll want to adjust here are the resolution X & Y and the frame rate. I rendered out at 1280×720 at 100% and Frame Rate of 30 fps. Change your settings as appropriate.

Finally, we just need to choose what format to render out to...


If you scroll down the Render panel you’ll find the options for changing the Output. The first option allows you to choose where you’d like the output file to get rendered to (I normally just leave it as /tmp - it will be C:\tmp on Windows). I also change the output format to a movie rendering type. In my screenshot it shows “H.264”; by default it will probably show “PNG”. Change it to H.264.

Once changed, you’ll see the Encoding panel become available just below it. For this test you can just click on the Presets spinner and choose H264 there as well.

Scroll back up to the top of the Render panel, and hit the big Animation button in the top center (see the previous screenshot).
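(The render settings and the Animation button have console equivalents as well. A sketch; the 'H264' enum name is an assumption for 2.7x-era Blender, so adjust it if your version names the format differently.)

import bpy

scene = bpy.context.scene
scene.render.resolution_x = 1280
scene.render.resolution_y = 720
scene.render.resolution_percentage = 100
scene.render.fps = 30
scene.render.filepath = "/tmp/"
scene.render.image_settings.file_format = 'H264'   # assumption: 2.7x enum name

bpy.ops.render.render(animation=True)   # the big "Animation" button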

Go get a cup of coffee. Take a walk. Get some fresh air. Depending on the speed of your machine it will take a while...

Once it’s finished, in your tmp directory you’ll find a file called 0001-250.avi. Fire it up and marvel at your results (or wince). Here’s the result of mine directly from the above results:



Holy crap, we made it to the end. That was really, really long.

I promise, though, that it just reads long. If you’re comfortable moving around in Blender and understand the process, this takes about 10-15 minutes to do once you get your planes isolated.

Well, that’s about it. I hope this has been helpful, and that I didn’t muck anything up too badly. As always, I’d love to see others’ results!

[Update]
Reader David notes in the comments that if the render results look a little ‘soft’ or ‘fuzzy’, increasing the Anti-Aliasing size can help sharpen things up a bit (it’s located on the render panel just below render dimensions). Thanks for the tip David!

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, or make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t it’s not going to affect me writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.


June 21, 2014

Mirror a website using lftp

I'm helping an organization with some website work. But I'm not the only one working on the website, and there's no version control. I wanted an easy way to make sure all my files were up-to-date before I start to work on one ... a way to mirror the website, or at least specific directories, to my local disk.

Normally I use rsync -av over ssh to mirror directories, but this website is on a server that only offers ftp access. I've been using ncftp to copy files up one by one, but although ncftp's manual says it has a mirror mode and I found a few web references to that, I couldn't find anything telling me how to activate it.

Making matters worse, there are some large files that I don't need to mirror. The first time I tried to use get * in ncftp to get one directory, it spent 15 minutes trying to download a huge powerpoint file, then stalled and lost the connection. There are some big .doc and .docx files, too. And ncftp doesn't seem to have a way to exclude specific files.

Enter lftp. It has a mirror mode (with documentation, even!) which includes a -X option to exclude files matching specified patterns.

lftp includes a -e option to pass commands -- like "mirror" -- to it on the command line. But the documentation doesn't say whether you can use more than one command at a time. So it seemed safer to start up an lftp session and pass a series of commands to it.

And that works nicely. Just set up the list of directories you want to mirror, and you can write a nice shell function to put in your .zshrc or .bashrc:

sitemirror() {
commands=""
for dir in thisdir thatdir theotherdir
do
  commands="$commands
mirror --only-newer -vvv -X '*.ppt' -X '*.doc*' -X '*.pdf' htdocs/$dir $HOME/web/webmirror/$dir"
done

echo Commands to be run:
echo "$commands"
echo

lftp <<EOF
open -u 'user,password' ftp.example.com
$commands
bye
EOF
}

Super easy -- all I do is type sitemirror and wait a little. Now I don't have any excuse for not being up to date.

June 19, 2014

Heisenbug

heisenbug

You know. In case you needed a Heisenbug. Here he is.

The SVG source is available on the OpenClipart.org page for the design.

June 18, 2014

Fuzzy house finch chicks

[house finch chick] The wind was strong a couple of days ago, but that didn't deter the local house finch family. With three hungry young mouths to feed, and considering how long it takes to crack sunflower seeds, poor dad -- two days after Father's Day -- was working overtime trying to keep them all fed. They emptied my sunflower seed feeder in no time and I had to refill it that evening.

The chicks had amusing fluffy "eyebrow" feathers sticking up over their heads, and one of them had an interesting habit of cocking its tail up like a wren, something I've never seen house finches do before.

More photos: House finch chicks.

New Krita Videos – and a Kickstarter to help Krita

Ramón Miranda has published 4 short Krita tutorial videos in support of the Kickstarter campaign to accelerate Krita development.

Krita is a digital painting application for artists, created by artists, and is available for Linux and Windows. Maybe MacOS in the future – it’s one of the Kickstarter goals for overfunding.

If you like Krita, throw some money at them. Money is so impersonal, but it helps a lot. ;-) The 24 intended goals are on the Kickstarter web site – and they look really worth having!

Krita Kickstarter

Krita tip 01. Monotone image with Transparency Masks

Krita tip 02. Color curves with filter masks

Krita tip 03. Texturize your images. The easy way

Krita tip 04 Digital color mixer

Don’t forget that I’ll give away a copy of the Muses DVD by Ramón Miranda – if you want it, put a comment below Episode 199!

Episode 200 is under way; progress reports are at the top of the sidebar!


Krita: "Edit Global Selection" feature

Our Kickstarter project is at over 50% of the goal now! To say "Thank you!" to our supporters we decided to implement another small feature in Krita: editing of the global selection. Now you can view, transform, paint and do whatever you like on any global selection!

To activate the feature, just go to the main menu and check Selection → Show Global Selection Mask. Now you will see your global selection as a usual mask in the list of layers in the docker. Activate it and do whatever you want with it. You can use any Krita tool to achieve your goal: from the trivial Brush tool to the complex Transform or Warp tools. When you are done, just deactivate the check box to save precious space in the Layers docker.

Add usual global selection

Deform with Warp Tool and paint with Brush Tool

This feature (as well as the Isolated Mode one) was really low-hanging fruit for us. It was possible to implement thanks to a huge refactoring we did in the selections area about two years ago, so adding it was only a matter of extending existing functionality. It is really a pity that the other features from the Kickstarter list cannot be implemented so easily :) Now I'm going to dig deep into the Vanishing Points and Transformations problem. Since Saturday I've been trying to struggle through the maths of it, but with limited success...

The next Windows builds and Krita Lime packages will have this feature included!

And don't forget to tell your friends about our Kickstarter!

https://www.kickstarter.com/projects/krita/krita-open-source-digital-painting-accelerate-deve




Last week in Krita — week 23 & 24

Even with the effort of designing, launching and running the Kickstarter, we haven’t stopped developing!

In the last two weeks, besides the coding work on the git repositories, Boudewijn has made available a hefty number of testing builds for the Windows community. These builds bring the latest novelties and features developed in the master branch. Note, however, that not all feature sets are finished, and the builds are not recommended for production use. Get the bleeding edge build

In other development: the community is slowly building a new site for Krita. Planning and design have been done mainly through the forums. Though still in the early stages of development, consisting of mock-ups and concept designs, the project is shaping up for a brilliant future for this website. Join the forums and take part in the community!

It’s time to review the hard work committed over these last two weeks.

This week’s new features:

This was a busy period. Boudewijn worked pretty hard to improve the file saving dialog behavior. This is much more difficult than it sounds, since every system uses a different file open/save dialog, each working slightly differently; because of the differences, the implementation needs to take into account how each one asks for and gives back data. The changes should make the dialog behave as expected on all systems.

Dmitry in particular, aside from fixing many bugs, enhanced the stroke smoothing options by adding a scaling factor to the weight option. This allows weighted smoothing to behave exactly the same at different zoom levels.

New features:

  • Improvements to the clone tool. (Boudewijn Rempt)
  • Scalable Distance option in Weighted Smoothing. (Dmitry Kazakov)
  • Make it possible to zoom in all resource selectors, like brushes, presets, gradients (Sven Langkamp)
  • Make the grabbing area for Transform Tool handles twice as large as the handles themselves. (Dmitry Kazakov)
  • Add image sizes and textures for games. (Boudewijn Rempt and Paul Geraskin)
  • Restore the new view action to the view menu. (Boudewijn Rempt)
  • Add Y + mouse click + movement shortcut for Exposure correction. (Dmitry Kazakov)
  • Add shortcuts to switch between painting blending modes. (Boudewijn Rempt)

Gamma and exposure new cursors

General bugfixes and features

  • FIX #334933: Make the grabbing area for Transform Tool handles twice as large as the handles themselves. (Dmitry Kazakov)
  • FIX #335834: Added a Scalable Distance option to Weighted Smoothing. (Dmitry Kazakov)
  • FIX #335647: Fix painting checkers on openGL mode. (Dmitry Kazakov)
  • FIX #335649: Fix hiding the brush outline when the cursor is outside the canvas. (Dmitry Kazakov)
  • FIX #335660: Use flake/always as activation id for Krita tools. (Sven Langkamp)
  • FIX #335746: Floating message block input under Gnome. (Boudewijn Rempt)
  • FIX #335745: Hide Pseudo Infinite canvas decoration when in Wraparound mode. (Dmitry Kazakov)
  • FIX #335670: Artistic Color Selector lightstrip display non working when floating. (Boudewijn Rempt)
  • FIX #331358: Fix artifacts when rotation stylus sensor is activated. (Dmitry Kazakov)
  • FIX #332773: Add option to disable touch capabilities of the Wacom tablets on canvas only mode. To activate, in kritarc, please add disableTouchOnCanvas=true. (Dmitry Kazakov)
  • FIX #335048: Rename stable category to Brush engines. (Sven Langkamp)
  • Created new cursors for Exposure/Gamma gestures. (David Revoy)
  • Fix “Move layer into group” icon design. (Timothée Giet)
  • Fix file dialog filter types and use native dialogs when possible. (Boudewijn Rempt)
  • Tweak mirror axes display and default position of move handles. (Arjen Hiemstra)

Clone tool improvements

The Clone tool has changed: it now makes it possible to clone from one layer to a different one. This works as follows:

  • [CTRL + CLICK] FIRST time: Defines the source layer and source area to clone. You can change layers after this to clone to a different layer.
  • [CTRL + CLICK]: Adjusts the source coordinates. Note that the source clone layer remains the same.
  • [CTRL + ALT + CLICK]: Changes the source layer and source area to clone.

Layer blending modes shortcuts

Following Photoshop’s default shortcuts for blending modes, developers added the same shortcuts to Krita for painting blending modes. This is awesome, as there is no loss of focus during painting sessions just to change the brush blending mode.

To promote the use of the newly added shortcuts, here is a list for you: painting blending mode shortcuts.

File dialogs

File dialogs were reworked to work and behave as the user expects. Boudewijn worked, amongst other things, to ensure dialogs remember the last used directory and to ensure the correct format output on any system. It’s no longer necessary to select the desired format from the list; it’s enough to write the extension after naming the file for Krita to know exactly what format you want to save the file to. Many other enhancements and tweaks were also necessary to make it possible to work with native dialogs on the main systems: KDE, GNOME and Windows.

Krita Gemini and Sketch

Krita Gemini and Krita Sketch received much love to make them run smoothly and consistently on all systems; most changes were in the underlying code to optimize and prepare it for further development. Some changes to note:

  • Use the desktop dialog for opening images from the welcome screen. (Arjen Hiemstra)
  • Fix colouring of the layer controls and tweak them to look good. (Arjen Hiemstra)
  • Properly disable warnings about floating messages. (Arjen Hiemstra)
  • Add duplicate/clear layer buttons to the Layer panel. (Arjen Hiemstra)
  • Add a clear and clone method to LayerModel. (Arjen Hiemstra)
  • Enable floating messages for Gemini and disable them properly for sketch. (Arjen Hiemstra)
  • Fix the save page and reduce the number of errors/warnings. (Arjen Hiemstra)

Code optimization and cleanup

The ongoing process of optimization continues with a huge number of developer commits. The process might not look like much when reading the git logs, but renaming, moving and getting rid of old naming schemes paves the way for bigger changes. I won’t list them all as I do everything else, since you would probably get bored; let’s just say they did a lot of housekeeping these past two weeks.

Kickstarter

We're over 50% funded now for the basic goal of having Dmitry work on Krita for another six months. But it's time to put in some sprinting to get to the stretch goals! Help Krita by spreading the word:

June 17, 2014

Netflix Top 50 Covers by Genre (Averaged & Normalized)

In my (apparently) never-ending quest to average all the things, I happened to be surfing around Netflix the other evening looking for something to watch. Then a little light bulb went off!


I had previously blended many different variations of movie posters with varying success, but figured it might be interesting to see mean blends based on Netflix genres (and suggestions for me to watch). So, here are my results across a few different genres:


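(In case you want to try this at home: a mean blend is just a per-pixel average. This is not my exact pipeline, just a minimal sketch with Pillow and NumPy; the cover size is illustrative.)

import numpy as np
from PIL import Image

def mean_blend(paths, size=(284, 405)):
    # Resize every cover to a common size, stack them, and average
    # each pixel across the stack.
    stack = np.stack([
        np.asarray(Image.open(p).convert("RGB").resize(size), dtype=np.float64)
        for p in paths
    ])
    return Image.fromarray(stack.mean(axis=0).astype(np.uint8))

# mean_blend(["cover1.jpg", "cover2.jpg"]).save("genre-mean.jpg")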
I found a couple of surprising and interesting things in these results...

For instance, I am not surprised at the prevalence of Teal/Orange in Sci-Fi covers. I was surprised to find that the other genre with such prominent color grading happened to be Comedies (I would have guessed Thrillers or Action).

I also didn't think that Romance would look like such a hot mess (there's a cosmic sign there, I think). I can also sort of make out abstract faces in Thrillers and Action. Horror is relatively tame by comparison!

Title location seems mostly consistent across genres, though there's apparently a marked preference for top-centered titles in Comedies. You can also see that MST3K must have at least a few titles in my Sci-Fi list (a fact of which I am proud).

I may finish my work on the movie posters and post them in the future. In the meantime, here is one that I did blending all of the movie posters that the legendary Saul Bass created:


In case you're curious, here's the list of the movies these posters are from:

The Shining (1980)
Such Good Friends (1971)
Bunny Lake is Missing (1965)
In Harm's Way (1965)
The Cardinal (1963)
It's a Mad, Mad, Mad, Mad World (1963)
Advise & Consent (1962)
Bird Man of Alcatraz (1962)
One, Two, Three (1961)
Exodus (1960)
Anatomy of a Murder (1959)
North by Northwest (1959)
Vertigo (1958)
Love in the Afternoon (1957)
The Man With the Golden Arm (1955)


DNF vs. Yum

A lot has been said on fedora-devel in the last few weeks about DNF and Yum. I thought it might be useful to contribute my own views, considering I’ve spent the last half-decade consuming the internal Yum API and the last couple of years helping to design the replacement with about half a dozen members of the packaging team here at Red Hat. I’m also a person who unsuccessfully tried to replace Yum completely with Zif in Fedora a few years ago, so I know quite a bit about packaging systems and metadata parsing.

From my point of view, the hawkey depsolving library that DNF is designed upon is well designed, optimised, and itself built on a successful low-level SAT library that SUSE has been using for years on production-level workloads. The downloading and metadata parsing component used by DNF, librepo, is also well designed and complements the hawkey API nicely.

Rather than use the DNF framework directly, PackageKit uses librepo and hawkey to share 80% of the mechanism between PK and DNF. From what I’ve seen of the DNF codebase it’s nice, with unit tests and lots of the older compatibility cruft removed. The only reason it’s not used in PK is that the daemon is written in C and we didn’t want to marshal everything via Python for latency reasons.

So, from my point of view, DNF is a new command line tool built on 3 new libraries. Its history may be that of a fork from yum, but it resembles more a 2014 rebuilt American hot-rod with all-new motorsport parts, apart from the modified and strengthened 1965 chassis. Renaming DNF to Yum2 would be entirely the wrong message; it’s a new project with a new team and new goals.

Linux Pro Magazine GIMP Handbook Special Edition

I received a few promo copies of the Linux Pro Magazine GIMP Handbook Special Edition in the mail a couple of weeks ago. I’ve just been too busy to sit down and have a look at it in depth.


I did give it a few flip-throughs when I had a moment, though.

This is apparently a “Special Edition” magazine aimed entirely at GIMP and how to use it. As such, it’s mostly (unlike my blog) ad-free. That means there are over 100 (full color) pages of good content!


It’s broken down nicely by sections:

  • Get Started
    • Compiling GIMP
    • GIMP 2.8 and 2.9
    • User Interfaces
    • Settings
    • Basic Functions
  • Highlights
    • Basic Functions
    • Colors
    • Light and Shadow
    • Animation with GIMP
  • Practice
    • Layers
    • Selection
    • Colors
    • Paths
    • Text and Logos
  • Photo Processing
    • Sharpening
    • Light and Shadow
    • Retouch
    • GIMP Maps
  • Know How
    • G'MIC
    • UFRaw
    • Painting
    • Fine Art HDR Processing
    • Animation with GIMP

The sections appear well written and are clearly laid out. They cover external resources where it makes sense as well (referencing the GIMP Registry quite a few times for some of the more useful and popular scripts).

As an image editing program, GIMP provides a variety of useful and advanced functions for any possible task. Before you use them, however, you should be aware of some simple things. 
— Get Started: Basic Functions

The writing is nice and clear, and there is a good range of topics covered that should get most beginners on track for exploring further in GIMP.

For example, in the “Photo Processing” section, while discussing Sharpening an image, they make mention of most of what users are likely to need (and some more obscure ones): Standard Sharpening, Unsharp Mask, High-Pass Sharpening, Wavelet Sharpen, and more.

The instructions are clearly presented, and having all full-color pages really helps as there are many examples showing the concepts. There’s also some neat coverage of associated/support programs as well like G'MIC, UFRaw, and LuminanceHDR!

There’s a bundled DVD included that will boot to ArtistX Linux as well (and includes GIMP install files for OSX/Win).

All in all, quite a bit of content for $15.99.

To sweeten the deal, the nice folks at LinuxPro Magazine are offering a discount code to save 20% off the price!

The discount code is: GIMP20DISCOUNT and it’ll be valid through July 15th.

Of course... A Give-Away!

I do have a few copies they sent to me as promotional copies. I thought it might be more fun to go ahead and pay them forward to someone else who might get more use out of them than me.

So, all you need to do is either leave a comment below, or share any blog post of mine on Twitter or Google+ with the hashtags #patdavid #GIMP.

You can also email me with "GIMP Giveaway" in the subject.

If you do enter, make sure I have a way to reach you.

Oh, and disclaimer: I’ve never done a give away for anything before, so please bear with me if I suck at it...

I’ll sift through the hashtags and whatnot later next week and randomly pick three folks to send these copies to!

[Update]
I chose the winners earlier this week using http://www.random.org. So congratulations to Stan, Anand, and Doug! I'll be dropping your magazines in the mail this week!



June 16, 2014

datarootdir vs. datadir

Public Service Announcement: Debian helpfully defines datadir to be /usr/share/games for some packages, which means that the AppData and MetaInfo files get installed into /usr/share/games/appdata, which isn’t picked up by the metadata parsers.

It’s probably safer to install the AppData files into $datarootdir/appdata as this will work even if a distro has redefined datadir to be something slightly odd. I’ve changed the examples on the AppData page, but if you maintain a game on Debian with AppData then this might affect you when Debian starts extracting AppStream metadata in the next few weeks. Anyone affected will be getting email in the next few days, although it only looks to affect very few people.

Krita team starts implementing features declared for the Kickstarter!

During the first week of our Kickstarter campaign we collected more than 6500 EUR, which is about 43% of the goal. That is quite a good result, so we decided to start implementing the features right now, even though the campaign is not finished yet :)

I started to work on three low-hanging fruits in parallel: perspective transformation of the image based on vanishing points, editing of the global selection, and an enhanced Isolate Layer mode.

It turned out that the vanishing point problem is not as "low-hanging" as I thought in the beginning, so right now I'm in the middle of searching for the math solution to it: to provide this functionality to the user, the developer must find the right perspective transformation matrix based only on points in the two coordinate systems. This problem is the inverse of what everyone is accustomed to solving at school :) It is quite an interesting and challenging task and I'm sure we will solve it soon!
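(For the curious: the textbook version of that inverse problem is the direct linear transform. Given four point correspondences you can solve a small linear system for the eight unknowns of the perspective matrix. This is not Krita's code, just a NumPy sketch of the idea:)

import numpy as np

def homography(src, dst):
    # Solve for the 3x3 perspective matrix H (with H[2][2] = 1) that
    # maps each (x, y) in src onto the matching (u, v) in dst.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Unit square -> trapezoid, the kind of mapping a vanishing point implies:
H = homography([(0, 0), (1, 0), (1, 1), (0, 1)],
               [(0, 0), (1, 0), (0.8, 1), (0.2, 1)])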

Until I find the solution to the maths task, I decided to work on small enhancements to our Isolate Layer mode. We have had this feature for a long time, but it was too complicated to use because the only way to activate it was to select the "Isolate Layer" item in the right-click menu. Now this problem is gone! You can just press the Alt key and click on the layer or mask in the Layers docker. This enables many great use cases for artists, which can now be done with a single Alt-click:
  • Show/Edit the contents of a single layer, without other layers interfering
  • Show/Edit the masks. This includes Selection Masks, so you can edit non-global selections very easily (modifying global selections will be simplified later)
  • Inspect all the layers you have by simply switching between them and looking at their contents in an isolated environment

A comic page by Timothée Giet

There are several more things we are planning to do with the Isolated Mode, like adding a few optimizations, but that is not for today :)

The next packages in Krita Lime will include this new feature. They are now building at Launchpad and will be available tonight!

And don't forget to spread the word about our Kickstarter!

https://www.kickstarter.com/projects/krita/krita-open-source-digital-painting-accelerate-deve

Fanart by Anastasia Majzhegisheva – 10

Here comes one more illustration for the backstory.

Postcard Artwork by Anastasia Majzhegisheva

Artwork by Anastasia Majzhegisheva

I don’t remember if I mentioned this or not, but a few months ago Anastasia switched to Krita for all her latest artworks. Krita is an awesome open-source painting application for artists, and they are running a Kickstarter campaign now to make it even better!

Draft 1 Draft 2

June 15, 2014

Vim: Set wrapping and indentation according to file type

Although I use emacs for most of my coding, I use vim quite a lot too, for quick edits, mail messages, and anything I need to edit when logged onto a remote server. In particular, that means editing my procmail spam filter files on the mail server.

The spam rules are mostly lists of regular expression patterns, and they can include long lines, such as:
gift ?card .*(Visa|Walgreen|Applebee|Costco|Starbucks|Whitestrips|free|Wal.?mart|Arby)

My default vim settings for editing text, including line wrap, don't work if I get a flood of messages offering McDonald's gift cards and decide I need to add a "|McDonald" on the end of that long line.

Of course, I can type ":set tw=0" to turn off wrapping, but who wants to have to do that every time? Surely vim has a way to adjust settings based on file type or location, like emacs has.

It didn't take long to find an example of Project specific settings on the vim wiki. Thank goodness for the example -- I definitely wouldn't have figured that syntax out just from reading manuals. From there, it was easy to make a few modifications and set textwidth=0 if I'm opening a file in my procmail directory:

" Set wrapping/textwidth according to file location and type
function! SetupEnvironment()
  let l:path = expand('%:p')
  if l:path =~ '/home/akkana/Procmail'
    " When editing spam filters, disable wrapping:
    setlocal textwidth=0
  endif
endfunction
autocmd! BufReadPost,BufNewFile * call SetupEnvironment()

Nice! But then I remembered other cases where I want to turn off wrapping. For instance, editing source code in cases where emacs doesn't work so well -- like remote logins over slow connections, or machines where emacs isn't even installed, or when I need to do a lot of global substitutes or repetitive operations. So I'd like to be able to turn off wrapping for source code.

I couldn't find any way to just say "all source code file types" in vim. But I can list the ones I use most often. While I was at it, I threw in a special wrap setting for mail files:

" Set wrapping/textwidth according to file location and type
function! SetupEnvironment()
  let l:path = expand('%:p')
  if l:path =~ '/home/akkana/Procmail'
    " When editing spam filters, disable wrapping:
    setlocal textwidth=0
  elseif (&ft == 'python' || &ft == 'c' || &ft == 'html' || &ft == 'php')
    setlocal textwidth=0
  elseif (&ft == 'mail')
    " Slightly narrower width for mail (and override mutt's override):
    setlocal textwidth=68
  else
    " default textwidth slightly narrower than the default
    setlocal textwidth=70
  endif
endfunction
autocmd! BufReadPost,BufNewFile * call SetupEnvironment()

As long as we're looking at language-specific settings, what about doing language-specific indentation like emacs does? I've always suspected vim must have a way to do that, but it doesn't enable it automatically like emacs does. You need to set three variables, assuming you prefer to use spaces rather than tabs:

" Indent specifically for the current filetype
filetype indent on
" Set indent level to 4, using spaces, not tabs
set expandtab shiftwidth=4

Then you can also use useful commands like << and >> for in- and out-denting blocks of code, or == for indenting to the right level. It turns out vim's language indenting isn't all that smart, at least for Python, and gets the wrong answer a lot of the time. You can't rely on it as a syntax checker the way you can with emacs. But it's a lot better than no language-specific indentation.

I will be a much happier vimmer now!

June 14, 2014

Krita Sprint 2014

Past

The first Krita sprint I attended was back in 2010, in Deventer, the Netherlands. We were hosted by Krita maintainer Boudewijn Rempt. The main topic back then was to define a product vision for Krita. It was a great idea to invite Peter Sikking; he helped us define a very cool vision for Krita. If you have a vision, you can decide easily what to include in your application, what to implement, and what can be provided by external applications. It helps you decide whether or not you have to agree with a user complaining in a bug report.

The next Krita sprint was in 2011. The location was very cool: the Blender Institute in Amsterdam, the Netherlands. For me the most important moment from that sprint was when artists (Timothee, David Revoy,…) demonstrated how they use Krita. It helped us identify the biggest problems and focused us on polishing the parts of Krita that were causing problems for professional painters. We have managed to fix those issues; Krita has changed a lot since then, and it is used by professionals these days!

Painting session on the roof

Timothee painted me in his painting session on the roof

We failed to organize a Krita sprint in 2012 in Bilbao; Boud and me were too busy. We probably did not plan a Krita sprint in 2013 for the very same reason.

Present

Krita sprint 2014 was held in Deventer, the Netherlands again. We were hosted by Boud and Irina, and it was as great as back in 2010. I arrived on Friday, the 16th of May. After travelling for a little bit more than 11 hours on public transport, I managed to arrive in Deventer safely. Train, tram, bus, airplane and train, and there I was! When I plan such a journey, I tend to add time padding to have enough slack for travel connections; that’s why it takes so long. My traveling agony was healed by Wi-Fi almost everywhere!

The main discussion was on Saturday, and it is nicely described in the Dot KDE article. I personally was wondering how the Krita Foundation is doing. So far the Krita Foundation is not able to employ all Krita hackers full-time, but I hope one day it will be possible!

One of the steps towards that goal is the Krita Kickstarter: it will allow our community developers Dmitry and Sven to work on Krita full-time for a longer period. I hope it will be a huge success, and maybe the next fundraiser will allow more community developers (e.g. me :) ) to work on Krita full-time again. Well, it is more complicated for me now that I have my own family, a wife and a little son to care about.

There are also full-time developers who work on Krita projects commercially at KO GmbH: Boud, who is the maintainer of Krita, and a few employees who entered Krita development through KO GmbH: Stuart, Leinir and Arjen. The first two were also present at the sprint! Their work is mostly on Krita Sketch, Krita Gemini and Krita on Steam, and this commercial work supports them. All their work on Krita is open-source development!

We had a nice lunch outside together on Saturday. Then we had hacking time until dinner and after dinner. I was working on the G'MIC integration into Krita; this time I was fixing problems with internet updates of the filter definitions. Then I spent some time with Dmitry discussing problems I faced when integrating G'MIC into Krita. There are some use cases that our processing framework, responsible for multi-threaded processing of the layers in Krita, does not account for. I suppose that is because we did not focus enough on image filters: Krita is a painting application, and filters are usually needed for photo manipulation. But painters also need to post-process their paintings, and that's why I'm working on the G'MIC integration.

Sunday was dedicated to the artists: Steven, Timothee and Wolthera showed us how they use Krita, what they struggle with and what could be better. Observing artists using Krita allowed us to see where Krita is right now. In 2011 it was not yet suitable for professional painting; in 2014 we could see that Krita has made it into the professional league of painting applications.

We also spent some time just hanging around and discussing things around Krita. We had a nice walk to the park, spent time on the roof of our hosting place and walked around the town. It was great to meet old friends and some new faces.

Hackstreet boys

We also got some presents from the Krita Foundation: nice t-shirts for everybody, and one t-shirt made specially for my little son — thank you! I brought something for Irina and Boud: some Slovak cheese products and Slovak wine – the oštiepok was especially enjoyed by Boud and Irina.

Big thank you goes to Boudewijn and Irina for excellent hosting and to KDE e.V. for making the sprint possible.

Related Krita sprint 2014 blogs:
Timothee: http://timotheegiet.com/blog/floss/a-report-from-the-krita-sprint-2014.html
Sven: http://slangkamp.wordpress.com/2014/05/31/krita-sprint-2014/
Dmitry: http://dimula73.blogspot.sk/2014/06/some-notes-from-krita-sprint-2014.html
Boud: http://mail.kde.org/pipermail/kimageshop/2014-May/012313.html
Dot KDE Article: https://dot.kde.org/2014/06/04/2014-krita-sprint-deventer-netherlands

Adwaita 3.14

Now that the controversial 3.12 tab design has been validated by Apple, we’re ready to tackle new challenges with the widgetry™.

Adwaita has grown into a fairly complex theme. We make sure unfocused windows are less eye-grabbing (flat). We provide a less light-polluting variant for visually-heavy content apps (Adwaita:dark). And last but not least we provide a specific widget style for overlay controls (OSD). All this complexity has made Adwaita quite a challenge to maintain and evolve. Since we were to relocate Adwaita directly into gtk+, we had to bite the bullet and perform quite a bit of surgery on it.

There are a number of improvements we aimed to achieve. Limiting the number of distinct colors and deriving most colors makes it easier to adjust the overall feel of the theme, and I'm sure 3rd party themers will enjoy this too. Not relying on image assets for the majority of the drawing makes the workflow much more flexible as well. Many of the small graphical elements now make use of the icon theme assets, so these remain recolorable based on the context, similar to how text is treated.

Benjamin has been working hard to move the theme closer to the familiar CSS box model, further minimizing the reliance on odd property hacks and engines (Adwaita no longer makes use of any engine drawing).

We still rely on some image assets, but even that is much more manageable with SASS.

Anything gtk related never happens without the giant help from Matthias, Cosimo and Benjamin, but I have to give extra credits to Lapo Calamandrei, without whom these dark caverns would be impossible for me to enter. Another major piece that I’m grateful for living right inside the toolkit, ready to be brought up any time, is the awesome inspector. Really happy to see it mature and evolve.

June 13, 2014

glibc select weakness fixed

In 2009, I reported this bug to glibc, describing the problem that exists when a program is using select, and has its open file descriptor resource limit raised above 1024 (FD_SETSIZE). If a network daemon starts using the FD_SET/FD_CLR glibc macros on fdset variables for descriptors larger than 1024, glibc will happily write beyond the end of the fdset variable, producing a buffer overflow condition. (This problem had existed since the introduction of the macros, so, for decades? I figured it was long over-due to have a report opened about it.)

At the time, I was told this wasn’t going to be fixed and “every program using [select] must be considered buggy.” 2 years later still more people kept asking for this feature and continued to be told “no”.

But, as it turns out, a few months after the most recent “no”, it got silently fixed anyway, with the bug left open as “Won’t Fix”! I’m glad Florian did some house-cleaning on the glibc bug tracker, since I’d otherwise never have noticed that this protection had been added to the ever-growing list of -D_FORTIFY_SOURCE=2 protections.

I’ll still recommend everyone use poll instead of select, but now I won’t be so worried when I see requests to raise the open descriptor limit above 1024.
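Out of interest, here's a minimal sketch of the poll alternative (my illustration, not code from the report): poll takes an array of pollfd structs rather than a fixed-size fd_set, so there is no FD_SETSIZE limit to overflow.

#include <poll.h>

/* Wait for readable data on fd; works fine even when fd >= 1024,
 * unlike select() with its fixed-size fd_set. */
int wait_readable(int fd, int timeout_ms)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    int n = poll(&pfd, 1, timeout_ms);
    if (n > 0 && (pfd.revents & POLLIN))
        return 1;  /* ready to read */
    return n;      /* 0 on timeout, -1 on error */
}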

© 2014, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

New in Krita: Painting with Exposure and Gamma

Running a Kickstarter campaign can be quite exhausting! But that doesn't mean that coding stops -- here is one new Krita 2.9 feature that we prepared earlier: painting with exposure and gamma on HDR images. HDR (high-dynamic-range) images have a greater dynamic range of light than ordinary images. If you make one with your camera, you'll combine a set of images taken of the same subject at different exposures. If you want to create an HDR image from scratch, you can do so in Krita by selecting the 16 or 32 bit float RGB colorspace.

HDR images are rendered on your decidedly non-HDR monitor by picking an exposure level and checking what would be visible at that level. That's something Krita has been able to do since 2005! With Krita 2.7, Krita started supporting the VFX industry's standard library, OpenColorIO. But it was always hard to select a color for a particular exposure. Not anymore!

Check out this video by Timothee Giet showing off painting with exposure and gamma in Krita:

 {youtube}esSzKzXVWQE{/youtube}

Here's how it works, technically:

Krita has two methods of rendering the image on screen: internal and using Open Color IO.

  1. Internal. In this mode the image data is converted into the display profile that the user configured in the Settings->Color Management dialog. This ensures that all the colors of the image color space are displayed correctly on screen. To use this mode you need to:
    1. Generate an ICC profile for your monitor using any hardware device available on the market
    2. Load the “Video Card Gamma Table” part of the generated profile (the vcgt ICC tag) into the LUT of your video card. You can use ‘xcalib’ to do that.
    3. Choose the profile in the Settings->Color Management dialog
  2. Open Color IO mode. In this mode Krita does not do any internal color correction for the displayed image. Instead it passes the raw image data to the OCIO engine, which handles the color conversions and color proofing itself. To configure the OCIO pipeline:
    1. Get the configuration here.
    2. Either set the $OCIO environment variable to the path of your configuration,
    3. Or select the path to the configuration manually in the Krita docker
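On Linux, the command-line pieces of the two modes look roughly like this (a sketch only: the profile path and the sample config location are assumptions, and "here" refers to the configuration linked above):

# Internal mode, step 2: load the profile's VCGT into the video card LUT
xcalib ~/.local/share/icc/my-monitor.icc

# OCIO mode: point Krita (and any other OCIO-aware app) at a config file
export OCIO=$HOME/ocio-configs/nuke-default/config.ocio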


smoking_common.png

Smoking Figure by Timothée Giet


field_common.png

Landscape by Wolthera van Hövell tot Westerflier

Using Exposure and Gamma for painting

Now, when using the Open Color IO engine (even in “Internal” mode), one can paint on the image while switching the exposure levels on the fly. Just press the ‘Y’ key and drag your mouse upward or downward, and the amount of light will increase or decrease! Creating High Dynamic Range images has never been so easy! This feature can be used for prototyping scenes that are going to have dynamic light, for example:

A bomb has gone off. The scene becomes filled with light! Sure enough, the shadow areas now become well-lit areas with lots of detail!

The characters are in a room. Suddenly the light goes out and the viewer can see only eyes, fire and cigarettes glowing here and there. The highlight areas are now well-detailed.

Color Selectors and Open Color IO

The good news! The color selectors in Krita now know not only about your monitor profile, but also about the Exposure, Gamma and Open Color IO proofing settings! So they will show you the color exactly as it will look on the canvas!

Internally, it works with the HSV values you are expected to see on screen. It takes these values, applies exposure and gamma correction, executes the color management routines and displays the result on screen. This frees you from thinking about the current exposure value or color space constraints. You will always see the color exactly as it will be painted on the canvas!

That is not all! After you choose a color as the active one, changing the exposure will not alter its visual representation.

Giving it a try

Krita Lime packages for Saucy and Trusty contain this feature (older versions of Ubuntu Linux do not have OpenColorIO). Windows users can get the latest Krita build: http://heap.kogmbh.net/downloads/krita_x64_2.8.79.4.msi. Take care! These builds are made directly from the development branch and may contain features (like the resource manager) that are not done yet. Not everything will be working, though development builds are usually stable enough for daily work.

Jestedska Odysea Longboard

Some shots with the GoPro from last weekend. Music by LuQuS.

Jestedska Odysea Longboard from jimmac on Vimeo.

June 12, 2014

Comcast actually installed a cable! Or say they did.

The doorbell rings at 10:40. It's a Comcast contractor.

They want to dig across the driveway. They say the first installer didn't know anything, he was wrong about not being able to use the box that's already on this side of the road. They say they can run a cable from the other side of the road through an existing conduit to the box by the neighbor's driveway, then dig a trench across the driveway to run the cable to the old location next to the garage.

They don't need to dig across the road since there's an existing conduit; they don't even need to park in the road. So no need for a permit.

We warn them we're planning to have driveway work done, so the driveway is going to be dug up at some point, and they need to put it as deep as possible. We even admit that we've signed a contract with CenturyLink for DSL. No problem, they say, they're being paid by Comcast to run this cable, so they'll go ahead and do it.

We shrug and say fine, go for it. We figure we'll mark the trench across the driveway afterward, and when we finally have the driveway graded, we'll make sure the graders know about the buried cable. They do the job, which takes less than an hour.

If they're right that this setup works, that means, of course, that this could have been done back in February or any time since then. There was no need to wait for a permit, let alone a need to wait for someone to get around to applying for a permit.

So now, almost exactly 4 months after the first installer came out, we may have a working cable installed. No way to know for sure, since we've been happily using DSL for over a month. But perhaps we'll find out some day.

The back story, in case you missed it: Getting cable at the house: a Comcast Odyssey.

June 11, 2014

Application Addons in GNOME Software

Ever since we rolled out the GNOME Software Center, people have wanted to extend it to do other things. One thing that was very important to the Eclipse developers was a way of adding addons to the main application, which seems a sensible request. We wanted to make this generic enough so that it could be used in gedit and similar modular GNOME and KDE applications. We’ve deliberately not targeted Chrome or Firefox, as these applications will do a much better job compared to the package-centric operation of GNOME Software.

So. Do you maintain a plugin or extension that should be shown as an addon to an existing desktop application in the software center? If the answer is “no” you can probably stop reading, but otherwise, please create a file something like this:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name Here <your@email.com> -->
<component type="addon">
<id>gedit-code-assistance</id>
<extends>gedit.desktop</extends>
<name>Code Assistance</name>
<summary>Code assistance for C, C++ and Objective-C</summary>
<url type="homepage">http://projects.gnome.org/gedit</url>
<metadata_license>CC0-1.0</metadata_license>
<project_license>GPL-3.0+</project_license>
<updatecontact>richard_at_hughsie.com</updatecontact>
</component>

This wants to be installed into /usr/share/appdata/gedit-code-assistance.metainfo.xml — this isn’t just another file format, this is the main component schema used internally by AppStream. Some notes when creating the file:

  • You can use anything as the <id> but it needs to be unique and sensible and also match the .metainfo.xml filename prefix
  • You can use appstream-util validate gedit-code-assistance.metainfo.xml if you install appstream-glib from git.
  • Don’t put the name of the application you’re extending in the <name> or <summary> tags — so you’d use “Code Assistance” rather than “GEdit Code Assistance”
  • You can omit the <url> if it’s the same as the upstream project
  • You don’t need to create the metainfo.xml if the plugin is typically shipped in the same package as the application you’re extending
  • Please use <_name> and <_summary> if you’re using intltool to translate either your desktop file or the existing appdata file and remember to add the file to POTFILES.in if you use one
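Putting those notes together, validating and shipping the file might look like this (the validate command and install path are from the notes above; the install invocation itself is just an illustration):

# Validate the metadata (appstream-glib from git, as noted above)
appstream-util validate gedit-code-assistance.metainfo.xml

# Ship it where the software center expects to find it
install -Dm644 gedit-code-assistance.metainfo.xml \
    /usr/share/appdata/gedit-code-assistance.metainfo.xml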

Please grab me on IRC if you have any questions or concerns, or leave a comment here. Kalev is currently working on the GNOME Software UI side, and I only finished the metadata extractor for Fedora today, so don’t expect the feature to be visible until GNOME 3.14 and Fedora 21.

FillHoles revamp

Hi all

FillHoles FillHoles Precise

This is a small but important revamp of the FillHoles tool. I completely changed its behaviour to allow more consistent treatment of hole boundaries. Previously, all actions were applied at the end of the click.
Now holes are saved and the different actions are applied one by one, at the user's choice.
I also solved an important bug in contour smoothing that caused the contour to slightly rotate on each iteration. Now it behaves as it should.


Cheers


June 10, 2014

Krita Lime is updated again!

It has been about a month since we last updated Krita Lime. And it is not because we had leisure time and did nothing here ;) On the contrary, we got so many features merged into master that it became a bit unstable for a short period of time. Now we have fixed all the new problems, so you can enjoy a nice build of Krita with lots of shiny features!

Wacom Artpen rotation sensor improved

If you are a happy owner of a Wacom Artpen stylus, you can now use its rotation sensor efficiently: it works on both Linux and Windows, and, what is more, it works exactly the same way on both operating systems! For those who are not accustomed to working with drivers directly it might come as a surprise, but the directions of rotation reported by the Windows and Linux drivers are opposite, not to mention offset by 180°. The good news: all that is handled now! Just use it!

Avoid stains on the image when using touch-enabled Wacom device

The most popular advice you get from an experienced artist concerning touch-enabled Wacom devices usually sounds like: "disable touch right after connecting it". Well, it has some grounds... The problem is that, while painting with the stylus, the artist can easily tap the tablet with a finger and soil the image with stains of paint. Yes, most of the taps will be filtered by the Wacom driver (it disables touch while the stylus is in proximity), but sometimes that doesn't work. Anyway, the problem is now solved, though only on Linux.

The Linux version of Krita now has a special "hidden" configuration option called disableTouchOnCanvas. If you add the line

disableTouchOnCanvas=true

to the beginning of your kritarc file, touch will not disturb you with extra dots on the canvas anymore! It will continue to work with UI elements as usual, though!
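For reference, on most Linux systems kritarc lives in the KDE configuration directory; a sketch of adding the line from a shell (the path is an assumption and may vary by distribution, and '1i' is GNU sed syntax):

# Prepend the option to kritarc (path assumed; some distros use ~/.kde4)
sed -i '1i disableTouchOnCanvas=true' ~/.kde/share/config/kritarc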

OpenColorIO-enabled color selectors for HDR images

This is a huge and really nice feature that we worked hard on. There will be a separate article about it soon! Just subscribe and wait a bit! ;)

Many small nice enhancements

  •  '[' and ']' shortcuts for changing the brush size no longer have hysteresis, and scale smoothly with the brush size
  • Zooming, Rotation and Mirroring now have a floating window showing the current state of the operation. This is highly necessary when working in full-screen mode or without the status bar shown.
  • Pseudo Infinite Canvas will no longer make your canvas too huge. It grows the image by 100% and no more, so you can use this feature to double any dimension of the canvas with a single click.
  • Added "Scalable Smoothing Distance" feature. When using Weighted Smoothing, the distance will be automatically corrected according to your zoom level, so the behavior of the stylus will not alter with changing the canvas zoom.
  • Made the handles of the Transform Tool easier to click on. The clickable area is now twice as wide! Just click!

Kickstarter campaign

And yes, today we started our first donation campaign on Kickstarter! It will last for 30 days only, during which we are going to raise money for the 2.9 release!

Direct link: http://www.krita.org/kickstarter.php

Help us, and together we will make Krita awesome!

Give Krita 2.9 a Kickstart!

After the successful release of Krita 2.8, the advanced open source digital painting application, we're kicking off the work on the next release with a Kickstarter campaign!


Krita 2.8, released on Linux and Windows, has been a very successful release, with hundreds of thousands of downloads. The buzz has simply been insane! And of course, Krita 2.8 really was a very good release, polished, full of productivity enhancing features.


Part of the secret was Dmitry Kazakov's full-time work sponsored by the Krita Foundation, which provided Krita with an insane number of productive and innovative features.


So for Krita 2.9, we are going for a repeat performance! And we’re going to try and double it, too, and have two people work on Krita full-time. Next to Dmitry, there's Sven, who's just finished university. Sven's been working on Krita for about ten years now. That’s the first stretch goal.


And as a super-stretch goal, we intend to port Krita to OS X, too!


Together with our artist community we created a set of goals to work on, ranging from improved compatibility with Photoshop to making the transform tool the most awesome ever seen. Check out the work package for Krita 2.9 on the kickstarter campaign page.

http://www.krita.org/kickstarter.php

June 09, 2014

DIY Ringflash

The world probably needs another DIY ringflash tutorial like it needs a hole in the head. There are already quite a few different tutorials around explaining how to create one…

So here’s mine! :)


At LGM this year I hacked together a quick ringflash using parts I picked up in a €1 store while walking through the city with Rolf (he helped me pick out and find parts - Thank You Rolf!). I built one while I was there because it was way less hassle than trying to bring mine from home all the way to Leipzig, Germany (they don’t really collapse into any sort of smaller size, so they’re cumbersome to transport).



Anyway, I got some pretty strange looks from folks as I was hacking away at the plastic colander I was using. The trick to making and using these DIY ringflashes is to not care what others think ...

Pat David Ringflash DIY LGM Tobias Ellinghaus houz Meet the GIMP
Using the ringflash on Johannes..., by Tobias Ellinghaus

Because you will look strange building it, and even stranger using it. If you can get past that, the results are pretty neat, I think.

Pat David Ringflash DIY GIMP
The results of using the ringflash on Johannes...

There’s a nice even light that helps to fill in and remove harsh shadows across a subject’s face (very flattering for reducing the appearance of wrinkles, for instance). At lower power with other lighting it makes a fantastic fill light as well.

So, after seeing the results I had a few people ask me if I was going to write up a quick guide on building one of these things. I wasn’t intending to originally, but figured it might make for a fun post, so here we are.

Now, normally I would take fancy photos of the ringflash to illustrate how to go about making one, but I realized that it would be hard to account for all the different types, sizes, and styles that one could be made in.

Oh, and more importantly I wasn’t about to try and lug it all the way back to the states. So I left it there (I think Simon from darktable ended up taking it).

So I’ll improvise...

Building a DIY Ringflash


The Parts

The actual parts list is pretty simple. A large bowl, a small bowl, a cup (or tube), and some tape:

Pat David Ringflash DIY GIMP

Pat David Ringflash DIY GIMP

The material can be anything you can easily cut and modify, and that hopefully won’t throw its own color cast. White is best on the inside of the large bowl and the outside of the small bowl and cup. Silver or metal is fine too, but the color temperature will trend cooler.

The main thing I look for are sizes that fit the lens(es) I intend to use:

Pat David Ringflash DIY GIMP

Each of the components needs to have a diameter bigger than my lens to fit through.

I’ll also look for a bowl that has a flat bottom as it usually makes it easier for me to cut holes through it (as well as tape things together).

All the other dimensions and sizes are arbitrary, and are usually dictated by the materials I can find. In Leipzig I used a colander for the large bowl, a cup, and the same small cheap white plastic bowls they served soup in at the party for the smaller bowl*.

* I did not actually use a soup bowl from the party, I just happened to purchase the same type bowl.

The Cuts

The first cut I’ll usually make is to open a hole in the side of the large bowl to allow the flash to poke through. I almost always just sort of wing it and cut it by hand (or with a rotary tool if you have one):

Pat David Ringflash DIY GIMP

It doesn’t have to be placed perfectly in any particular location, though I usually try to place it as close to the bottom of the large bowl as possible.

Once the hole is cut, the flash should fit into place. I try to err on the side of caution and cut on the smaller side just in case. I can always remove more material if I need to; putting it back is harder.

Pat David Ringflash DIY GIMP


Then I’ll usually place the cup into the bowl and trace out its diameter onto the large bowl. When I’m cutting this hole, I try to stay on the smaller side of my mark lines to leave some room to tape the cup into place.

Pat David Ringflash DIY GIMP

I’ll go ahead and cut the bottom of the cup now as well:

Pat David Ringflash DIY GIMP

Pat David Ringflash DIY GIMP

As with the large bowl, I’ll also trace the cup diameter on the small bowl and mark it. I’ll cut this hole a little small as well just in case.

Pat David Ringflash DIY GIMP

Pat David Ringflash DIY GIMP

That’s all there is to cut the parts! Now go grab a roll of tape to tape the whole thing together. If you want to get fancy I suppose you could use some glue or epoxy, but we’re making a DIY ringflash from a bunch of cheap bowls - why get fancy now?

Assembly

It should be self-evident how the whole thing goes together.

One thing I do try to watch for is to not tape things together where the light will be falling. So to tape the cup to the large bowl, I’ll apply the tape from the outside back of the bowl, into the cup.

Pat David Ringflash DIY GIMP

Tape the front (small) bowl into place in the same way. It might not be pretty, but it should work!

Pat David Ringflash DIY GIMP

Result

When all is said and done, this should be sort of what you’re looking at:

Pat David Ringflash DIY GIMP

I’ve made a few of these now, and they honestly don’t take long at all to throw together. If I’m in a store for any reason I’ll at least check out the bowls for sale, just in case there might be something good to play with (especially if it’s cheap).


She may not look like much, but she's got it where it counts, kid.
I've made a lot of special modifications myself.

Using It

If you’ve got the power in your flash, it’s pretty easy to drop the ambient to near black (these were mostly all shot around f/5, 1/250s, ISO 200):

Pat David Ringflash DIY LGM Party ginger coons
Pat David Ringflash DIY LGM Party Claire

Pat David LGM DIY Ringflash Ville GIMP

All the shots I took at the LGM 2014 party in Leipzig used this contraption (either as a straight ringflash, or a quick hand-held beauty dish). The whole set is here on Flickr.

Of course, if you put your subject up against a wall, you’ll get that distinctive halo-shadow that a ringflash throws:

Pat David DIY Ringflash Whitney Wall Pretty Smile Girl
Whitney

Pat David DIY Ringflash
Sirko

I also normally don’t do anything to attach the flash to the bowls, or the entire contraption to my camera. The reason is that it actually makes a half decent beauty dish in a pinch if needed. All you have to do is move the ringflash off to a side (left side for me, as I shoot right-handed):




Conclusion

Well, that’s it. I hope the instructions were clear, and at least somebody out there tries building and playing with one of these (share the results if you do!).

As you can see, there’s not much to it. The hardest part is honestly finding bowls and cups of appropriate size...

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, or make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t it’s not going to affect me writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.


June 06, 2014

Santa Fe Highway Art, and the Digestive Deer

Santa Fe is a city that prides itself on its art. There are art galleries everywhere, glossy magazines scattered around town pointing visitors to the various art galleries and museums.

Why, then, is Santa Fe county public art so bad?

[awful Santa Fe art with eagle, jaguar and angels] Like this mural near the courthouse. It has it all! It combines motifs of crucifixions, Indian dancing, Hermaphroditism, eagles, jaguars, astronomy, menorahs (or are they power pylons?), an angel, armed and armored, attempting to stab an unarmed angel, and a peace dove smashing its head into a baseball. All in one little mural!

But it's really the highway art north of Santa Fe that I wanted to talk about today.

[roadrunner highway art] [horned toad highway art] [rattlesnake highway art] Some of it isn't totally awful. The roadrunner and the horned toad are actually kind of cute, and the rattlesnake isn't too bad.

[rooster highway art] [turkey highway art] On the other hand, the rooster and turkey are pretty bad ...

[rabbit highway art] and the rabbit is beyond belief.

As you get farther away from Santa Fe, you get whole overpasses decorated with names and symbols:
[Posuwaegeh and happy dancing shuriken]

[Happy dancing shuriken] I think of this one near Pojoaque as the "happy dancing shuriken" -- it looks more like a Japanese throwing star, a shuriken, than anything else, though no doubt it has some deeper meaning to the Pojoaque pueblo people.

But my favorite is the overpass near Cuyamungue.

[K'uuyemugeh and digestive deer]

See those deer in the upper right and left corners?

[Cuyamungue digestive deer highway art] Here it is in close-up. We've taken to calling it "the digestive deer".

I can't figure out what this is supposed to tell us about a deer's alimentary tract. Food goes in ... and then we don't want to dwell on what happens after that? Is there a lot of foliage near Cuyamungue that's particularly enticing to deer? A "land of plenty", at least for deer? Do they then go somewhere else to relieve themselves?

I don't know what it means. But as we drive past the Cuyamungue digestive deer on the way to Santa Fe ... it's hard to take the city's airs of being a great center of art and culture entirely seriously.

June 05, 2014

Son of more Fedora.next logos!

Where we left off last time was basically a brain dump of some random ideas. Thank you again for all of the great feedback and commentary around the designs. It really seems that folks are digging the “C” series of logo designs the most – here’s a bunch of iterations of that concept:

00-c-series

A lot of comments focused on the cloud logo not looking quite like a cloud, while the other logos worked pretty well. Other comments talked about how the cloud logo represented ‘scaling up,’ which is a good thing to represent. I poked around a bit with the cloud logo, keeping the vertical design for “scale up,” but varying the heights of the components to suggest a cloud a bit more:

01-cloud-shapes

Here’s those shape variations in context with the other logo designs:

02-cloud-shapes-in-context

Ryan and I talked a little bit about how these shapes are so simple that we could do a lot of cool treatments with them. One issue with making a logo design too ‘cool’ or ‘trendy’ is that the logo tends to get dated quickly. The basic shapes of the C series logo designs, though, are so simple that they could have a timeless quality about them. You could dress them up with a particular treatment and then go back to basics, or use a different treatment to keep up with trends without dating the logo as trends change. We both really like this recent geometric pixelly fill kind of design (there are a few examples in the pixely pinterest board I put together), and Ryan came up with a great workflow to create these textures using the prerelease version of Inkscape from his copr repo. (We need to document and blog that too, of course. :) I promised banas I’d make it happen!)

Anyway, here are the original C series mockups with that kind of treatment:

03-treatment

Well, anyway! That’s just an update on my thinking here using your feedback. banas has also put together a blog post with some great sketches / iterations for the logo, and I suggest you take a look at his ideas and give him some feedback too:

Dreaming up Fedora.next logos

I know he had some computer issues that prevented him from being able to do these up in Inkscape, but he agreed to share his sketches as a work-in-progress – I think this is a great open way of sharing ideas.

Of course, as I hope is clear by now – your ideas and sketches are most certainly welcome as well, and we’d really love it if you riffed off the ideas that have already been posed by myself, Ryan, and banas. I think together we’ll end up with something really awesome. :)

In case you want to have a play with any of the stuff posted here, I’ve uploaded the SVG containing the assets:

http://duffy.fedorapeople.org/fedora-logos/next/fedora-next.5.svg

Enjoy :) Productive feedback welcomed in the comments or on the design-team mailing list.

June 04, 2014

Meet Sarah

Pat David Meet Sarah Headshot Photek Portrait softlighter

Sarah has been shooting with my friend Brian for a while now. He and I recently tried to organize to shoot together, but unfortunately he was called away at the last moment. Which was a bummer, because I had also just purchased a 60” Photek Softlighter II, and was super-eager to put it through its paces...

Luckily Sarah was still good to do the shoot! (There were some initial hiccups, but we were able to sort them out and finally get started).

There were two aspects to obtaining these results I thought it might be nice to talk about briefly here: the lighting and the post work...

The Lighting



Meet Sarah - YN560 firing into the 60" softlighter, camera left, really really close in.

If you’ve ever followed my photographs, I like to keep things simple. This is mostly for two reasons: 1) I’m cheap, and simple is inexpensive. 2) I’m not smart, and simple is easy to understand.

Well, I also happen to really like the look of simply lit images if possible. Some folks can get a little nuts about lighting.

The real creativity is when you can make a simply lit image look good. This is the hard part, I think, and the thing I spend most of my time working to achieve and learn about.

I had previously built myself a small-ish softbox to control my light a bit more, and it has seen quite a bit of use in the time that I’ve had it. It works in a pinch for a neat headshot key light, but I was bumping into the limits of using it due to size. I was mostly constrained to tight-in headshots to keep the light relatively soft. (It was only about 20" square).

So I had been looking for quite some time at a larger modifier. Something I could use for full body images if I wanted, but still keep the nice control and diffusion of a large softbox. Thanks to David Hobby, I finally decided that I simply had to have the Photek Softlighter.

To help me understand faster/better what the light is doing and how it affects a photograph, I try to minimize the other variables as much as I can. Shooting in a controlled environment allows me this luxury, so all these images of Sarah were shot with a single light only. This way I can better understand its contribution to the image if I want to add any other modifiers or lights (without having to worry about nuking the ambient at the same time).

Plus, chiaroscuro!

Postprocessing



For these images I wanted to try something a little different for a change. I wanted to have a consistent color grading for all the images from the session, and I wanted to shoot for something a bit softer and subdued.

I followed my usual workflow (documented here and here).

Here was the original image straight out of the camera:


Straight out of the camera

In my Raw processor, I adjusted exposure to taste, increased blacks just slightly and added a touch of contrast. I dropped the vibrance of the image down a bit as well. I did this because I knew later I would be toning with some curves and didn’t want things to get too saturated.


After exposure, contrast, and vibrance adjustments

I brought the image into GIMP at this point for further pixel-pushing and retouching. As usual, I immediately decomposed the image into wavelet scales so I could work on individual frequencies. There wasn’t much that needed to be done for skin work, just some (very) slight mid-frequency smoothing. Spot healing here and there for trivial things, and I was done.

One small difference in my workflow is that I’ve switched from using a Gaussian Blur on the frequency layers to using a Bilateral Blur instead. I feel it preserves tonal changes much better. It can be found in G'MIC under Repair → Smooth [bilateral].


After some mid-frequency smoothing and spot healing

At this point, it was color toning time! (alliteration alert!)

I ended up applying a portra-esque color curve against the image, and reduced its opacity to about 50%. This gave me the nice tones from the curve adjustment, but didn’t throw everything too far red. Just a sort of delicate touch of portra...


A touch of portra-esque color toning

At this point I did something I don’t normally do that often. I wanted to soften the colors a bit more, and skew the overall feeling just slightly warmer and earthy-toned. Sarah has very pretty auburn hair, the portra adjusted the skin tones to be pleasing, and she had on a white shirt, with a gray/brown sweater.

So I added a layer over the entire image and filled it with a nice brown/orange shade. If you’re familiar at all with teal/orange hell, the shade is actually a much darker and unsaturated orange, but we’re not going to cool the shadows separately. We’re just going to wash the image slightly with this brown-ish shade.

This is the shade I used.
It’s #76645B in HTML notation.

I set the layer opacity for this brown layer down really low to about 9%. This took the edge just ever so slightly off the highlights, and lifted the blacks a tiny bit. Here’s the final result before cropping:

Final result (Compare Original)

I feel like it softens the contrast a bit visually, and enhances the browns and reds nicely. After sharpening and cropping I got the final result:


I purposefully didn’t go into too much detail because I’ve written at length about each of the steps that went into this. The most important thing I think in this case is finding a good color toning that you like (the post about color curves for skin is here). The color layer over the image is new, but it was honestly done through experimentation and taste. Try laying different shades at really low opacity over your image to see how it changes things! Experiment!

The rest of my GIMP tutorials can be found here:
Getting Around in GIMP



More Sarah

Now that we’ve had a chance to shoot together, I’m hoping we can continue to do so. In the meantime, here’s a few more images from that day. (The set is also on Google+)






I love the expression on her face here




June 03, 2014

Cicada Rice Krispies

[Cicadas mating] Late last week we started hearing a loud buzz in the evenings. Cicadas? We'd heard a noise like that last year, when we visited Prescott during cicada season while house-hunting, but we didn't know they had them here in New Mexico. The second evening, we saw one in the gravel off the front walk -- but we were meeting someone to carpool to a talk, so I didn't have time to race inside and get a camera.

A few days later they started singing both morning and evening. But yesterday there was an even stranger phenomenon.

"It sounds like Rice Krispies out in the yard. Snap, crackle, pop," said Dave. And he was right -- a constant, low-level crackling sound was coming from just about all the junipers.

Was that cicadas too? It was much quieter than their loud buzzing -- quiet enough to be a bit eerie, really. You had to stop what you were doing and really listen to notice it.

It was pretty clearly an animal of some kind: when we moved close to a tree, the crackling (and snapping and popping) coming from that tree would usually stop. If we moved very quietly, though, we could get close to a tree without the noise entirely stopping. It didn't do us much good, though: there was no motion at all that we could see, no obvious insects or anything else active.

Tonight the crackling was even louder when I went to take the recycling out. I stopped by a juniper where it was particularly noticeable, and must have disturbed one, because it buzzed its wings and moved enough that I actually saw where it was. It was black, maybe an inch long, with narrow orange stripes. I raced inside for my camera, but of course the bug was gone by the time I got back out.

So I went hunting. It almost seemed like the crackling was the cicadas sort of "tuning up", like an orchestra before the performance. They would snap and crackle and pop for a while, and then one of them would go snap snap snap-snap-snap-snapsnapsnapsnap and then break into its loud buzz -- but only for a few seconds, then it would go back to snapping again. Then another would speed up and break into a buzz for a bit, and so it went.

One juniper had a particularly active set of crackles and pops coming from it. I circled it and stared until finally I found the cicadas. Two of them, apparently mating, and a third about a foot away ... perhaps the rejected suitor?

[Possible cicada emergence holes]
Near that particular juniper was a section of ground completely riddled with holes. I don't remember those holes being there a few weeks ago. The place where the cicadas emerged?

[Fendler's Hedgehog Cactus flower] So our Rice Krispies mystery was solved. And by the way, I don't recommend googling for combinations like cicada rice krispies ... unless you want to catch and eat cicadas.

Meanwhile, just a few feet away from the cicada action, a cactus had sprung into bloom. Here, have a gratuitous pretty flower. It has nothing whatever to do with cicadas.

Update: in case you're curious, the cactus is apparently called a Fendler's Hedgehog, Echinocereus fendleri.

June 02, 2014

Some notes from Krita Sprint 2014

Krita people in Deventer:
Sven, Lukas, Timothee and Steven
Almost two weeks have passed since we returned from the sprint, but we are only now beginning to sort out and formalize all the data and notes we took during the meeting. The point is, this time (as during the last sprint in 2011) we had three painters with us, who gave us an immeasurable amount of input about how they use Krita and what can be improved. This was a case where criticizing and complaining were exactly what we needed :)

So after having general discussions about Krita's roadmap on Saturday [0], we devoted Sunday to listening to the painters. Wolthera, Steven and Timothée gave us short sessions during which they painted their favorite characters, and we could watch and note all the use cases and small inconveniences they face when working with Krita. The final list of our findings became rather long :), but it will surely have an immense impact on our future. We saw not only wishes and bugs; we also had several revelations, things we could not even have imagined before. Here is a brief list of them.

Tablet-only workflow

Yeah, not all painters have a keyboard! ;) Sometimes a painter uses a tablet with a built-in digitizer for painting. In such a case the workflow completely changes!
Two toolbars are too few! More floating toolbars!
  1. The painter may decide to reassign the two stylus buttons to pan and zoom gestures, since he has no access to the usual Spacebar shortcut.
  2. The toolbars! Yes, the toolbars are the precious space where the painter will put all the actions he needs. And there should be many configurable toolbars. The problem we have now is that there can be only one toolbar of a specific type, and every action belongs to its own toolbar. The user should be able to create many general-purpose toolbars and put/group actions however he likes. I'm not sure this is possible to implement within the current KDE framework, but we must investigate it!
  3. Even when using a tablet, some painters cheat a bit and use gaming keypads to access the most needed actions, like the pop-up palette, color picker and others. Steven came to Deventer with his Razer Nostromo device, and it seems he is quite comfortable with it.
Razer Nostromo. Not for gaming :)

Revelations

Though it might sound funny, some of Krita's features really surprised me! I never knew we could use good old tools this way.
  1. Experiment Brush. Have you ever thought that this brush might be an ideal tool for creating shadows in comic-style pictures? Well, it is ;)
  2. Group Layers + Inherit Alpha layer option. I could never have imagined that the Inherit Alpha feature can be combined with Group Layers! If you use Inherit Alpha within a group, it'll use only the colors of this group! This can be used for filling/inking parts of the final image: just move a part into a separate group, activate Inherit Alpha and you can easily fill/outline your part of the image!
  3. Color Picker. This is a trivial tool, of course. But if you assign it to the second stylus button, it becomes an efficient tool for mixing colors right on the canvas! Paint, pick and mix colors as if you were using a real-world brush.
Well, there were many other issues we found during the sessions. There were also some bugs, but their severity was really minor, especially in comparison to the sessions we did in 2011, when David Revoy had to restart Krita several times due to crashes and problems... We have made a lot of progress since Krita 2.4!

Yeah, it was a really nice and efficient sprint! Thanks Boudewijn and Irina for hosting the whole team in their house and KDE e.V. for making all this possible!


[0] - see Boud's report about it

Last week in Krita — week 21&22

Last weekend we celebrated a Krita sprint in Deventer, an event that reunites Krita’s developers to talk about, coordinate and, if possible, code the next steps in Krita’s roadmap. The discussion themes were varied and ranged from the Krita Foundation’s status to specific software features such as the OpenGL default setting. Other topics included:

  • 2.9 main features and code emphasis, and 3.0 roadmap priorities. In general we laid out a plan for how to code everything needed for a successful 2.9 release and a solid 3.0 version built on the new, powerful Qt 5.
  • Multiple document view. An informative session on the current code status of the branch and a discussion of the technical complexities of making it happen.
  • Text and vector tools enhancement was a big topic on the table. Big changes and improvements are now set to happen from 3.0 onward, building slowly towards the future.
  • Translation: We established the work that needs to be done to standardize all translatable strings and make internationalization consistent across languages.
  • Discoverability of features within the app, and tool tips. A decision was made about this: create a simple system to allow users to submit help tool tips for the community.
  • OpenGL default settings: Krita has two painting engines; both work okay, but the non-GL engine is very slow on Windows. It was decided to turn the OpenGL engine on by default and, if it is not supported, fall back to the CPU-based one.

This week’s new features:

  • Index colors filter. (Manuel Riecke)
  • Allow to lock docker state. (Boudewijn Rempt)
  • Improved gradient editor: save, rename, edit. (Sven Langkamp)
  • Show a floating message when rotating, mirroring and zooming the canvas. (Dmitry Kazakov)

Index colors filter

This filter allows you to reduce the number of colors displayed using a color ramp’s map.

Filter: Index Colors dialog

Filter: Index Colors variants

Some uses for the filter include HD index painting as in the video below.

{youtube}v1Z__mSfo8s{/youtube}

Lock docker state

It’s now possible to lock the dockers in position to avoid any modification. This solves the problem of accidentally moving the dockers while adjusting layers and options.

Dock lock position

Improved gradient editor

The gradient editor received some much-needed improvements. This makes creating gradients a lot easier and adds the option to rename and edit them. It's now much faster to get exactly the gradient you want.

Gradient Editor

Floating messages

Rotating, zooming and mirroring the canvas will now trigger a nice message at the top of the screen with the current degree, zoom level or ON/OFF state. The message stays for a second before fading out. Neat!

Floating messages

General bug fix and features

  • FIX #335438: Fix saving on tags for brushes. (Sven Langkamp)
  • Fix crash in imagesplit dialog. (Boudewijn Rempt)
  • FIX #335382: Fix segfault in image docker on ARM. (Patch: Supersayonin)
  • Improved gradient editor. Create segmented gradients, edit, rename. (Sven Langkamp)
  • Show the current brush in the statusbar, CCBUG:#332801. (Boudewijn Rempt)
  • Disable the lock button if the docker is set to float. (Boudewijn Rempt)
  • Do not miss the dot when appending the extension. (Patch: Tobias Hintze)
  • FIX #335298: Fix cut and paste error. (Boudewijn Rempt)
  • Implement internet updates for G'MIC (Compile G’MIC with zlib dependency). (Lukáš Tvrdý)
  • FIX #331358: Fixed rotation tablet sensor on Windows and make rotation on Linux be consistent with rotation on Windows. (Dmitry Kazakov)
  • FIX #331694: Add another exr mimetype “x-application/x-extension-exr”, to make the exr system more robust. (Boudewijn Rempt)
  • FIX #316859: Fix Chalk brush paint wrong colors on saturation ink depletion. (Dmitry Kazakov)
  • Index Colors Filter: Add option to reduce the color count. (Manuel Riecke)
  • Fix “modifier + key” canvas input shortcuts not working. (Arjen Hiemstra)
  • FIX #325928: Allow to lock the state of the dockers. (Boudewijn Rempt)
  • Reduce size of mirror handles and other minor tweaks to the mirror axis handles. (Arjen Hiemstra)
  • Let the user select a 1px brush with a shortcut. (Dmitry Kazakov)
  • FIX #325295: Make changing the brush size with the shortcuts more consistent. (Dmitry Kazakov)
  • FIX #334982: Fix a hang-up when opening the filter dialog twice. (Dmitry Kazakov)
  • Enable opengl by default. (Boudewijn Rempt)
  • FIX #334371. (Boudewijn Rempt)
  • Add rename composition to the right click menu. (Sven Langkamp)
  • FIX #334826: Fix loading ABR brushes. (Boudewijn Rempt)
  • Add ability to update a composition CCBUG:#322159. (Sven Langkamp)
  • Activate compositions with double click. (Sven Langkamp)

Code cleanup and optimizations.

Following the previous week’s efforts, Boudewijn Rempt kept improving code efficiency in many areas. With the hard work of Dmitry Kazakov, Stuart Dickson, Lukáš Tvrdý and Sven Langkamp the code keeps evolving: ensuring compilation on systems with Qt 4.7, disabling some portions of Qt if the version is too old (to make Krita work in many distros), updating G'MIC to version 1.5.9.0, improving OpenGL crash detection, and renaming brushes to “brush tips” to avoid user confusion between presets and brushes.

Krita Sketch and Gemini

This week in Sketch and Gemini: Dan Leinir Turthra Jensen added support for templates and a small foldout in the brush tool to allow selecting predefined sizes. Stuart Dickson fixed minor UI inconsistencies, re-ordering the display of the new documents window, updating mirror tool bar actions and expanding template categories. Dmitry Kazakov fixed color space related problems. Arjen Hiemstra tweaked the size of the mirror mode handles to make them more comfortable to use and made other enhancements to the general UI look and feel. Bug fixes are as follows:

  • Fix hsvF -> hsv. This caused a crash when loading Sketch and Gemini. (Dmitry Kazakov)
  • FIX #332864: Use a lighter colour for the x in the unchecked checkbox. (Arjen Hiemstra)
  • FIX #332860: Cleanup the Tool panel and tool config pages. (Arjen Hiemstra)

Krita Gemini/Sketch with templates

Animation branch

A preview of Somsubhra’s work can be seen in the following video: working animation with layers.

{youtube}3VKcvJT8rQ8{/youtube}

June 01, 2014

Release notes: May 2014

What’s the point of releasing open-source code when nobody knows about it? In “Release Notes” I give a round-up of recent open-source activities.

angular-rt-popup (New, github)

A small popover library, similar to what you can find in Bootstrap (it uses the same markup and CSS). Does some things differently compared to angular-bootstrap:

  • Easier markup
  • Better positioning and overflows
  • Correctly positions the arrow next to the anchor
angular-rt-popup


grunt-git (Updated, github)

  • Support for --depth in clone.
  • Support for --force in push.
  • Multiple file support in archive.


angular-gettext (Updated, github, website)

Your favorite translation framework for Angular.JS gets some updates as well:

  • You can now use $count inside a plural string as the count variable. The older syntax still works though. Here’s an example:
    <div translate translate-n="boats.length" translate-plural="{{$count}} boats">One boat</div>
  • You can now use the translate filter in combination with other filters:
    {{someVar | translate | lowercase}}
  • The shared angular-gettext-tools module, which powers the grunt and gulp plugins, is now considered stable.

Episode 199: G’MIC

Download the Video! (33:10, 63MB)

No Companion File!

Watch at YouTube

G’MIC is a lot of things that do stuff with images. You get a stand-alone program with a complete image manipulation and analysis programming language, an online service, and a GIMP plugin that gives easy access to (nearly) the whole package. Don’t get G’MIC from sources other than their pages – the update cycle is so fast that package maintainers can’t keep up. The version recorded in the video had already been outdated twice by the time of publication.

G’MIC is mostly developed by Dr. David Tschumperlé of the GREYC, an institute at the University of Caen in France that is also part of the Centre National de la Recherche Scientifique (National Center for Scientific Research). So definitely no hobbyist project.

I met David and his friend and colleague Jérôme – also a scientist AKA Dr. Jérôme Boulanger – at the LGM in Leipzig. Very nice guys! See Pat David’s wonderful blog post for more images and a report.

You can find information about G’MIC at their site in the very good and complete manual, on YouTube from G’MIC and David, at Pat David’s blog and all over the Internets.

The second part of the video is a promotion for Ramón Miranda‘s very good training DVD “Muses”. It goes through the whole process of learning Krita – a graphics program for Linux and Windows, which is much better for painters than GIMP – up to finishing a real digital painting. You can buy the DVD at the Krita shop; 32.50€ is comparable to other DVDs of this kind, and the proceeds support the Krita Foundation.

I got this one for free – and I will give it away! If you want to have it, write a comment to this blog post before Episode 200 is published. Get your mail address right (it will only be visible to me) and mention that you want the DVD. After the deadline I’ll have some supposedly innocent children draw a winner.

The TOC

00:00:00 Start of video
00:01:10 Installing the G’MIC Plugin for GIMP
00:03:00 Installing and using G’MIC as a stand alone program
00:10:00 G’MIC Online service
00:10:40 Using the G’MIC GIMP Plugin
00:13:40 Pencil Drawing emulation as an example
00:18:10 Film emulations and grain as an example
00:25:15 Spectral filters – Fourier included
00:26:20 Plotting Graphs with G’MIC
00:26:50 Conclusion
00:29:20 Ramon Miranda’s Krita tutorial DVD
00:33:10 EOF

Meet the GIMP Video Podcast by Rolf Steinort is licensed under a Creative Commons Attribution 4.0 Unported License.
Permissions beyond the scope of this license may be available at http://meetthegimp.org.


May 30, 2014

Punctuation Reveals Truth in advertising

This ad appeared in one of the free Santa Fe weeklies. It's got to be one of the funniest misuses of quotes I've seen.

Does she not know that putting quotes around something means that you're quoting someone, you're introducing an unfamiliar word or phrase, or you're trying to draw attention to the quoted phrase and cast doubt on it or make fun of it? That third use, by the way, is called scare quotes. Like you'd see in a phrase like this:

One expects lawyers to have a good command of English, and to pay attention to detail, so ... what should we think?

"Injured" isn't an unfamiliar word, so it has to be either the first or third use. And whether she's soliciting clients who only say they're injured, or she's casting doubt on the injury, it's hard not to read this as an offer to help people pretend to be injured to collect a payout.

Which I'm sure happens all the time ... but I don't think I've previously seen an ad implying it so strongly.

May 29, 2014

Marinating my brain on Fedora.next logos

Yep. Here’s a snapshot.

fedora-next.3

Inkscape SVG Source

This is the design methodology of ‘throw a bunch of things at the wall and see what sticks.’

May 28, 2014

Iterating on Fedora.NEXT Brand Concept #1

Last week I shared a concept for the Fedora.next logos with you, and I received quite a lot of feedback. Thank you for that. :) The feedback I received mostly clustered along these lines in some form or another:

  • The server logomark doesn’t read as a server to everyone – it’s too rounded.
  • The workstation logomark looks too much like a flip phone to read as a laptop.

Okay. I thought I might take that feedback and fart around with the designs some more, and record a bit of a stream of consciousness of what the heck I did so you can follow along and see where it’s coming from. I opened up the SVG source of the original designs in Inkscape and poked around a bit.

Making the server more… server-y?

server-ba

So the thing is, the initial stab at this concept was made using minimally modified versions of the Fedora logo bubble. That’s why the rounding was so extreme. So for the server, it was pretty simple to just tone down the rounding, and I think it reads better, even if not as well as it ideally should.

More cowbell for the workstation

workstation-ba

This one required a bit more iteration.

As with the server, the first step was to decrease the rounding so the edges of the shapes were more square (save for one corner, to try to keep a bit of unity with the other logotypes as well as link back to the main Fedora logo.) I actually quite prefer C2, but when it’s lined up with the other Fedora.next product logos, it’s much taller so it doesn’t hang together with the set as well. This maybe isn’t a huge problem, but having a similar basic shape across the product logos would definitely make life easier down the line (e.g., being able to plug the logos into similar templates for web or print without having to modify the template to fit the taller logo.) Anyway. With this one, I fiddled around a lot to make sure there was a nice curvy negative space on the right side of the ‘laptop,’ to fit in with the Fedora logo-y feel. Too tall, though.

Okay, so on to the next iteration: C3 slims down the height of the screen and keyboard and pulls the two shapes closer together. The font was scaled down just a little bit as well. It still kind of has a weird look – I mean, it’s stylized in a way that hangs with the others, but I wanted to see if I could get something a bit more like a traditional laptop shape.

C4 is the next iteration then. It’s pretty much just C3 but the angling is changed to emulate a laptop. The overall styling across these logo marks is flat components layered on top of one another with shadows, like paper cutouts or something. I don’t know if skewing the shapes to indicate a sense of perspective like this breaks the look. It’s just an idea, anyway.

Iteration 2 compared to Iteration 1

it1-v-it2

Okay, so here’s the before and after, showing the logos from last week and this iteration.

These are just mockups. More work needed. :)

These are all just iterations on the same concept though. There’s certainly room for other concepts for the logos here; I’m hoping to see at least another one for us to evaluate before we settle on anything.

Your ideas are welcome!

Ryan posted his iterations on the Cloud logo today – which is totally awesome, and I hope serves as further invitation for you to post your own iterations and ideas for these logos. They are more than welcome!!

Also, my bad. Seriously.

I forgot to post the link to my sources, which Ryan pointed out when he wanted to start working on a logo iteration. Here they are:

Now if you want to poke around with this design, you can more easily. Of course, it’s totally cool to just throw this idea out the window and start fresh with something else.

This made my day – maybe it’ll make yours, too.

Greg DeKonigsberg posted a link to these kids – apparently around 9 years old – that cover Rammstein, among other things. Their YouTube channel is awesome. Here they are covering Rammstein’s Alter Mann. This has nothing to do with Fedora logos. But neither do pandas.

badges

AppData progress and the email deluge

In the last few days, I’ve been asking people to create and ship AppData files upstream. I’ve:

  • Sent 245 emails to upstream maintainers
  • Opened 38 launchpad bugs
  • Created 5 gnome.org bugs
  • Opened 72 sourceforge feature requests
  • Opened 138 github issues
  • Created 8 bugs on Fedora trac
  • Opened ~20 accounts on random issue trackers
  • Used 17 “contact” forms

In doing this, I’ve visited over 600 upstream websites, helpfully identifying 28 that are stated as abandoned by their maintainer (and thus removed from the metadata). I’ve also blacklisted quite a few things that are not actually applications and not suitable for the software center.

I’ve deliberately not included GNOME in this sweep, as a lot of the core GNOME applications already have AppData and most of the gnomies already know what to do. I also didn’t include XFCE applications, as XFCE has agreed on the mailing list to adopt AppData and is already in the process of doing so. KDE is just working out how to merge the various files created by Matthias, and I’ve not heard anything from LXDE or MATE. So, I only looked at projects not affiliated with any particular desktop.

So far, the response has been very positive, with at least 10% of the requests having been actioned, and some projects even doing new releases that I’ve been slowly uploading into Fedora. Another ~10% of requests are acknowledgments from maintainers that they would do this sometime before the next release. I have found a lot of genuinely interesting applications in my travels, and a lot of junk. The junk is mostly unmaintained, and so my policy of not including applications that have not had an upstream release in the last 5 years (unless they have AppData manually added by the distro packager) seems to be valid.

At least 5 of the replies have been very negative, e.g. “how dare you ask me to do something — do it yourself” and things like “Please do not contact me again – I don’t want any new users“. The vast majority of people have not responded yet — so I’m preparing myself for a deluge over the next few weeks from the people that care.

My long-term aim is for Fedora 22 to only show applications that ship AppData, so it seemed only fair to contact the various upstream projects about an initiative they’re probably not familiar with. If we don’t get > 50% of applications in Fedora providing the extra data, we’ll have to reconsider such a strong stance. So far we’ve reached over 20%, which is pretty impressive for a standard I’ve been pushing for such a short amount of time.

So, if you’ve got an email from me, please read it and reply — thanks.

May 27, 2014

The $30,429,899 Photograph

So, here is a view of the most expensive image ever (that doesn't really exist - except for now because I made it):


This is an amalgam of 16 (out of 18) of the most expensive photographs ever sold, all at once (based on the list at Wikipedia).

I averaged all of the images in ImageMagick, and brought them into GIMP for some level adjustments and minor processing.
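
For anyone who wants to reproduce the averaging step, here is a minimal sketch of the same idea in Python with PIL and numpy – my own illustration, not the actual ImageMagick workflow, and the filenames are placeholders:

import numpy as np
from PIL import Image

# Filenames are placeholders -- use the full set of 16 images.
files = ["photo01.jpg", "photo02.jpg", "photo03.jpg"]

# All images must be resized to the same dimensions before averaging.
stack = np.stack([np.asarray(Image.open(f).convert("RGB"), dtype=np.float64)
                  for f in files])
mean = stack.mean(axis=0)                  # per-pixel, per-channel average
Image.fromarray(mean.astype(np.uint8)).save("average.jpg")

The level adjustments afterwards matter: averaging many unrelated images clusters everything around mid-gray, so stretching the contrast is what brings the ghostly shapes back out.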

I am only missing two images, for which I could not find suitable-resolution versions. Those two were Andreas Gursky's "99 Cent II Diptychon" (2001), and Dmitry Medvedev's "Tobolsk Kremlin" (2009).

The rest of the images, at auction time, had a staggering value of $30,429,899! (I'll sell you a print of this for a bit less...) :D Of course, I averaged the images to produce this, so perhaps it's better to show the averaged price of $1,901,868...

The 16 images blended together are:

Andreas Gursky – Rhein II (1999) – $4,338,500
Cindy Sherman – Untitled #96 (1981) – $3,890,500
Jeff Wall – Dead Troops Talk (A vision after an ambush of a Red Army patrol, near Moqor, Afghanistan, winter 1986) (1992) – $3,666,500
Edward Steichen – The Pond-Moonlight (1904) – $2,928,000
Cindy Sherman – Untitled #153 (1985) – $2,700,000
Unknown – Billy the Kid tintype portrait (1879–80) – $2,300,000
Edward Weston – Nude (1925) – $1,609,000
Alfred Stieglitz – Georgia O'Keeffe (Hands) (1919) – $1,470,000
Alfred Stieglitz – Georgia O'Keeffe Nude (1919) – $1,360,000
Richard Prince – Untitled (Cowboy) (1989) – $1,248,000
Richard Avedon – Dovima with elephants (1955) – $1,151,976
Edward Weston – Nautilus (1927) – $1,082,500
Jeff Wall – Untangling (1994) – $1 million AUD
Eugène Atget – Joueur d'Orgue (1898–1899) – $686,500
Robert Mapplethorpe – Andy Warhol (1987) – $643,200
Ansel Adams – Moonrise, Hernandez, New Mexico (1948) – $609,600

After staring at it for a bit, I actually started to notice a Conehead looking Andy Warhol in the middle. In fact, now I can't stop seeing it.

May 25, 2014

OPW

Wow, Philip. OPW is a detriment to GNOME development in the same way an espresso or electronic music is. You may not appreciate its catalytic effect on great contributions, but blaming it for being one of the reasons why our developer story is suboptimal is very disrespectful to the people responsible for the program.

I am amused when poor developer workflow immediately becomes “gnome terminal lacks transparency” (and it being the design team’s decision), but reading this sort of lunacy on Planet GNOME is sad.

Raspberry Pi Motion Camera: Part 2, using gphoto2

I wrote recently about the hardware involved in my Raspberry Pi motion-detecting wildlife camera. Here are some more details.

The motion detection software

I started with the simple and clever motion-detection algorithm posted by "brainflakes" in a Raspberry Pi forum. It reads a camera image into a PIL (Python Imaging Library) Image object, then compares bytes inside that Image's buffer to see how many pixels have changed, and by how much. It allows for monitoring only a test region instead of the whole image, and can even create a debug image showing which pixels have changed. A perfect starting point.
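
To make that concrete, here is a minimal sketch of the pixel-difference idea – my paraphrase, not brainflakes' actual code – comparing the green channel of two same-sized PIL images and counting pixels that changed by more than a threshold:

from PIL import Image

def count_changed_pixels(img1, img2, threshold=20):
    """Count pixels whose green channel differs by more than threshold.
    Assumes both images have the same dimensions."""
    buf1 = img1.convert("RGB").load()
    buf2 = img2.convert("RGB").load()
    width, height = img1.size
    changed = 0
    for x in range(width):
        for y in range(height):
            # Compare the green channel only; it carries most of the luminance.
            if abs(buf1[x, y][1] - buf2[x, y][1]) > threshold:
                changed += 1
    return changed

Motion is then declared whenever the count between successive frames exceeds some sensitivity value, and that becomes the trigger for saving a real photo.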

Camera support

As part of the PiDoorbell project, I had already written a camera wrapper that could control either a USB webcam or the pi camera module, if it was installed. Initially that plugged right in.

But I was unhappy with the Pi camera's images -- it can't focus closer than five feet (though a commenter on my previous article pointed out that it's possible to break the seal on the lens and refocus it manually). Without refocusing, the wide-angle lens means that a bird five feet away is pretty small, and even when you get something in focus the images aren't very sharp. And a web search for USB webcams with good optical quality was unhelpful -- the few people who care about webcam image quality seem to care mostly about getting the widest-angle lens possible, the exact opposite of what I wanted for wildlife.

[Motion detector camera with external  high-res camera] Was there any way I could hook up a real camera, and drive it from the Pi over USB as though it were a webcam? The answer turned out to be gphoto2.

But only a small subset of cameras are controllable over USB with gphoto2. (I think that's because the cameras don't allow control, not because gphoto doesn't support them.) That set didn't include any of the point-and-shoot cameras we had in the house; and while my Rebel DSLR might be USB controllable, I'm not comfortable about leaving it out in the backyard day and night.

With gphoto2's camera compatibility list in one tab and ebay in another, I looked for a camera that was available, cheap (since I didn't know if this was going to work at all), and controllable. I ordered a used Canon A520.

As I waited for it to arrive, I fiddled with my USB-or-pi-camera to make a start at adding gphoto2 support. I ended up refactoring the code quite a bit to make it easy to add new types of cameras besides the three it supports now -- pi, USB webcam, and gphoto2. I called the module pycamera.

Using gphoto2

When the camera arrived, I spent quite a while fiddling with gphoto2 learning how to capture images. That turns out to be a bit tricky -- there's no documentation on the various options, apparently because the options may be different for every camera, so you have to run

$ gphoto2 --set-config capture=1 --list-config
to get a list of options the camera supports, and then, for each of those options, run
$ gphoto2 --get-config name [option]
to see what values that option can take.
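
Once the right options are known, driving the camera from a script is mostly a matter of shelling out to gphoto2. A minimal sketch of that idea – not the actual pycamera module, and the filename is just an example:

import subprocess

def capture_photo(filename="capture.jpg"):
    """Trigger the attached camera and download the image to filename."""
    subprocess.check_call([
        "gphoto2",
        "--capture-image-and-download",
        "--filename", filename,
        "--force-overwrite",   # replace an existing file of the same name
    ])
    return filename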

Dual-camera option

Once I got everything working, the speed and shutter noise of capturing made me wonder if I should worry about the lifespan of the Canon if I used it to capture snapshots every 15 seconds or so, day and night.

Since I still had the Pi cam hooked up, I fiddled with the code so that I could use the Pi cam to take the test images used to detect motion, and save the real camera for the high-resolution photos when something actually changes. It saves wear on the more expensive camera, and it's certainly a lot quieter that way.
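
The logic is roughly this – a sketch, not the real code; pi_cam and canon are hypothetical stand-ins for the pycamera wrappers, and count_changed_pixels is the motion-test sketch from earlier:

def check_and_capture(pi_cam, canon, prev_frame, sensitivity=300):
    cur_frame = pi_cam.take_test_image()   # low-res, silent, no shutter wear
    if count_changed_pixels(prev_frame, cur_frame) > sensitivity:
        canon.take_photo()                 # high-res, noisy, used sparingly
    return cur_frame                       # becomes prev_frame next time round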

Uploading

To get the images off the Pi to where other computers can see them, I use sshfs to mount a filesystem from another machine on our local net.

Unfortunately, sshfs on the pi doesn't work quite right. Apparently it uses out-of-date libraries (and gives a warning to that effect). You have to be root to use it at all, unlike newer versions of sshfs, and then, regardless of the permissions of the remote filesystem or where you mount it locally, you can only access the mounted filesystem as root.

Fortunately I normally run the motion detector as root anyway, because the picamera Python module requires it, and I've just gotten in the habit of using it even when I'm not using python-picamera. But if you wanted to run as non-root, you'd probably have to use NFS or some other remote filesystem protocol. Or find a newer version of sshfs.

Testing the gphoto setup

[Rock squirrel using Raspberry Pi camera] For reference, here's an image using the previous version of the setup, with the Raspberry Pi camera module. Click on the image to see a crop of the full-resolution image in daylight -- basically the best the camera can do. Definitely not what I was hoping for.

So I eagerly set up the tripod and hooked up the setup with the Canon. I had a few glitches in trying to test it. First, no birds; then I discovered Dave had stolen my extension cord -- but not until after the camera's batteries needed recharging.

A new extension cord and an external power supply for the camera, and I was back in business the next day.

[Rock squirrel using Raspberry Pi camera] And the results were worth it. As you can see here, using a real camera does make a huge difference. I used a zoom setting of 6 (it goes to 12). Again, click on the image to see a crop of the full-resolution photo.

In the end, I probably will order one of the No-IR Raspberry pi cameras, just to have an easy way of seeing what sorts of critters visit us at night. But for daylight shots, an external camera is clearly the way to go.

The scripts

The current version of the script is motion_detect.py and of course it needs my pycamera module. And here's documentation for the motion detection camera.

May 23, 2014

Getting Around in GIMP - G'MIC Inpainting (Content Aware Fill)

One of the advantages of hanging around people smarter than me is that I get to see some really neat things as they are being developed. Lucky for me, +David Tschumperlé lets me see interesting things that they are hacking on over at G'MIC.

Of course, if you’re reading me occasionally you’ll remember that last time I got a chance to work on something with G'MIC, we ended up making some fun film emulation presets. This time I’m not involved other than to play, and it’s really neat!

It is the latest work on the patch-based inpainting algorithm.

The rest of my GIMP tutorials can be found here:
Getting Around in GIMP
My previous look at using the Resynthesizer/Heal Selection plug-in:
Getting Around in GIMP - Heal Selection (Resynthesizer)





I had previously written about using the Resynthesizer plug-in (with the Heal Selection script) for GIMP to produce results similar to Photoshop’s “Content Aware Fill”, but I think at this point I’m going to switch over to using G'MIC full-time for this type of work.

What the algorithms basically do is take a given region in an image that needs to be replaced, search around that region for textures to fill it with, determine what works well, then fill it in (see the previous post for a visual explanation of this).


Using Resynthesizer results (mouseover to see.)

Well, the folks over at G'MIC have been working hard on improving their algorithm for doing this type of inpainting, focusing on results and speed. Let’s have a look...

G'MIC Inpainting

I’m using some examples from my previous post on Resynthesizer so we can compare the results. To access the command, it will be in G'MIC:

Filters → G'MIC
then,
Repair → Inpaint [patch-based]

A major difference between Inpaint and Resynthesizer is the ability to fine-tune the algorithm for your image. Resynthesizer only really has a couple of parameters - how big of a search area to use around your selection, and to restrict the texture search to all-around, sides, or top/bottom.

By contrast, here are the parameters for the Inpainting [patch-based] algorithm:


Don’t let it scare you away just yet...

In speaking with David, I learned that the primary parameters one would use to adjust the results are the Patch size, Lookup size, and Blend size.

To experiment, I just created a duplicate of my base layer, made a freehand selection around my object to remove, sharpened the selection, and filled it with bright red (to match the mask color indicated in the G'MIC window):


Duplicate layer with red mask over object to remove.

So, let’s have a quick run with the same selection as above, but with the default settings to see where things stand:


Inpainting results with default parameter values.

Not too bad with the defaults. Not too good either, though.

As I said earlier, what’s really nice about the G'MIC Inpainting vs. Resynthesizer is that I can tune the parameters to my liking...

Tweaking

As I understand it from David, the Patch size parameter adjusts the size of the area that will be copied from the surrounding texture to place into the region. The Lookup size adjusts how far away from the inpaint region to look for new textures (similar to “sampling width” in Resynthesizer), and the Blend size adjusts how big of a blend to allow when patching.
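
To make those three knobs concrete, here is a drastically simplified sketch – my illustration, not G'MIC's actual algorithm – of the patch-matching core this family of algorithms is built around:

import numpy as np

def find_source_patch(image, known, y, x, half=3, lookup=16):
    """image: HxWx3 float array; known: HxW bool (False = pixel needs filling).
    Return the center of the fully-known nearby patch that best matches
    the already-known pixels around (y, x). Assumes (y, x) is at least
    `half` pixels away from the image border."""
    h, w = known.shape
    target = image[y - half:y + half + 1, x - half:x + half + 1]
    valid = known[y - half:y + half + 1, x - half:x + half + 1]
    best_err, best_center = None, None
    for cy in range(max(half, y - lookup), min(h - half, y + lookup + 1)):
        for cx in range(max(half, x - lookup), min(w - half, x + lookup + 1)):
            if not known[cy - half:cy + half + 1, cx - half:cx + half + 1].all():
                continue                   # source patches must be fully known
            cand = image[cy - half:cy + half + 1, cx - half:cx + half + 1]
            err = ((cand - target)[valid] ** 2).sum()  # compare known pixels only
            if best_err is None or err < best_err:
                best_err, best_center = err, (cy, cx)
    return best_center

Patch size corresponds to half, Lookup size to lookup, and Blend size governs how the copied patch is feathered into its surroundings – a step omitted in this sketch.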

I left the Blend size alone, but did increase the patch size to 25 (from the default of 7). I also increased the Lookup factor a bit to allow a bigger search radius from a given patch. Here are the results:


Now we’re talking! (mouseover to compare to Resynthesizer)
G'MIC Inpainting, patch size 25, lookup factor 0.5

This is a really nice result and I prefer the G'MIC version over the Resynthesizer output here. Being able to modify parameters to adjust how the algorithm approaches the problem really opens up many more possibilities.

Of course, here are some requisite samples using G'MIC, pulled from the previous samples I had done with Resynthesizer.


I had captured the full nadir, but needed to remove tripod legs...


G'MIC Inpainting with similar parameter values as before.



Patch size 14, everything else default.



Patch size 35, Lookup size 18, Lookup factor 0.25, Blend size 1

I think with just a small bit of experimentation we can get some really fantastic results, particularly in places where it might have been harder for Resynthesizer to work well.

The best part is that this is already a part of G'MIC, and the team has been actively working to make it faster and better (I believe they are even presenting a paper on this topic later this year).

Personally, this is my future go-to for inpainting/content-aware-fill where I need it (not often, but when I do, boy am I glad we have options like this). If you haven’t already, you may want to give G'MIC a try (don’t forget there’s a ton of fun film emulation filters as well!).

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, or make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t it’s not going to affect me writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.


May 22, 2014

Fedora.NEXT Brand Concept #1

A while back we talked about designing the branding for Fedora in a Fedora.NEXT world. We’ve kind of fallen behind on the design process. So I’ve been spending some time with the questionnaire answers from the Cloud working group and the Server working group, and I put together the following boards as sort of a design research exercise:

  • Fedora Cloud Logo Research – items from the survey answers plus some other visualizations of cloudy stuff.
  • Fedora Server Logo Research – items from the survey answers plus some other visualizations of servers and whatnot.
  • Brand Systems – sample ‘brand systems’ of the sort we’ll need to make the Fedora.NEXT Fedora products hang together and look related.
  • Photographic Pixels – just an attempt at putting together some samples of inspirational artwork for this particular concept and maybe some others – I wasn’t sure what else to call it.

Spending so much time putting this together made me really antsy to design something – so I just threw this together this afternoon. It may, and probably does, suck, and it is something we can totally throw away if we want. But I thought it might be good to mock something up to start the conversation. If you are a designer looking at this and seeing all the issues, or if it sparks a new, better idea, please get in touch on the design-team mailing list and let’s put some stuff together and riff off each other. I think we should probably do a Fedora.next branding hackfest too, as mentioned much earlier, but it might be nice to have some material to work with to start. Maybe? What do you think?

Anyway, here’s my strawman! (click to see it at a larger size):

concept1

Okay, some rambling to kind of walk you through this:

  • Across the different competitive logos and such I saw a lot of rounded iconic designs, usually with thick outlines. I thought keeping the roundedness might be good to keep us on the same field, but I wanted something a little different than the other organizations / products / etc. had done.
  • Maybe the layering effect – kind of like pieces of the Fedora bubble (some stretched and squashed) cut out neatly or punched out and layered on top of each other – is a bit more unique than what is already out there.
  • There are single-color versions too, without the effect along the bottom.
  • There’s an accent color for each product – but I thought keeping the logos in Fedora blue would help them hang together more, and I don’t know that a huge blast of grape / lime / tangerine the way we do on our current website is a good thing.
  • The font I chose for the product names is Montserrat in all caps – this is an openly-licensed font, of course.
  • I did a half-ass pixelly pattern for each product too, but I haven’t thought too deeply yet about how usable those might be or how to apply them.
  • The workstation logo was the hardest as the name ‘Workstation’ is quite long and doesn’t fit into the scheme as neatly. It also kind of fights a little bit with the glow of the laptop lid. And that is one funky laptop. But maybe it works.

Anyway! Just an idea! Thoughts / feedback are welcome in the comments. Snark, especially snark that isn’t creative or funny at all, can go straight to /dev/null. And if it doesn’t, don’t worry, I’ll mod it out.

Enjoy!

A report from the Krita Sprint 2014

A few days ago we had the Krita Sprint 2014 in Deventer. It was very productive, with all the expected topics discussed, but also some unexpected improvements coming out of it.

The first day, I spent some time testing the Cintiq 13HD on Linux, and noticed that sadly the calibration in the kcm-wacom-tablet module didn’t work at all with it (xsetwacom reported unchanged Area values); only the GNOME 3 one worked properly.
Then I spent more time finding the right steps to make the Huion H610 work properly on Linux with Krita, which was successful in the end. The result is now in the Krita.org FAQ.

After this, more people arrived. It was so nice to be able to put a face to some cool new active contributors whom I knew well from IRC but hadn’t had a chance to meet IRL yet.
Over the following days we went through all reported bugs, checking them and confirming known issues. We had discussions planning the direction for the next major versions (2.9/3.0).
Boudewijn and Wolthera did good work preparing the Kickstarter we are about to launch, and we had more collective brainstorming about it to finish the details.

W S

(Two great active Krita users, by a third one…)

We had a little user-demo session where we could show developers the current state of Krita for different kinds of style and workflow.
That kind of session is always very instructive. After I showed the G'MIC colorize filter in my demo, we had a discussion about it with Steven, out of which was born the idea of an option to put each color on a different layer.
Then came a remote contribution from the magic David Tschumperlé, who made this option real right after he received the request, and improved it after a bit more live discussion.
(I could only test it in GIMP for now, but an option to update G'MIC directly from the GUI should come to Krita soon too, thanks to Lukas ;) )

Some other notable cool unexpected improvements:
-The composition docker is now much more usable thanks to Sven, with the possibility to update and rename compositions, plus advanced export selections.
-The zoom shortcut steps have been improved by Dmitry; after a few patch/test iterations the result is now much better, with better-scaled values.

Dan and Stuart were here too, working mostly on fixing bugs and improving the Sketch and Gemini versions. Again, it was really cool to be able to put a face to some good IRC friends.
KritaSprintTeam2014
(The Krita Sprint Team 2014 – Photo by DmitryK )

I brought some homework for the next few months: as we have the resource package manager almost ready, it’ll be a good occasion for me to update the default workspaces and brush presets before the 2.9 release.
And not forgetting that GSoC time has started, so I’ll spend some time following the progress of Somsubhra on the animation branch – awesome things coming!!

Thanks a lot again to KDE e.V. who made this awesome sprint possible with their support, and to Boudewijn and Irina who kindly hosted us.

May 21, 2014

Comcast contractors showed up! but wouldn't do the actual job

There's a new wrinkle in our ongoing Comcast Odyssey.

I was getting ready to be picked up to carpool to a meeting when the doorbell rang. It was Comcast's contractor!

Dave and I went out and showed them the flags marking the route the cable was supposed to take. They nodded, then asked, in broken English, "Okay to dig under driveway?"

"Whaa-aa?" we said? "The cable goes from there" (indicating the box across the street) "to here" (indicating the line of flags across from the box, same side of the driveway.

They went over and pointed to the box on our side of the street, on the neighbor's property -- the box the Comcast installer had explicitly told us could in no way be used for our cable service. No, we don't know why, we told them, but every Comcast person who's been here has been insistent that we can't use that box, we have to use the one across the street.

We pointed to the painted lines on the street, the ones that have been there for a month or more, the ones that the county people left after inspecting the area and determining it safe to dig. We pointed out that digging across the street is the reason they had to get a traffic permit. We told them that the cable under the driveway is why the cable was torn up in the first place, and that we're expecting to have our driveway graded some time soon, so if they put a new cable there, it will probably just get torn up again. Not that any of that matters since Comcast says we can't use that box anyway.

They look at us blankly and say "We dig across driveway?"

My ride arrives. I have to leave. Dave tries for another five or ten minutes, but he has to leave too. So he finally gives up, tells them no, don't put the cable across the driveway, go back and confirm with their supervisor about the job they're here to do because that isn't it.

I guess they left. There were no signs of digging when we got back.

Later, I checked the dates. It's been 18 days since they applied for a permit. I'm pretty sure the county told me a permit is only good for 11 days, or was it two weeks? Anyway, less than 18 days. So they probably didn't have a permit any more to dig across the street anyway ... not that that necessarily has any bearing on whether they'd dig.

May 19, 2014

Image Editing with 30-bit Monitors

Affordable hardware for professionals has been capable of 30-bit throughput for quite some years now, and costs continue to go down. This means even budget setups are possible with this kind of gear. So let's look at why, for whom, and how monitors capable of displaying 30-bit colour – alias 10 bits per red, green and blue channel – can be used. This blog article will first touch on some basics, followed by the technical aspects below.

Why is it useful to display graphics on a 30-bit monitor setup?
It is essential for graphical editing to see what effect an editing step has. It is pretty common that low-resolution monitors impose a barrier to reliably predicting the intended output. This is as true for colour resolution and gamut as it is for geometrical resolution. The rule of thumb is: the graphics editor needs the most information available to do her/his job and spot artefacts and issues early in the process. This principle is deployed for print, web, film and video editing to reduce costs. You know, redoing something costs time and is part of the job's calculation. More image information also means more certainty in reaching a graphical result. The typical artefact caused by low colour resolution is a reduced tonal range, and colour conversions can reduce the tonal range further. So a sRGB image will look different on an 8-bit-per-channel monitor with a native gamma close to 2.2 compared to a pipeline with 10 bits per channel. The 8-bit output imposes a bottleneck, resulting in the loss of some tonal steps – known as banding – which need not be present in the observed sRGB image itself. One frequently read argument against higher bit depth is that editing hardware should be as close as possible to the customers' hardware. But that is an illusion. The wide diversity of media and devices makes this nearly impossible. Simulation of end-customer hardware is of course an issue, and much graphics software has implemented simulation capabilities to address that concern.
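
A quick back-of-the-envelope illustration of the banding argument (my example, not from the original article): count how many distinct tonal steps survive in a shallow gradient at 8 versus 10 bits per channel.

import numpy as np

gradient = np.linspace(0.18, 0.22, 1000)              # a shallow tonal ramp

steps_8  = np.unique(np.round(gradient * 255)).size   # distinct 8-bit levels
steps_10 = np.unique(np.round(gradient * 1023)).size  # distinct 10-bit levels

print(steps_8, steps_10)   # roughly 11 vs 42 levels across the same ramp

The same ramp keeps about four times as many distinct levels at 10 bits per channel, which is exactly the difference between visible bands and a smooth gradient.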

Who is most interested in 30-bit colour editing on Linux?
Graphics professionals and ambitious users have been closely observing Linux for many years and deploying it. Many blockbusters are produced and rendered on Linux machines. Web graphics has been well supported for years, and camera raw programs have implemented an impressive level of features in recent years. So Linux is a potential content creation platform, not just a consumption one. The typical workflow for content-creating people is to generate and edit their artwork in high geometrical and colour resolution and down-convert to lower resolutions as fits the job, be that web, print, catalog previews, or more flexible high-quality delivery depending on the actual contract. For instance, many photographers archive their shootings in the camera's native format to preserve all available information for later editing or improved rendering. This is an important investment in the future for non-short-lived work, where old files can shine in new light. Motion picture productions are often rendered and colour graded in floating point space using the OpenEXR intermediate file format, and output at 12 bits per component for playback in a cinema. Video production partly uses raw workflows for advertisements. Medical, scientific and archival imaging are potentially interested too, and in part require 30-bit setups, as in the DICOM standard. The benefit of 10 bits per channel versus 8 does not matter to everyone. Most consumers will not spot any difference while watching web video. But in more demanding areas it is a small but helpful improvement.

How to deploy 30-bit displays on Linux?
That feature was implemented by several companies, starting with low-level software components like X11, Cairo and pixman. However, desktops were pretty slow to adapt software to the new needs and fix bugs. That was in part because of the initially higher costs of the hardware; only a few developers in the open source community had early access to suitable gear. I do not write here about suitable graphics card and monitor combinations – you should consult the web for this; search for "30-bit monitor". Many early adopters observed psychedelic colours, non-working graphics areas and more. Here is the state of the well-known KDE X11 desktop, release 4.11, on openSUSE 13.1: the login screen colours are broken; the splash screen after login looks correct; and after some period the desktop becomes visible, again with broken colours. To manually fix most of that, one has to tell Qt to use native colours. Create a file called ~/.kde4/env/qtnative.sh with the following content:

$ kwrite ~/.kde4/env/qtnative.sh

#!/bin/sh
export QT_GRAPHICSSYSTEM=native

With the above variable the desktop should look reasonable in KWin, which is really great. Automating that in Qt would be appreciated.

However, 30-bit monitors typically aim at high-quality setups. Besides colour resolution, they often enough offer a wider gamut than usual monitors. This results in partially heavily saturated colours, which burn sensitive eyes. People who do colour grading or photo editing are the most affected; without correction they cannot easily do this work. So desktop colour correction is another important feature to enable here. KWin supports ICC-based colour correction through KolorManager, which would help with the colour saturation, but KWin disables all effects for 30-bit OpenGL visuals. The alternative Compiz 0.8 series has the CompIcc colour server plug-in, which provides the same ICC colour correction feature. To make use of it, one needs to install the following packages: compizconfig-settings-manager and CompIcc-0.8.9. Unfortunately the KDE decorator is no longer available, so use the Emerald decorator from X11:Compiz with the 30-bit-shadow.patch in order to avoid artefacts in the shadow code. Compiz can be used as a default window manager application: use the system settings to switch to Compiz, then use ccsm to switch on Color Management if that is not done automatically. And voilà, the 30-bit desktop should be ready to explore.

What works and what not?
The Plasma desktop is fine, including all menus. Dolphin, KWrite and other applications work. Thunderbird shows some artefacts due to not properly supporting the R10G10B10A2 pixel format. The same is true for Firefox, which in parts lets content behind the Firefox window shine through. Gwenview and ShowFoto work fine within their 8-bit drawing; only the preview is broken in ShowFoto. Krita, with the OpenGL backend, even supports native 10 bits per colour component. Menus in Sketch are black. Krita shows minimal artefacts from converting colours twice – from image to sRGB by Krita, and from sRGB to the monitor colour space by CompIcc – but this effect is much less visible than the improvement from its 30-bit support. Applications which try to handle 24-bit colour themselves, like Konqueror, are broken. Gtk and hence Gnome applications with graphical areas do not work; they show black areas. VLC works fine. So daily work with 30-bit should be fine in the KDE application family, depending on what you do, with some minor glitches. Valuable Gtk applications like Inkscape, and most Gtk applications as-is, are unusable in a 30-bit setup, with GIMP's drawing area being an exception. Thunderbird/Firefox are presumably affected by the same Gtk bug, for which a patch was created some time ago. A patched libgtk-2 is available for testing on openSUSE, which appears to have almost fixed the problem for me.
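
For the curious, here is roughly what the R10G10B10A2 format mentioned above looks like – an illustrative sketch of one common layout; the exact bit order varies between APIs:

def pack_r10g10b10a2(r, g, b, a=3):
    """Pack r, g, b (each 0..1023) and a (0..3) into one 32-bit pixel.
    One common layout -- illustrative only, bit order varies between APIs."""
    return (a << 30) | (b << 20) | (g << 10) | r

An application that assumes 8 bits per channel and writes pixels itself will scribble nonsense into such a framebuffer, which is exactly the class of breakage described above.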

Besides the need to exchange the window manager, patch a few components and do some manual settings, Linux appears to be almost there in supporting 30-bit setups. Polishing this feature needs testing of patches and finally acceptance into distributions. Your feedback in the form of bug reports, tests and patches can make a difference to developers.

May 18, 2014

Interview with Wayne Parker

Sketches by Wayne Parker

1. Would you like to tell us something about yourself?

My name is Wayne Parker. I'm a professional illustrator from Virginia Beach, VA, but living in Arlington at the moment.

2. Do you paint professionally or as a hobby artist?

Well, both. My main job is pretty boring as I do illustrations for the government, but I also work freelance and do commissions whenever I have the time to do so. I'm also working on a personal IP titled “The Notorious Disciples” which I'll be finishing up this year.

3. When and how did you end up trying digital painting for the first time?

My first experience with painting digitally was when I served in the military and got a hold of Photoshop and discovered the potential of painting non-traditionally. However, my first real headfirst dive into digital painting was when I attended Ringling School of Art and Design for animation. We were given Corel Painter and I took to it like a fish to water and later gave Photoshop a second look as well.

4. What is it that makes you choose digital over traditional painting?

For me, and I'm sure for a lot of people, it's just easier to get your ideas across quickly. My traditional work can become precious to me and takes more time to set up and plan, whereas digital painting allows for more experimentation, less mess, and I'm able to tweak and adjust to my heart's content.

5. How did you first find out about open source communities? What is your opinion about them?

I first heard about Open Source software via concept.org. Someone mentioned MyPaint as a good sketchbook, so I checked it out and loved it. I've tried sooooooooooooooooo many variations of Linux before deciding that Windows was the best for my workflow, but that's not to say open source isn't great, because it is.

6. Have you worked for any FOSS project or contributed in some way?

Nope, but I haven't really been asked either. I'll support any project that looks intriguing.

7. How did you find out about Krita?

I heard about Krita from a very good post on Deevad's (David Revoy) blog: http://www.davidrevoy.com/article98/krita-2-4-beta-screenshots-features-and-ppa

8. What was your first take on it?

I loved it. It's the best of both programs I'd grown accustomed to using – Painter and Photoshop – but better in a lot of ways than both.

9. What do you love about Krita?

I really like how it has all the tools for concept art work (mirroring, realistic media, line assistance templates, etc) and all the tools for finished illustrations as well (filters, tweakable brushes, color adjustments, etc).

10. What do you think needs improvement in Krita? Also, anything that you really hate?

Krita for the most part is pretty awesome, but I do have a few issues that keep me from giving it a 10-star glowing endorsement. One of my gripes is that Krita can't open multiple documents within one instance. In Photoshop I open multiple documents for use of reference, artwork comparison, and just general inspiration. Having multiple full-scale instances of Krita open for each document I open is a little annoying. The other thing is better layer implementation and/or organization. I really miss being able to select multiple layers at once to merge, move, or transform without grouping them. Also, coming from a Photoshop background, I wish Krita's version of adjustment layers (filter layers) were a little more intuitive and less confusing and buggy. Overall, I can sorta deal with the multiple document issue, but I really do wish some real development went into making the layer experience a little better, more powerful and less buggy. I don't hate anything about Krita. It's a great piece of software, and it's free, so I can't complain too much at all.

11. In your opinion, what sets Krita apart from the other tools that you use?

The brushes. So many choices and options for unique mark making, much better than any paint program I've ever used. The only thing I miss a little is a way to make impasto-type bevels with paint strokes. Painter and ArtRage do this pretty well. Maybe a bevel brush engine would solve this issue. Krita would then be untouchable.

12. If you had to pick one favourite of all your work done in Krita so far, what would it be?

Truth be told, I haven't completed an all-in-Krita painting yet. I'm using Krita at the moment for my personal project and I'll post the finished results when everything is done ;)

13. What is it that you like about it? What brushes did you use in it?

Mostly my own custom brushes.

14. What would you like to share with our site visitors?

Nothing much other than go and check out Krita for yourself. It's free, works great on Windows, and has a great community that's very active and helpful. Let's spread the word on this fantastic art tool!

Check out my work on my site, www.TheArtofWayneParker.com, my Tumblr at http://artmessiah.tumblr.com (lots of sketches and random stuff) or my deviantART site at http://wayneparker.deviantart.com/gallery/

 

Last week in Krita — week 20

When a feature is proposed there is discussion and feedback before getting into coding and implementation. The debate continues until most devs and artists agree that the feature is needed and should be coded. This is done so that coding time is used more effectively. However, no new feature is bug-free or closed to improvement.

With this in mind I present to you the products of this week’s hard work.

Week 20 progress

This week’s new features:

This week’s main Bug fixes:

  • Partially fix #333917: Added extensive tablet debugging on Windows. (Dmitry Kazakov)
  • Worked on #333843: Added more extensive tablet debugging on X11. (Dmitry Kazakov)
  • FIX Linux #331358: Fix tablet stylus rotation in Linux. The Windows fix will be worked on shortly after. (Dmitry Kazakov)
  • FIX #334204: Handling tablet events when “Pan/Scroll” mode is assigned to a button. (Dmitry Kazakov)
  • Fix crash on open/create file. (Timothée Giet)
  • FIX #332367: Add actions provided by KisLayerBox to the global action collection. This adds some missing actions to the keyboard configuration list. (Dmitry Kazakov)
  • FIX #334508: Fix pixel-alignment of the Rectangle and Ellipse tools. (Dmitry Kazakov)
  • FIX #334408: Fix a triangular brush outline for a 1px brush. (Dmitry Kazakov)

Show axis in mirror mode

In Mirror mode the axis is now represented by a line. Axis display has two icons, one to show the type of mirror, vertical or horizontal, and a second one to edit the axis position by clicking and dragging.

Mirror axis canvas editable

Palette now shows both Foreground and Background colors.

Palette fg/bg colors

This work also adjusted the overall size and proportion of the palette elements to allow bigger preset icons, help recognize selected presets and provide more comfortable usage.

Massive improvements in resource manager

Boudewijn Rempt has been busy providing a better experience in the resource manager. Aside from many code optimizations and cleanups, boud worked to stabilize the manager to work seamlessly with Krita's md5 tag internals. He also coded a new dialog to create and export resource bundles. Assembling brush presets, pattern sets, texture packs and brush tip sets – previously manual labor – will be automated with this new feature, making the task of sharing resources a breeze.

Bundle creator dialog Resource manager ui tweaks

Krita Sketch and Gemini

Arjen Hiemstra and Dan Leinir Turthra Jensen have been busy fixing bugs and mostly polishing the Sketch and Gemini interfaces. A new feature is the ability to see the mirror axis and adjust it using on-screen UI buttons. These controls were also ported to Gemini and main Krita.

  • Gemini: FIX #334381: Hide the “open existing as new” action. (Arjen Hiemstra)

Code cleanup and optimizations.

Listing all the optimizations and cleanups from this week would result in a massive list. Some tasks done include: removing unused methods and code, standardizing code style and naming schemes, adding test cases and checks, closing memory leaks, simplifying code, renaming libraries and many other small code-related tasks. Most optimizations are small, but added together they make the code not only easier to work with but also faster to run. Special thanks to Boudewijn Rempt for his hard work in this area.

May 16, 2014

Bullet moves to github and Erwin Coumans joins Google!

GoogleBostonDynamics Development of the open source Bullet Physics SDK continues at http://github.com/bulletphysics/bullet3. All the open issues have been moved from the googlecode repository to github, with links between old and new issues. There will be a Bullet 2.83 release using the github repository very soon; it is in alpha stage now. In 2014 we will be moving to Bullet 3.x, and the unstable Bullet 3.x code is already included in Bullet 2.83.
In other news, I recently joined Google to work on the robotics project!

May 15, 2014

QuickPose tool

Hi

Here’s the first test of the QuickPose tool, a surprise tool I’ve been working on. As its name suggests, it is a tool aimed at quick-and-dirty posing and transforming of a selected region of a mesh. It turns an area into a rubber-like material so all transformations are nicely propagated to the vertices in a physical way.
Sometimes you don’t need a complex setup just to bend a few parts of a model :)

Under the hood it builds a complex implicit representation of the mesh and stores its differential coordinates in order to preserve high-frequency detail when the mesh is deformed.

It is still subject to further optimization and interface changes; in particular I need to add support for pose rotations, so stay tuned!

Cheers


Strong smoothing

Hi

Smoothing algorithms seem to be a never-ending source of inspiration. There are probably as many smoothing algorithms as there are researchers out there!
So far 3DCoat has plenty of smoothing tools, but nearly all of them suffer when mesh density is high or varies a lot across a surface.

Every artist knows this, when a small dense area rebels against a smoothing effort. We have recently developed tools like Smoother, which ideally smooths a painted area and is useful for pre-planned smoothing, or powerful smoothing, which performs more aggressive smoothing but still suffers from the curse of high vertex counts.

Well, now I have added a third algorithm to TSmooth: Strong smoothing. It will always smooth the mesh under the pen regardless of the base geometry and, even better, in a consistent way!

So be careful smoothing with this tool because it can obliterate the most stubborn details!

Hope you like it!


CopyClay Laplacian Bending

For CopyClay I’ve actively researched several bending algorithms. Making the parts blend seamlessly with the base mesh is far from trivial, and many algorithms can be used. A membrane-based algorithm was developed first, which stored an offset from an ideal membrane for each vertex; it turned out to be not very robust, and limited.
Then I came up with an iterative, spring-elastic algorithm that performs quite well and, moreover, is fast. It is the default for most cases.
It has the limitation that parts requiring extreme bending over the mesh need a lot of iterations, and it thus becomes slow and non-optimal.

So finally I’ve implemented an iteration-free Laplacian method, completely based on solving a linear system, which has many advantages over the previous one: it provides the best bending (although it is not as fast as a few iterations of the elastic method), but for quality it is unmatched!
It accurately bends the part over the surface because, in theory, it is the mesh the elastic method would converge to after unlimited iterations.
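
To see why a direct solve matches the elastic method at its limit, here is a toy illustration – mine, not 3DCoat's code – smoothing a polyline with pinned endpoints, first by relaxation and then by one linear solve:

import numpy as np

n = 7
p = np.array([0.0, 3.0, 1.0, 4.0, 1.0, 3.0, 6.0])  # noisy heights, ends pinned

# Elastic-style relaxation: pull each free vertex toward the average of
# its neighbours, over and over. Converges, but slowly in stiff cases.
q = p.copy()
for _ in range(10000):
    q[1:-1] = 0.5 * (q[:-2] + q[2:])

# Iteration-free: solve the Laplace system L x = b in one shot.
L = np.zeros((n, n))
b = np.zeros(n)
L[0, 0] = L[-1, -1] = 1.0
b[0], b[-1] = p[0], p[-1]            # boundary constraints: the pinned ends
for i in range(1, n - 1):
    L[i, i - 1], L[i, i], L[i, i + 1] = -0.5, 1.0, -0.5

x = np.linalg.solve(L, b)
print(np.allclose(q, x))             # True: the solve is the converged limit

A real implementation would typically solve with detail-preserving differential coordinates on the right-hand side rather than plain positions, but the trade-off is the same: one linear solve replaces thousands of relaxation passes.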


Bridge split factor

Hi

Working on a few bugs in Bridge, I realized that we don’t always need the high resolution of bridging detail provided by the automatic splitting algorithm implemented in it.
And given that in most cases tunnels are hidden within the actual model, it can be good to fine-tune the detail level so it can be less – or perhaps more? – because there’s everything in this world :P

So I added a slider to control that, if the user wants to, but the default will be the automatic detail estimation.

factor 2 default automatic factor 1 factor 0.5 factor 0.25


A Raspberry Pi motion-detecting wildlife camera

I've been working on an automated wildlife camera, to catch birds at the feeder, and the coyotes, deer, rabbits and perhaps roadrunners (we haven't seen one yet, but they ought to be out there) that roam the juniper woodland.

This is a similar project to the PiDoorbell project presented at PyCon, and to my much earlier proximity camera project that used an Arduino and a plug computer. But for a wildlife camera I didn't want to use a sonar rangefinder. For one thing, it won't work with a bird feeder -- the feeder is always there, so the addition of a bird won't change anything as far as a sonar rangefinder is concerned. For another, the rangefinders aren't very accurate beyond about six feet.

Starting with a Raspberry Pi was fairly obvious. It's low power, cheap, it even has an optional integrated camera module that has reasonable resolution, and I could re-use a lot of the camera code I'd already written for PiDoorbell.

I patched together some software for testing. I'll write in more detail about the software in a separate article, but I started with the simple motion detection code posted by "brainflakes" in the Raspberry Pi forums. It's a slick little piece of code you'll find in various versions all over the net; it uses PIL, the Python Imaging Library, to compare a specified region from successive photos to see how much has changed.

One aside about the brainflakes code: most of the pages you'll find referencing it tell you to install python-imaging-tk. But there's nothing in the code that uses tk, and python-imaging is really all you need to install. I wrote a GUI wrapper for my motion detection code using gtk, so I had no real need to learn the Tk equivalent.

Once I had some software vaguely working, it was time for testing.

The hardware

One big problem I had to solve was the enclosure. I needed something I could put the Pi in that was moderately waterproof -- maybe not enough to handle a raging thunderstorm, but rain or snow can happen here at any time without much warning. I didn't want to have to spend a lot of time building and waterproofing it, because this is just a test run and I might change everything in the final version.

I looked around the house for plastic objects that could be repurposed into a camera enclosure. A cookie container from the local deli looked possible, but I wasn't quite happy with it. I was putting the last of the milk into my morning coffee when I realized I held in my hand a perfect first-draft camera enclosure.

[Milk carton camera enclosure] A milk carton must be at least somewhat waterproof, right? Even if it's theoretically made of paper.

[cut a hole to mount the Pi camera] I could use the flat bottom as a place to mount the Pi camera with its two tiny screw holes,

[Finished milk carton camera enclosure] and then cut a visor to protect the camera from rain.

[bird camera, installed] It didn't take long to whip it all together: a little work with an X-acto knife, a little duct tape. Then I put the Pi inside it, took it outside and bungeed it to the fence, pointing at the bird feeder.

A few issues I had to resolve:

Raspbian has rather complicated networking. I was using a USB wi-fi dongle, but I had trouble getting the Pi to boot configured properly to talk to our WPA router. In Raspbian networking is configured in about six different places, any one of which might do something like prioritize the not-connected eth0 over the wi-fi dongle, making it impossible to connect anywhere. I ended up uninstalling Network Manager and turning off ifplugd and everything else I could find so it would use my settings in /etc/network/interfaces, and in the end, even though ifconfig says it's still prioritizing eth0 over wlan0, I got it talking to the wi-fi.

I also had to run everything as root. The python-picamera module imports RPi.GPIO and needs access to /dev/mem, and even if you chmod /dev/mem to give yourself adequate permissions, it still won't work except as root. But if I used ssh -X to the Pi and then ran my GUI program with sudo, I couldn't display any windows because the ssh permission is for the "pi" user, not root.

Eventually I gave up on sudo, set a password for root, and used ssh -X root@pi to enable X.

The big issue: camera quality

But the real problem turned out to be camera quality.

The Raspberry Pi camera module has a resolution of 2592 x 1944, or 5 megapixels. That's terrific, far better than any USB webcam. Clearly it should be perfect for this task.

[House finch with the bad Raspberry Pi camera module] Update: see below. It's not a good camera, but it turns out I had a lens problem and it's not this bad.

So, the Pi camera module might be okay if all I want is a record of what animals visit the house. This image is good enough, just barely, to tell that we're looking at a house finch (only if we already rule out similar birds like purple finch and Cassin's finch -- the photo could never give us enough information to distinguish among similar birds). But what good is that? I want decent photos that I can put on my web site.

I have a USB camera, but it's only one megapixel and gives lousy images, though at least they're roughly in focus so they're better than the Pi cam.

So now I'm working on a setup where I drive an external camera from the Pi using gphoto2. I have most of the components working, but the code was getting ugly handling three types of cameras instead of just two, so I'm refactoring it. With any luck I'll have something to write about in a week or two.

Meanwhile, the temporary code is in my github rpi directory -- but it will probably move from there soon.

I'm very sad that the Pi camera module turned out to be so bad. I was really looking forward to buying one of the No-IR versions and setting up a night wildlife camera. I've lost enthusiasm for that project after seeing how bad the images were. I may have to investigate how to remove the IR filter from a point-and-shoot camera, after I get the daylight version working.

[rock squirrel with cheeks full of sunflower seeds] Update, a few days later: It turns out I had some spooge on the lens. It's not quite as bad as I made it out to be. Here's a sample. It's still not a great camera, and it can't focus anywhere near as close as the 2 feet I've seen claimed -- 5 feet is about the closest mine can focus, which means I can't get very close to the wildlife, which was a lot of the point of building a wildlife camera. I've seen suggestions of putting reading glasses in front of the lens as a cheap macro adaptor.

Instead, I'm going ahead with the gphoto2 option, which is about ready to test -- but the NoIR Pi camera module might be marginally acceptable for a night wildlife camera.


Fedora 21 needs your beautiful photos!

nuancier_monitor-f21

We need your submissions to make beautiful wallpapers available for Fedora 21! The supplemental wallpaper submission process is open and we’re ready for your work!

Questions You May Have

I don’t want to upload photos to the wiki. It’s a pain!! Haven’t you fixed this by now?

Zzzzzzap! Excuse removed! It is easier than ever to submit supplemental wallpapers – our lovingly hand-crafted custom wallpaper submission app, Nuancier, is now ready for your submissions! No more wiki submissions! (And the crowd goes wild!)

Nuancier, the new Fedora wallpaper submission app.

Can I have a cookie if I submit some wallpapers?

We’ll do you one better. Yes, even better than a cookie are the special-edition, Fedora 21-specific Fedora Badges you will earn for participating in the supplemental wallpaper process.

nuancier-f21

In order, we’ve got:

So go out and get ‘em, tiger!

When do I need to submit by?

You’ve got plenty of time – the deadline is August 16, 2014 at 23:59 UTC. But!! Having the luxury of time is no excuse!

Those of you in the Northern Hemisphere will of course be out gallivanting and having all sorts of summer fun – and you know this whole wallpaper thing will totally slip your mind while you’re on the beach or having a picnic in the park. So do it now!!! (Of course, you should also take a picture of the beach, take a picture of the birds in the trees, and submit them too!!) And our Fedora friends in the Southern Hemisphere – well, you’re going to be bundling up and trying to stay warm…. or if you’re near the equator you’ll be trying to stay cool… things that will keep you busy enough perhaps to forget about these wallpapers. You can prevent this! Submit your work now, while you’re thinking about it!

Okay, okay. I’m in. What are the guidelines? What are you looking for?

There is definitely a glut of photos of cats in the world, so if you’re going to submit a photo of a cat, it’s really got to stand out and be amazing. Keep that in mind. Also, if it’s a picture you would worry about showing to your dad or your mom, we probably don’t want to see it either. :)

Okay, okay. You want the real requirements. Gnokii made a great post that dives into the specific requirements here, and Nuancier also has a list as a review/reminder when you start the submission process.

It is quite important that you are willing to openly license your submissions – Fedora believes the open sharing of software and content is important for everyone, so we try to reflect that in our wallpaper artwork as well. Gnokii’s post goes into more detail on the specific licenses.

Hope to see your submissions!!

bamboo

Krita 2.8.3 Released

The third monthly bugfix release for Krita 2.8 is out! Download your Windows installer now, or get your distribution's updated packages! There are quite a few bugfixes and improvements:

  • Translation fix in the Multihand tool: Axis -> Axes.
  • More precise translations. (bug 333135)
  • A fix for the outline of invert selection.
  • Krita no longer closes immediately when a file is corrupted but gently warns the user.
  • Fix a crash: deleting a group layer and its contents when there were no other layers prevented the user from creating new layers. (bug 333496)
  • Improved detection of supported image formats.
  • Removed a number of resource leaks.
  • Add support for selection in GMIC filters. (bug 325771)
  • Fix crash when GMIC filter is applied to the layer which was moved. (bug 327980)
  • Add search box for filter names of the GMIC plug-in. A text box below the filters tree can now be used to find the GMIC filter by name.
  • Select only paint layers when gathering all Krita layers from layer stack.
  • Remember the last used preset across sessions.
  • Fix invalid recalculation of width and height between units.
  • Ensure that the channel flags are always reset when they are set to full; otherwise compositing will not work efficiently. (bug 333080)
  • Fix painting grid on lower zoom levels. (bug 333234)
  • Fix a triangular brush outline for a 1-pixel brush. (bug 334408)
  • Fix pixel-alignment of the Rectangle and Ellipse tools, perform alignment exactly how the user expects. (bug 334508)
  • Make layer actions such as “Delete the layer or mask” listed in the Configure Shortcuts dialog. (bug 332367)
  • Fix handling a tablet when “Pan/Scroll” mode is assigned to a button. Note: Wacom’s “Pan/Scroll” feature supports only vertical wheel scroll, so using usual Middle-button panning is recommended. (bug 334204)

May 14, 2014

digiKam 4.0.0 released!

Have a look at the digiKam site for information about their new major release. Sounds really great, but I’ll stay with darktable. ;-)


May 13, 2014

G'MIC Montage

So I was talking with G'MIC creator David Tschumperlé the other day and thought it might be handy to have a way to create a montage of images (similar to the montage command in ImageMagick), but with more control over the montage layout and automatic image fitting.

[example montage created with the new filter]

In the past, if I wanted to montage a series of images of different sizes, I would do it manually in GIMP. What I was proposing to David was to programmatically deal with fitting images given some manual input from the user.

I know that he usually works quite fast, but he blew me away with how quickly he got into the idea this time. From my describing it one afternoon, it took about a day to get something working. (Add another half a day after I gave him a hard time about making the edges fit regardless of padding... Sorry David!)




That means that it’s now trivially easy to combine multiple images and automatically have them scaled to fit each other, like this:

[two images automatically scaled to fit side by side]

Just a single parameter change and we can re-orient the images vertically like so:

[the same images re-oriented vertically]

I used only two images here as a simple example, but it’s not any harder to do this with as many images as your memory will permit...

[a montage of many images from the LGM Leipzig photowalk]

So let’s have a look at this new filter...

The Montage Filter

To use the filter, you’ll want to load up all of the images you want to create the montage from as layers in GIMP.

After running G'MIC,

Filters → G'MIC...

You’ll need to navigate to the new "Montage" filter:

Arrays & tiles → Montage

Firing up G'MIC to use the Montage filter

The first thing you’ll want to remember to do is to change the “Input layers...” option to something more appropriate (I like to use “All visibles”). Your preview window should now change to show all your images. Conveniently, G'MIC numbers each of your images for you (this will come in handy later when you’re creating a custom montage layout).

(While you’re here, I also like to change the “Output mode...” to be “New image”. This way I can go back and easily tweak things as I fiddle.)

Montage type

The first setting you’ll want to take a look at is the Montage type. The options are:

  • Custom layout: You have complete control over the layout. I’ll cover this in greater detail later.
  • Horizontal: All of the images will be arranged horizontally, heights adjusted to all be the same.
  • Vertical: All of the images will be arranged vertically, widths adjusted to all be the same.
  • Horizontal Array: G'MIC will create a horizontal arrangement of multiple rows, fitting the images.
  • Vertical Array: G'MIC will create a vertical arrangement of multiple columns, fitting all the images.

Here are the results of running each of these against my 5 test images:

Horizontal, Vertical, Vertical Array, and Horizontal Array

If one of these orientations will work for you, then you’re all set! If you want more control over how the images are arranged, I go over that a little later.

Merging mode

This parameter has two options: Aligned and Scaled.

The Aligned option will not resize any of the images, and will only align them.

The Scaled option is the one I am most interested in here. It automatically scales the images DOWN to fit their orientation when the Alignment/scale factor is 0.5 or below; raise the factor above 0.5 and the images can be upscaled as well. I personally try to avoid all upscaling, so I leave it at the default (0.5):

Horizontal montage, scaling to least height automatically (Scaled merging).

Horizontal montage, Aligned merging (no scaling).

Padding

There is also a great option to adjust the padding between images when creating the montage. This value is in pixels. Super handy to allow some space between your images:

Horizontal array with 20px padding between images

Frame

There is also an option to include a colored frame around each of the individual images if you want (I don’t personally use this option - but it’s nice that it’s there!).

Cycle layers

This is a nice option to automatically rotate the images in the stack (effectively swapping through which images belong where).

Output as

This is a very handy option for those of us who may like to fiddle even more after generating the montage. You can output the entire montage as a single layer or as multiple layers for further adjustments (I used this option to move layers around in the first image so I could place text on the canvas).

Custom Layout

Ok, so a quick explanation of how to define your own custom layouts is in order, I think. It may seem daunting at first, but I promise it’s not as bad as it looks!

The command for a custom layout is built up from two things: an identifier for orientation (V or H), and an index number for the image.

It helps to always think of it in terms of two "spaces" where you can add images. So if you were working in a horizontal orientation, you’d have two spaces to designate images (or sub-nests), left and right.

For instance, let’s start with a simple 2-image horizontal layout. Assume we want to use images 0 and 1 from our list. We would type the following into the Custom layout input in G'MIC:

H(0,1)

In my case, this would produce:

Custom parameter: H(0, 1)

Now, what if we wanted a third image in this row? We would just add another horizontal layout in place of one of the images. So, if we replace the second image in the command with another horizontal layout, we would then have 3 images in a row:

H(0, H(1, 2))
Custom parameter: H(0, H(1, 2))

As you can see, it helps to think of it in pairs. The first command said, put image 0 first, then for the next image, use the horizontal layout of images 1 & 2 (in green).

If we wanted, we could instead have said that we wanted the images on the right to be vertically fit by changing the second H to V:

H(0, V(1, 2))
Custom parameter: H(0, V(1, 2))

Of course, you can sub-nest these layouts as deep as you’d like. So to continue the example, if we wanted to have the bottom right image from above actually be another two images in a horizontal orientation:

H(0, V(1, H(2, 3)))
Custom parameter: H(0, V(1, H(2, 3)))

If you’re ever unsure of which image is which index number, just refer to the G'MIC preview window. The image indices are visible over the images they belong to (super helpful). You can always switch over to horizontal or vertical layout to get an overview of the images that are loaded up.

(New!) Rotation

This is the only problem with working with someone like David... He’s usually working too quickly and adding stuff faster than I can write posts about it! Since I started writing this post, he has already added a neat little feature to the custom layout code that allows you to rotate any arbitrary image or block by prepending an “R” to its section.

Using the 3 image montage from above ( H(0, V(1, 2)) ), but adding an R in front of the vertical block yields:

H(0, RV(1, 2))

To rotate again, just add another R in front:

H(0, RRV(1, 2))
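Putting the pieces together, here’s a purely illustrative example using the same syntax described above (my own example, not a recipe from David): a 2×2 grid where the bottom row is rotated would be written as:

    V(H(0, 1), RH(2, 3))

The outer V stacks two rows, each inner H lays out a pair of images side by side, and the R rotates the second pair.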

Conclusion

I don’t often arrange my images like this, though I’m not sure whether that’s because I’m too lazy to manually arrange them or not...

I did briefly look around to see if there were other options after David wrote this, and found something called BlogStop. Apparently this is a piece of commercial software to create a montage, and it includes some sort of blogging integration. It’s also a $49 USD piece of software! So consider a small donation to David if you’ve got a few spare bucks around - I know it makes him happy (I’m pretty sure he’s using donations to buy hot chocolates in his lab while he works...).

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, or make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t, it’s not going to affect my writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.


May 11, 2014

Sonograms in Python

I went to a terrific workshop last week on identifying bird songs. We listened to recordings of songs from some of the trickier local species, and discussed the differences and how to remember them. I'm not a serious birder -- I don't do lists or Big Days or anything like that, and I dislike getting up at 6am just because the birds do -- but I do try to identify birds (as well as mammals, reptiles, rocks, geographic features, and pretty much anything else I see while hiking or just sitting in the yard) and I've always had trouble remembering their songs.

[Sonogram of ruby-crowned kinglet] One of the tools birders use to study bird songs is the sonogram. It's a plot of frequency (on the vertical axis) and intensity (represented by color, red being louder) versus time. Looking at a sonogram you can identify not just how fast a bird trills and whether it calls in groups of three or five, but whether it's buzzy/rattly (a vertical line, lots of frequencies at once) or a purer whistle, and whether each note is ascending or descending.

The class last week included sonograms for the species we studied. But what about other species? The class didn't cover even all the local species I'd like to be able to recognize. I have several collections of bird calls on CD (which I bought to use in combination with my "tweet" script -- yes, the name messes up google searches, but my tweet predates Twitter -- a tweet Python script and tweet in HTML for Android). It would be great to be able to make sonograms from some of those recordings too.

But a search for Linux sonogram turned up nothing useful. Audacity has a spectrogram visualization mode with lots of options, but none of them seem to result in a usable sonogram, and most discussions I found on the net agreed that it couldn't do it. There's another sound editor program called snd which can do sonograms, but it's fiddly to use and none of the many color schemes produce a sonogram that I found very readable.

Okay, what about python scripts? Surely that's been done?

I had better luck there. Matplotlib's pylab package has a specgram() call that does more or less what I wanted, and here's an example of how to use pylab.specgram(). (That post also has another example using a library called timeside, but timeside's PyPI package doesn't have any dependency information, and after playing the old RPM-chase game installing another dependency, trying it, then installing the next dependency, I gave up.)

The only problem with pylab.specgram() was that it shows the full range of the sound, both in time and frequency. The recordings I was examining can last a minute or more and go up to 20,000 Hz -- and when pylab tries to fit that all on the screen, you end up with a plot where the details are too small to show you anything useful.

You'd think there would be a way for pylab.specgram() to show only part of the spectrum, but there doesn't seem to be one. I finally found a Stack Overflow discussion where "edited" gives an excellent rewritten version of pylab.specgram which allows setting minimum and maximum frequency cutoffs. Worked great!

Then I did some fiddling to allow for analyzing only part of the recording -- Python's wave package has no way to read in just the first six seconds of a .wav file, so I had to read in the whole file, read the data into a numpy array, then take a slice representing the seconds of the recording I actually wanted.
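Putting those pieces together, the core of the approach looks something like this (a minimal sketch assuming a mono, 16-bit .wav file, using plain pylab.specgram() with a ylim() crop as a crude stand-in for the rewritten version's frequency cutoffs; the names here are illustrative, not from my actual script):

    # Plot a sonogram of the first few seconds of a .wav file.
    import wave
    import numpy as np
    import matplotlib.pyplot as plt

    def sonogram(wavfile, seconds=6, maxfreq=9000):
        wf = wave.open(wavfile, 'rb')
        rate = wf.getframerate()
        # wave can't read just part of a file, so read the whole thing ...
        frames = wf.readframes(wf.getnframes())
        wf.close()
        samples = np.frombuffer(frames, dtype=np.int16)
        # ... then slice out just the seconds we want to analyze.
        samples = samples[:int(seconds * rate)]
        plt.specgram(samples, NFFT=512, Fs=rate, noverlap=256)
        plt.ylim(0, maxfreq)    # crop the display to the interesting range
        plt.xlabel("Time (s)")
        plt.ylabel("Frequency (Hz)")
        plt.show()

    sonogram("kinglet.wav")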

But now I can plot nice sonograms of any bird song I want to see, print them out or stick them on my Android device so I can carry them with me.

Update: Oops! I forgot to include a link to the script. Here it is: Sonograms in Python.


May 09, 2014

Synfig funding for Summer 2014

We have launched a new fundraising campaign with the ambitious goal of funding development this summer....

Berlin DX Hackfest

DSCF0726

As regular Planet GNOME readers have noticed, Berlin had the privilege of hosting a couple of great GNOME developers and designers (and me ;). Berlin is where my first involvement with Free software people took place (at Gimpcon), and despite rather chilly weather it’s a great city to hang around, especially the east central part around Friedrichshain. Big thanks to Allan for organizing the event, and extra thanks to Chris Kühl for hosting us at Endocode. Lovely office and great location. (Free cappuccino with your first Foursquare check-in at the Espresso Ambulanz around the corner, btw; great coffee.)

After the hackfest, which centered on API docs and the toolkit, we spent some extra days with Jon and Allan on designs such as selection mode, sharing, and the touch aspects of some widgets, and started going through the bugs and maintenance obstacles that stand in the way of making Adwaita the default gtk+ style. Sadly some refactoring is going to need to happen in the next couple of days/weeks, but it looks like Lapo is on board for the challenge, so it’s going to be great having a wingman for this unrewarding chore.

It was a great kickstart, pretty pumped about 3.14 :)

May 07, 2014

Last week in Krita — week 19

Big changes are made through small steps, and this week we made a lot of them. The team worked hard to unify tag management, standardize color rendering across all tools, fix interface bugs and optimize some filters. Most bug fixes land in the main development branch, while bigger, more experimental changes are made in separate branches.

Work in a branch is not merged automatically into the main development branch. Before that happens, the branch developer and brave users work with it, test it and help optimize it.

Week 19 progress

This week’s new features:

  • Implement support for more types of palettes. (Boudewijn Rempt)
  • Search filter for G'MIC plugin filters. (Lukáš Tvrdý)
  • Improved handling of tag and resource names. (Boudewijn Rempt)
  • Remove hardcoded margins from the filter dialog and use a normal buttonbox for OK/Cancel. This allows the buttons to be arranged in whatever order the window manager dictates (OK/Cancel or Cancel/OK). (Friedrich W. H. Kossebau)

This week’s main Bug fixes:

  • FIX #334437: Fix dimensions of autobrushes. (Boudewijn Rempt)
  • FIX #334450: Fix generating md5 sum for task set resources. (Boudewijn Rempt)
  • FIX #333326: Ask user for a file when a File Layer doesn’t exist. (Dmitry Kazakov)
  • FIX #333234: Fix painting Grid on lower zoom levels. (Dmitry Kazakov)
  • FIX #333080: Ensure channel flags are cleared when set to all-ones. (Dmitry Kazakov)
  • FIX #332130: Fix using a transaction in the Filter Op. (Dmitry Kazakov)
  • FIX #333485: Brush settings blending mode selection with a single click. It’s also possible to click and drag to select a mode. (Spencer Brown)
  • FIX #334078: Swap Units when swapping width/height orientation. (Boudewijn Rempt)
  • FIX #331043: Fix Transform Tool arrows on mirrored canvas. (Koushik S)
  • FIX #334255: Port KisPaintopPreset to the proper KoResource API. (Boudewijn Rempt)
  • FIX #325771: Fix glitches when bitblting gmic result on moved layer with selection. (Dmitry Kazakov)

Implement support for more palette types

We now support RIFF_PAL palettes, PaintShop Pro palettes and Photoshop ACO palettes.

ACO (Adobe Color Palette) is Photoshop’s native palette format, supporting RGB, CMYK, HSV, Lab and Gray at 16 bits/channel. A nice addition to the default GIMP palettes, indeed!

Search filter box in Gmic

G'MIC offers around 250 different filters. Finding one from memory of where it lives in the tree is slow when you know exactly what you want; the new search box makes it easy to find a filter by name.

Filtering the G'MIC filter list by keyword

Krita Sketch and Gemini

  • Make tooltip work nicely with touch on Win 8. (Arjen Hiemstra)
  • Correct a typo in the minimize button tooltip. (Arjen Hiemstra)
  • Add tooltips to WelcomePage and MenuPanel. (Arjen Hiemstra)
  • Add tooltip functionality to Button. (Arjen Hiemstra)
  • FIX #331341: Drop down the panel from the handle instead of moving the handle. (Arjen Hiemstra)
  • Add support for adding an offset to items added to MouseTracker. (Arjen Hiemstra)
  • FIX #333779: Set the URL for new images to “New Image.kra”. (Arjen Hiemstra)
  • Keep the views presets constantly in sync. (Dan Leinir Turthra Jensen)

Code cleanup and optimizations

  • Open the QBuffer before writing into it. (Boudewijn Rempt)
  • Rename Axis Center into Axes Center. (Dmitry Kazakov)
  • Fix loading aco profiles. (Boudewijn Rempt)
  • Fix KisDlgFilter to always show the name of the currently selected filter. (Friedrich W. H. Kossebau)
  • Make current filter gettable in KisFilterSelectorWidget. (Friedrich W. H. Kossebau)
  • Add “:” after all labels in the filter settings UI. (Friedrich W. H. Kossebau)
  • Refactoring and fixing resource manager code bugs about creation, install and uninstall. (Victor Lafon)
  • Convert AppData to the 0.6 format, to avoid translation clutter. (Matthias Klumpp, krita/krita.appdata.xml)
  • Remove spurious export clause. (Boudewijn Rempt)
  • Add a proper option for enable/disable GMic. (Arjen Hiemstra)
  • Fix KisPatternTest. (Boudewijn Rempt)

Branches work

By using branches, a developer can focus on a major new feature without worrying about making Krita unusable. This week in branches we had some interesting work.

2.8

2.8 stable received a couple of new features and bug fixes:

  • FIX #334078: Swap Units when swapping width/height orientation. (Boudewijn Rempt)
  • Remember the last preset used between sessions. (Boudewijn Rempt)
  • Search filter for Gmic plugin filters. (Lukáš Tvrdý)

animator-plugin-somsubhra

Advances in layer handling for animation data, plus fixes to interface loading and preferences.

krita-testing-kazakov

Exciting work from Dmitry in this branch: massive improvements and standardization of color rendering across all Krita components, fixes for bugs in color selection, popup palette colors and the specific color selector, and support for color-managed rendering in the palette docker.

calligra-resource_md5-rempt

Boudewijn is preparing the tag system for growth and extension. The current code, now in master, makes the system more robust and versatile, allowing resources to be recognized even if they change location.

 

Linux Security Summit 2014

The Linux Security Summit is happening in Chicago August 18th and 19th, just before LinuxCon. Send us some presentation and topic proposals, and join the conversation with other like-minded people. :)

I’d love to see what people have been working on, and what they’d like to work on. Our general topics will hopefully include:

  • System hardening
  • Access control
  • Cryptography
  • Integrity control
  • Hardware security
  • Networking
  • Storage
  • Virtualization
  • Desktop
  • Tools
  • Management
  • Case studies
  • Emerging technologies, threats & techniques

The Call For Participation closes June 6th, so you’ve got about a month, but earlier is better.

© 2014, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

Libre Graphics Meeting 2014 in Leipzig, Germany

What an amazing trip!


Leipzig Market Little Planet Panorama

I was half joking when I mentioned to schumaml on irc a few months ago that I should come out to LGM2014 and do a photography workshop using GIMP. Half joking turned into half-serious, and half-serious turned into “holy crap, I better buy my plane tickets!”…

The LGM team accepted a workshop proposal by Tobias Ellinghaus and myself to talk about darktable and FL/OSS photography workflows. I was also presenting alongside David Tschumperlé and Jérome Boulanger on a retrospective look at G’MIC over the last couple of years! (The title of the presentation was “A 2012-2013 retrospective of the G’MIC project : New features from artists/developers collaborations”).

To say that I was excited is an understatement! I was finally going to get a chance to meet the GIMP team face to face (I idle in the GIMP channel on irc quite a bit)! I was also going to be able to hang out with David, who I collaborate with quite often. And none other than Rolf Steinort was going to be there!


So after driving 700 miles to South Florida to drop our daughter off with grandma, Dot and I jumped on a plane in Miami! We finally hopped off the plane in Leipzig the next afternoon and explored a small bit of the city in preparation for the photowalk the next morning... (after first making a quick stop at the GIMP apartment where Michael and Femke were working on CoC stuff for LGM).

Photowalk

The next morning was a little rough for me, being jet-lagged by about 6 hours. I finally made it out and headed over to the University where I expected to have a small handful of people to meet (it ended up being a few more than I had anticipated, which was nice).

Kaffeehaus Riquet (Leipzig, Germany)

We had a stroll through town while making a few stops, such as Nikolaikirche (Church of St. Nicholas), the home of the Monday Demonstrations in East Germany. I probably should have done a better job directing the photowalk, but honestly I was mostly busy being enthralled by the city itself...

I mean, how can you not be when you turn a corner to find this:

Nikolaikirche (Church of St. Nicholas) and Tobias

Or seeing this when peeking inside the church:


Ceiling view of Nikolaikirche atrium

Of course, Leipzig is home to Johann Sebastian Bach, and his final resting place just happens to be in this church:


Thomaskirche (St. Thomas Church)

To get ready for my trip, I listened to Toccata and Fugue in D minor a LOT:



Panorama View of Thomaskirche

While walking around the city we also happened into the Leipzig Market. It was a bit of an overcast morning with a sun that was attempting (in vain) to break through the gray cloud cover...

Leipzig Old Town Hall with the sun behind the cupola...

One of the nice things about overcast days is that the sun gets a perfect diffusion material - clouds! This makes for much softer light and prettier portraits when shooting outside. We stopped for a moment so I could bring out my reflectors and talk a little bit about using them in various configurations for fill on days like that.

My favorite model demonstrating using a reflector to brighten shadows

What’s nice about using natural light and reflectors is that you get immediate feedback on what the modifiers are doing to your scene - you can see what the reflector is doing in real-time to adjust and modify to taste.

This was actually funny, because we had someone holding the reflector for us and a group of about 10 people (some with big, fancy gear) all firing cameras at one woman in the middle of the market. It got quite a few passers-by to stop and wonder what we were doing (and I even saw a few people snapping photos of us with their phones!). I wonder if they thought we were paparazzi, or doing a fashion shoot?

Of course, there were cool things to shoot at some of the vendors...

Flowers in the Leipzig Market

Even cooler to shoot some of the people there as well...

Organ Grinder in the Leipzig Market

First Day of LGM!


I was so busy the first day of LGM meeting new people, seeing faces for the first time, and attending the presentations that I didn’t really grab any photos of the event! (Yet another reason why I’d be a horrible photojournalist...).

There were some fun and great presentations given on the first day, though! Pippin led the proceedings off with the “State of Libre Graphics”, giving a nice overview of various projects. Nathan Willis gave a neat statistical presentation on traffic patterns of users across different discussion forums, and Peter Sikking did a great presentation on “UI Design for Full GEGL Integration in GIMP” (slides can be found here):


Of course, immediately following Peter I presented with David Tschumperlé and Jerome Boulanger on a retrospective of G’MIC over the past two years:


There were a ton of other great presentations, and I learned quite a bit (there was cool stuff from the Commons Machinery for instance on contextualizing creative works).

Speaking of David Tschumperlé, it was awesome to finally be able to meet and talk in person (after spending the past year collaborating on a few different things for G’MIC)! We were able to talk about many different things, and during the course of the week Jerome even managed to create a great filter to generate grain from a small sample in G’MIC!


David Tschumperlé poses in front of Thomaskirche...

We wandered around that evening a bit taking in the sights and grabbing some dinner...


No, I have no idea what Jerome Boulanger is pointing at...

Rolf Steinort (Meet the GIMP)!

Here was someone I was also really looking forward to meeting, Rolf Steinort (Meet the GIMP)! Rolf had to work earlier in the week, but was able to catch a train over for the second day of LGM. This was awesome. If you’ve ever spent a lot of time learning great things from someone online, you can imagine how neat it was to finally be able to sit down and spend some time with them.

We spent some time wandering around the city together and talking (he’s German and was able to give us some great history and context to our surroundings). It was a blast!

We even passed by a €1 store where they were selling really inexpensive goods, and I got the idea to build a quick light modifier (I didn’t want to pack too heavily for the trip). I had built a couple of cheap and quick ringflash modifiers previously and thought it might be fun to build one to use during the meeting. So I dropped in and spent about €5 purchasing a pasta straining bowl (colander), some cups and some tape.

I borrowed a knife from Rolf, and while we sat at a cafe getting milkshakes (Milchbar Pinguin) I hacked at the plastic bowl and taped stuff together to build a franken-ringflash. This was a very handy light modifier to have for the LGM party later that evening:


People were patient with me using the ringflash...

Rolf managed to capture a great shot of me...


just as I was getting ready to photograph him (though intimidating to the subject, the light quality is uniquely ring-flash-y):

Rolf Steinort through a DIY ringflash...

One nice thing about the way this ringflash was constructed (not being attached to the camera permanently) is that I can easily hold it off to the side and use it as a makeshift beauty dish as well:

David lit from the side using the ringflash as a beauty dish instead...

The rest of my photos from the LGM party night are here (or here as a Flickr set):


Not too bad for €5 worth of cheap goods from a corner store!

Yes, many people have asked for a build guide or tutorial for making the same ringflash you see here - I’ll be glad to write one up as soon as I finish processing photos from the trip! So stay tuned!

Some Portraits

Yes, the DIY ringflash was fun to play with, but I really wanted to use the opportunity to grab something a little more polished of the GIMP team. Thanks to Tobias Ellinghaus from the darktable team, I was able to borrow his umbrella and stand to shoot some quick one-light portraits.

I was also able to borrow Rolf’s tripod to hold up my diffusion panel as a makeshift umbrella in a pinch. This is how I shot the sample image I used for the workshop on Saturday:


Image used for the photo workshop on Saturday

Including an outtake that I thought was cute:


I didn’t have the luxury of my own room to set up in when it came time to shoot everyone else, though. I had to do my best with a small area at the top of the stairs in the University, next to the main meeting room...

Not the most ideal situation, but sometimes a challenge is fun! I set up very close to a single wall, with Tobias’ umbrella on a lightstand. The chair for the subjects was about a foot from the wall, and the umbrella was about the same distance away. We moved out from the back wall to let it fall to gray (which was tough, because umbrellas like to spill a lot of light everywhere!).

In the end I feel like I got some neat portraits out of it.

Rolf sat in for me while I adjusted lighting ratios, and I got a pretty nice photo of him in the process...

Rolf Steinort (Meet the GIMP)

Not everyone looked so somber! Antenne wouldn’t stop laughing and grinning long enough for me to get a serious photo, so I opted for one that suited her better:

Antenne

Ryan Lerch jumped in for one of my favorite portraits of the series in a more somber look:


While ginger coons had a slightly brighter portrait...


GIMP Portraits

The set of portraits that I really wanted to grab while I was at LGM was of all of the GIMP team who were able to attend. I wanted to put faces to the people who were giving their time and expertise to all of us in improving and maintaining such an important piece of software to me and my pursuits.

These are the people working hard to bring us a great piece of software like GIMP. They donate their talents to software that many of us use every day (and sometimes, perhaps, take for granted). So remember to donate whenever you can!

In no particular order:

Jehan

Ville

Simon

Null

Michael

Mitch

Peter

Pippin

Sven

I can’t thank the GIMP team enough for letting me tag along this year to LGM. It really was wonderful getting to meet everyone!

darktable Portraits

While at LGM I also got to meet and make friends with the darktable team! These guys were a lot of fun to hang out with, and Tobias opened the Photo Workshop on Saturday with a great intro to darktable! So I couldn’t pass up the chance to shoot their portraits as well:

Tobias

Simon

Johannes

I’m actually missing a portrait of Pascal for some reason. I must have spaced out and didn’t chase him down to get him to sit for a quick shot. I’m sorry Pascal!

Before I completely finished wrapping up the last shoot at LGM, Dot had a great idea: shoot the darktable team as an homage to Mick Rock’s famous cover for Queen II:


Which led to a photograph so funny that I was laughing the entire time while editing it!

darktable group portrait queen album cover mick rock Pat David

In Conclusion

This was an amazing trip! The opportunity to meet so many smart and talented people working hard on free software was just incredible. Everyone was quite passionate about what they were doing, and so many great ideas and discussions were had! I can’t recommend it enough if you’ve ever considered attending! Just go!

If you want to see the complete set of photos I took while I was there, I have them all in an album on Flickr here:

LGM2014 Album by Pat David

Road to Providence: a new game being made with Krita

Remember Playkot's Supercity? Game artist Paul Geraskin has just started a crowdfunding campaign to support work on a new indie game inspired by Howard Lovecraft: Road to Providence. As with Supercity, Road to Providence will be created with open source tools: Krita, Blender, and jMonkeyEngine.

Here's a video of their development process:

https://www.youtube.com/watch?v=3VvDwrp8214

 The artwork is already coming along nicely, despite the campaign having only started just now: