September 20, 2014

Concept of Baba Yaga Character

Here is a concept for another character – Baba Yaga. You can find many references to this name in Russian folklore, but in our story this old lady is an outstanding cybernetics scientist. This strong and uncompromising person bears a dark past, directly related to the birth of the main antagonist of the story – Koshchei The Deathless. Many thanks to Anastasia Majzhegisheva for the artwork.

Baba Yaga concept by Anastasia Majzhegisheva.

Fri 2014/Sep/19

  • I finally got off my ass and posted my presentation from GUADEC: GPG, SSH, and Identity for Beginners (PDF) (ODP). Enjoy!

  • Growstuff.org, the awesome gardening website that Alex Bailey started after GUADEC 2012, is running a fundraising campaign for the development of an API for open food data. If you are a free-culture minded gardener, or if you care about local food production, please support the campaign!

September 19, 2014

Fanart by Anastasia Majzhegisheva – 14

Marya Morevna. Pencil artwork by Anastasia Majzhegisheva.

Mirror, mirror

A female hummingbird -- probably a black-chinned -- hanging out at our window feeder on a cool cloudy morning.

[female hummingbird at the window feeder]

September 18, 2014

And now for some hardware (Onda v975w)

Prodded by Adam Williamson's fedlet work, and by my inability to get an Android phone to display anything, I bought an x86 tablet.

At first, I was more interested in buying a brand-name one, such as the Dell Venue 8 Pro Adam has, or the Lenovo Miix 2 that Benjamin Tissoires doesn't seem to get enough time to hack on. But all those tablets are around 300€ at most retailers, and have smaller 7- or 8-inch screens.

So I bought a "not exported out of China" tablet, the 10" Onda v975w. The prospect of getting a no-name tablet scared me a little. Would it be as "good" (read: bad) as a PadMini or an Action Pad?


Vrrrroooom.


Well, the hardware's pretty decent, and feels rather solid. There's a small amount of light leakage on the side of the touchscreen, but not something too noticeable. I wish it had a button on the bezel to mimic the Windows button on some other tablets, but the edge gestures should replace it nicely.

The screen is pretty gorgeous and its high DPI triggers the eponymous mode in GNOME.

With the help of various folks (Larry Finger, and the aforementioned Benjamin and Adam), I got the tablet to a state where I could use it to replace my force-obsoleted iPad 1 to read comic books.

I've put up a wiki page with the status of hardware/kernel support. It doesn't contain all my notes just yet (sound is working, touchscreen will work very very soon, and various "basic" features are being worked on).

I'll be putting up the fixed-up Wi-Fi driver and more instructions about installation on the Wiki page.

And if you want to make the jump, the tablets are available at $150 plus postage from Aliexpress.

Woodcut/Hedcut(ish) Effect



Rolf as a woodcut/hedcut

I was working on the About page over on PIXLS.US the other night. I was including some headshots of myself and one of Rolf Steinort when I got pulled off onto yet another tangent (this happens often to me).

The rest of my GIMP tutorials can be found here:




This time I was thinking of those awesome hand-painted(!) portraits by the artist Randy Glass that the Wall Street Journal uses.





Of course, the problem was that I had neither the time nor the skill to hand-paint a portrait in this style.

What I did have was a rudimentary understanding of a general effect that I thought would look neat. So I started playing around. I finally got to something I liked (see the lede image), but I didn't take very good notes while I was playing.

This meant that I had to go back and re-trace my steps and settings a couple of times before I could describe exactly what it was I did.

So after some trial and error, here is what I did to create the effect you see.

Desaturate


Starting with your base image, desaturate using a method you like. I'm going to use an old favorite of mine, Mairi:


The base image, desaturated.
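
(If you'd rather script this first step outside GIMP, ImageMagick can desaturate in one go — a sketch, with stand-in filenames:)

$ convert portrait.jpg -colorspace Gray portrait-gray.jpg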

Duplicate this layer, and on the duplicate run the G'MIC filter, “Graphic Novel” by Photocomix.

Filters → G'MIC
Artistic → Graphic Novel

Check the box to "Skip this step" for "Apply Local Normalization", and adjust the "Pencil amplitude" to taste (I ended up at about 66). This gives me this result:


After running G'MIC/Graphic Novel

I then adjusted the opacity of the G'MIC layer to taste, reducing it to about 75%. Then I created a new layer from visible (right-click the layer, “New from visible”).

Here is what I have so far:


On this new layer (it should be called “Visible” by default), run the GIMP filter:

Filters → Artistic → Engrave

If you don't have the filter, you can find the .scm at the registry here.

The only settings I change are the “Line width”, which I set to about 1/100 of the image height, and the “Line type”, which I make sure is set to “Black on bottom”. Oh, and I set the “Blur radius” to 1.
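
For the “Line width”, quick shell arithmetic gives the value for any image; for example, a 1600px-tall image:

$ echo $(( 1600 / 100 ))
16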

This leaves me with a top layer looking like this:


After running Engrave

(If you want to see something cool, step back a few feet from your monitor and look at this image - the Engrave plugin is neat).

Now on this layer, I will run the G'MIC deformation filter “random” to give some variety to the lines:

G'MIC → Deformations → Random

I used an amplitude of about 2.35 in my image. We are looking to just add some random waviness to the engrave lines. Adjust to taste.

I ended up with:


Results after applying G'MIC/Random deformation to the engrave layer.
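
(Incidentally, G'MIC also ships a command-line tool, so this step can be scripted. A rough sketch — I'm assuming the CLI's deform command corresponds to the GUI's Deformations → Random filter, and the filenames are stand-ins:)

$ gmic engrave-layer.png -deform 2.35 -o engrave-wavy.png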

At this point I will apply a layer mask to the layer. I will then copy the starting desaturated layer and paste it into the layer mask.


I added a layer mask to the engraved layer (Right-click the layer, “Add layer mask...” - initialize it to white). I then selected the lowest layer, copied it (Ctrl/Cmd + C), selected the layer mask and pasted (Ctrl/Cmd + V). Once pasted, anchor the selection to apply it to the mask.

This is what it looks like with the layer mask applied:


The engrave layer with the mask applied

At this point I will use a brush and paint over the background with black to mask more of the effect, particularly from the background and edges of her face and hair. Once I'm done, I'm left with this:


After cleaning up the edges of the mask with black

I'll now set the layer blending mode to “Darken Only”, and create a new layer from visible again.

Add a layer mask to the new visible layer (should be the top layer), copy the layer mask from the layer below it (the engrave layer), and paste it into the top layer mask:


Now adjust the levels of the top layer (not the mask!) by selecting it and opening the levels dialog:

Colors → Levels...

Adjust to taste. In my image I pulled the white point down to about 175.

At this point, my image looks like this:


After adjusting levels to brighten up the face a bit

At this point, create a new layer from visible again.

Now make sure that your background color is white.

On this new layer, I'll run a strange filter that I've never used before:

Filters → Distorts → Erase Every Other Row...

In the dialog, I'll set it to use “Columns”, and “Fill with BG”. Once it's done running, set the layer mode to “Overlay”. This leaves me with this:


After running “Erase Every Other Row...”

At this point, all that's left is to do any touchups you may want to do. I like to paint with white and a low opacity in a similar way to dodging an image. That is, I'll paint white with a soft brush on areas of highlights to accentuate them.

Here is my final result after doing this:



I'd recommend playing with each of the steps to suit your images. On some images, it helps to modify the parameters of the “Graphic Novel” filter to get a good effect. After you've tried it a couple of times through you should get a good feel for how the different steps change the final outcome.

As always, have fun and share your results! :)

Summary

There seem to be many steps, but it's not so bad once you've done it. In a nutshell:

  1. Desaturate the image, and create a duplicate of the layer.
  2. Run G'MIC/Graphic Novel filter, skip local normalization. Set layer opacity to about 40-60% (experiment).
  3. Create new layer from visible.
    1. Run Filters → Artistic → Engrave (not Filters → Distorts → Engrave!).
      • Set the “Line width” to ~ 1/100 of image height, black on bottom
    2. On the same engrave layer, run G'MIC → Deformation → Random
      • Set amplitude to taste
    3. Change layer mode to “Darken only”
    4. Add a layer mask, use the original desaturated layer for the mask
  4. Create new layer from visible
    1. Add a layer mask, using the original desaturated layer for mask again (or the mask from previous layer)
    2. Adjust levels of layer to brighten it up a bit
  5. Create (another) new layer from visible
    1. Set background color to white
    2. Run Filters → Distorts → Erase Every Other Row...
      • Set to columns, and fill with BG color
    3. Set layer blend mode to “Overlay”

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, or make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t, it’s not going to affect me writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.


September 17, 2014

What’s in a job title?

Over on Google+, Aaron Seigo in his inimitable way launched a discussion about people who call themselves community managers. In his words: “the “community manager” role that is increasingly common in the free software world is a fraud and a farce”. As you would expect when casting aspersions on people whose job is to talk to people in public, the post generated a great, and mostly constructive, discussion in the comments – I encourage you to go over there and read some of the highlights, including comments from Richard Esplin, my colleague Jan Wildeboer, Mark Shuttleworth, Michael Hall, Lenz Grimmer and other community luminaries. Well worth the read.

My humble observation here is that the community manager title is useful, but does not affect the person’s relationships with other community members.

First: what about alternative titles? Community liaison, evangelist, gardener, concierge, “cat herder”, ombudsman, Chief Community Officer, community engagement… all have been used as job titles to describe what is essentially the same role. And while I like the metaphors used for some of the titles like the gardener, I don’t think we do ourselves a service by using them. By using some terrible made-up titles, we deprive ourselves of the opportunity to let people know what we can do.

Job titles serve a number of roles in the industry: communicating your authority on a subject to people who have not worked with you (for example, in a panel or a job interview), and letting people know what you did in your job in short-hand. Now, tell me, does a “community ombudsman” rank higher than a “chief cat-herder”? Should I trust the opinion of a “Chief Community Officer” more than a “community gardener”? I can’t tell.

For better or worse, “Community manager” is widely used, and more or less understood. A community manager is someone who tries to keep existing community members happy and engaged, and grows the community by recruiting new members. The second order consequences of that can be varied: we can make our community happy by having better products, so some community managers focus a lot on technology (roadmaps, bug tracking, QA, documentation). Or you can make them happier by better communicating technology which is there – so other community managers concentrate on communication, blogging, Twitter, putting a public face on the development process. You can grow your community by recruiting new users and developers through promotion and outreach, or through business development.

While the role of a community manager is pretty well understood, it is a broad enough title to cover evangelist, product manager, marketing director, developer, release engineer and more.

Second: The job title will not matter inside your community. People in your company will give respect and authority according to who your boss is, perhaps, but people in the community will very quickly pigeon-hole you – are you doing good work and removing roadblocks, or are you a corporate mouthpiece, there to explain why unpopular decisions over which you had no control are actually good for the community? Sometimes you need to be both, but whatever you are predominantly, your community will see through it and categorize you appropriately.

What matters to me is that I am working with and in a community, working toward a vision I believe in, and enabling that community to be a nice place to work in where great things happen. Once I’m checking all those boxes, I really don’t care what my job title is, and I don’t think fellow community members and colleagues do either. My vision of community managers is that they are people who make the lives of community members (regardless of employers) a little better every day, often in ways that are invisible, and as long as you’re doing that, I don’t care what’s on your business card.

 

A follow up to yesterday's Videos news for 3.14

The more astute (or Wayland testing) amongst you will recognise mutter running a nested Wayland compositor. Yes, it means that Videos will work natively under Wayland.

Got to love indie films

It's not perfect, as I'm still seeing hangs within the Intel driver for a number of operations, but basic playback works, and the playback is actually within the same window and correctly hidden when in the overview ;)

Making of GNOME 3.14

The release of GNOME 3.14 is slowly approaching, so I stole some time from actual design work and created this little promo to show what goes into a release that probably isn’t immediately obvious (and a large portion of it doesn’t even make it in).

Watch on Youtube

I’d like to thank all the usual suspects that keep the wheels spinning, Matthias, Benjamin and Allan in particular. The crown goes to Lapo Calamandrei though, because the amount of work he’s done on Adwaita this cycle will really benefit us in the next couple of releases. Thanks everyone, 3.14 will be a great release*!

* I keep saying that every release, but you simply feel it when you’re forced to log in to your “old” GNOME session rather than jhbuild.

September 16, 2014

Videos 3.14 features

We've added a few, but nonetheless interesting, features to Videos in GNOME 3.14.

Auto-rotation of videos

If you capture videos in portrait orientation on your phone, we are now able to rotate them automatically in the movie player, as well as in the thumbnails.

Better streaming

You can now seek anywhere inside streamed videos, even if we didn't download all the way to that point. That's particularly useful for long videos, or slow servers (or a combination of both).

Thumbnail generation

Finally, videos without thumbnails in your videos directory will have thumbnails automatically generated, without having to browse them in Files. This makes the first experience of videos more pleasing to the eye.

What's next?

We'll work on integrating Victor Toso's work on grilo plugins, to show information about the films or TV series on your computer, such as grouping episodes of a series together, and showing genres, covers and synopses for films.

With a bit of luck, we should be able to provide you with more video content as well, through partners.

Back from Akademy 2014

So last weekend I came back from Akademy 2014; it was a loooong road, but really worth it of course!
Great to meet so many nice people, old friends and new ones. Lots of interesting discussions.

I won’t retell everything that happened, as it’s already been well covered on the Dot and in several blog posts on planet.kde, with lots of great photos in this gallery.

For my part, I’m especially happy to have met Jens Reuterberg and other people from the new Visual Design Group. We discussed the tools we have and how we could try to improve/resurrect the Karbon and Krita vector tools, and shared ideas about some redesigns, like for the network manager…

Then another important point was the BoF we had with all the other French people, about our local communication on the web and about planning for Akademy-Fr, which will be co-hosted again with Le Capitole du Libre in Toulouse in November.

Thanks again to everyone who helped organize it, and to KDE e.V. for the travel support that allowed me to be there.

PS: And thanks a lot, Adriaan, for the story, that was very fun. Héhé, sure I’ll think about drawing it when I have time. ;)

September 14, 2014

Global key bindings in Emacs

Global key bindings in emacs. What's hard about that, right? Just something simple like

(global-set-key "\C-m" 'newline-and-indent)
and you're all set.

Well, no. global-set-key gives you a nice key binding that works ... until the next time you load a mode that wants to redefine that key binding out from under you.

For many years I've had a huge collection of mode hooks that run when specific modes load. For instance, python-mode defines \C-c\C-r, my binding that normally runs revert-buffer, to do something called run-python. I never need to run python inside emacs -- I do that in a shell window. But I fairly frequently want to revert a python file back to the last version I saved. So I had a hook that ran whenever python-mode loaded to override that key binding and set it back to what I'd already set it to:

(defun reset-revert-buffer ()
  (define-key python-mode-map "\C-c\C-r" 'revert-buffer) )
(setq python-mode-hook 'reset-revert-buffer)

That worked fine -- but you have to do it for every mode that overrides key bindings and every binding that gets overridden. It's a constant chase, where you keep needing to stop editing whatever you wanted to edit and go add yet another mode-hook to .emacs after chasing down which mode is causing the problem. There must be a better solution.

A web search quickly led me to the StackOverflow discussion Globally override key bindings. I tried the techniques there; but they didn't work.

It took a lot of help from the kind folks on #emacs, but after an hour or so they finally found the key: emulation-mode-map-alists. It's only barely documented -- the key there is "The “active” keymaps in each alist are used before minor-mode-map-alist and minor-mode-overriding-map-alist" -- and there seem to be no examples anywhere on the web for how to use it. It's a list of alists mapping names to keymaps. Oh, clears it right up! Right?

Okay, here's what it means. First you define a new keymap and add your bindings to it:

(defvar global-keys-minor-mode-map (make-sparse-keymap)
  "global-keys-minor-mode keymap.")

(define-key global-keys-minor-mode-map "\C-c\C-r" 'revert-buffer)
(define-key global-keys-minor-mode-map (kbd "C-;") 'insert-date)

Now define a minor mode that will use that keymap. You'll use that minor mode for basically everything.

(define-minor-mode global-keys-minor-mode
  "A minor mode so that global key settings override annoying major modes."
  t "global-keys" 'global-keys-minor-mode-map)

(global-keys-minor-mode 1)

Now build an alist consisting of a list containing a single dotted pair: the name of the minor mode and the keymap.

;; A keymap that's supposed to be consulted before the first
;; minor-mode-map-alist.
(defconst global-minor-mode-alist (list (cons 'global-keys-minor-mode
                                              global-keys-minor-mode-map)))

Finally, set emulation-mode-map-alists to a list containing only the global-minor-mode-alist.

(setf emulation-mode-map-alists '(global-minor-mode-alist))

There's one final step. Even though you want these bindings to be global and work everywhere, there is one place where you might not want them: the minibuffer. To be honest, I'm not sure if this part is necessary, but it sounds like a good idea so I've kept it.

(defun my-minibuffer-setup-hook ()
  (global-keys-minor-mode 0))
(add-hook 'minibuffer-setup-hook 'my-minibuffer-setup-hook)

Whew! It's a lot of work, but it'll let me clean up my .emacs file and save me from endlessly adding new mode-hooks.

September 13, 2014

About panels and blocks - new elements for FreeCAD

I've more or less recently been working on two new features for the Architecture workbench of FreeCAD: Panels and furniture. Neither of these is in what we could call a finished state, but I thought it would be interesting to share some of the process here. Panels are a new type of object that inherits all...

September 12, 2014

Thu 2014/Sep/11

September 11, 2014

Making emailed LinkedIn discussion thread links actually work

I don't use web forums, the kind you have to read online, because they don't scale. If you're only interested in one subject, then they work fine: you can keep a browser tab for your one or two web forums perennially open and hit reload every few hours to see what's new. If you're interested in twelve subjects, each of which has several different web forums devoted to it -- how could you possibly keep up with that? So I don't bother with forums unless they offer an email gateway, so they'll notify me by email when new discussions get started, without my needing to check all those web pages several times per day.

LinkedIn discussions mostly work like a web forum. But for a while, they had a reasonably usable email gateway. You could set a preference to be notified of each new conversation. You still had to click on the web link to read the conversation so far, but if you posted something, you'd get the rest of the discussion emailed to you as each message was posted. Not quite as good as a regular mailing list, but it worked pretty well. I used it for several years to keep up with the very active Toastmasters group discussions.

About a year ago, something broke in their software, and they lost the ability to send email for new conversations. I filed a trouble ticket, and got a note saying they were aware of the problem and working on it. I followed up three months later (by filing another ticket -- there's no way to add to an existing one) and got a response saying be patient, they were still working on it. 11 months later, I'm still being patient, but it's pretty clear they have no intention of ever fixing the problem.

Just recently I fiddled with something in my LinkedIn prefs, and started getting "Popular Discussions" emails every day or so. The featured "popular discussion" is always something stupid that I have no interest in, but it's followed by a section headed "Other Popular Discussions" that at least gives me some idea what's been posted in the last few days. Seemed like it might be worth clicking on the links even though it means I'd always be a few days late responding to any conversations.

Except -- none of the links work. They all go to a generic page with a red header saying "Sorry it seems there was a problem with the link you followed."

I'm reading the plaintext version of the mail they send out. I tried viewing the HTML part of the mail in a browser, and sure enough, those links worked. So I tried comparing the text links with the HTML:

Text version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&amp;t=gde&amp;midToken=AQEqep2nxSZJIg&amp;ek=b2_anet_digest&amp;li=82&amp;m=group_discussions&amp;ts=textdisc-6&amp;itemID=5914453683503906819&amp;itemType=member&amp;anetID=98449
HTML version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&t=gde&midToken=AQEqep2nxSZJIg&ek=b2_anet_digest&li=17&m=group_discussions&ts=grouppost-disc-6&itemID=5914453683503906819&itemType=member&anetID=98449

Well, that's clear as mud, isn't it?

HTML entity substitution

I pasted both links one on top of each other, to make it easier to compare them one at a time. That made it fairly easy to find the first difference:

Text version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&amp;t=gde&amp;midToken= ...
HTML version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&t=gde&midToken= ...

Time to die laughing: they're doing HTML entity substitution on the plaintext part of their email notifications, changing & to &amp; everywhere in the link.

If you take the link from the text email and replace &amp; with &, the link works, and takes you to the specific discussion.

Pagination

Except you can't actually read the discussion. I went to a discussion that had been open for 2 days and had 35 responses, and LinkedIn only showed four of them. I don't even know which four they are -- are they the first four, the last four, or some Facebook-style "four responses we thought you'd like"? There's a button to click on to show the most recent entries, but then I only see a few of the most recent responses, still not the whole thread.

Hooray for the web -- of course, plenty of other people have had this problem too, and a little web searching unveiled a solution. Add a pagination token to the end of the URL that tells LinkedIn to show 1000 messages at once.

&count=1000&paginationToken=
It won't actually show 1000 (or all) responses -- but if you start at the beginning of the page and scroll down reading responses one by one, it will auto-load new batches. Yes, infinite scrolling pages can be annoying, but at least it's a way to read a LinkedIn conversation in order.
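
Both fixes are easy to test by hand before automating anything. A shell sketch, with $LINK standing in for the pasted URL:

$ echo "$LINK" | sed -e 's/&amp;/\&/g' -e 's/$/\&count=1000\&paginationToken=/'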

Making it automatic

Okay, now I know how to edit one of their URLs to make it work. Do I want to do that by hand any time I want to view a discussion? Noooo!

Time for a script! Since I'll be selecting the URLs from mutt, they'll be in the X PRIMARY clipboard. And unfortunately, mutt adds newlines so I might as well strip those as well as fixing the LinkedIn problems. (Firefox will strip newlines for me when I paste in a multi-line URL, but why rely on that?)

Here's the important part of the script:

import subprocess, sys, gtk

primary = gtk.clipboard_get(gtk.gdk.SELECTION_PRIMARY)
if not primary.wait_is_text_available() :
    sys.exit(0)
link = primary.wait_for_text()
link = link.replace("\n", "").replace("&amp;", "&") + \
       "&count=1000&paginationToken="
subprocess.call(["firefox", "-new-tab", link])

And here's the full script: linkedinify on GitHub. I also added it to pyclip, the script I call from Openbox to open a URL in Firefox when I middle-click on the desktop.

Now I can finally go back to participating in those discussions.

September 08, 2014

Dot Reminders

I read about cool computer tricks all the time. I think "Wow, that would be a real timesaver!" And then a week later, when it actually would save me time, I've long since forgotten all about it.

After yet another session where I wanted to open a frequently opened file in emacs and thought "I think I made a bookmark for that a while back", but then decided it's easier to type the whole long pathname rather than go re-learn how to use emacs bookmarks, I finally decided I needed a reminder system -- something that would poke me and remind me of a few things I want to learn.

I used to keep cheat sheets and quick reference cards on my desk; but that never worked for me. Quick reference cards tend to be 50 things I already know, 40 things I'll never care about and 4 really great things I should try to remember. And eventually they get buried in a pile of other papers on my desk and I never see them again.

My new system is working much better. I created a file in my home directory called .reminders, in which I put a few -- just a few -- things I want to learn and start using regularly. It started out at about 6 lines but now it's grown to 12.

Then I put this in my .zlogin (of course, you can do this for any shell, not just zsh, though the syntax may vary):

if [[ -f ~/.reminders ]]; then
  cat ~/.reminders
fi

Now, in every login shell (which for me is each new terminal window I create on my desktop), I see my reminders. Of course, I don't read them every time; but I look at them often enough that I can't forget the existence of great things like emacs bookmarks, or diff <(cmd1) <(cmd2).

And if I forget the exact keystroke or syntax, I can always cat ~/.reminders to remind myself. And after a few weeks of regular use, I finally have internalized some of these tricks, and can remove them from my .reminders file.

It's not just for tech tips, either; I've used a similar technique for reminding myself of hard-to-remember vocabulary words when I was studying Spanish. It could work for anything you want to teach yourself.

Although the details of my .reminders are specific to Linux/Unix and zsh, of course you could use a similar system on any computer. If you don't open new terminal windows, you can set a reminder to pop up when you first log in, or once a day, or whatever is right for you. The important part is to have a small set of tips that you see regularly.
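
For instance, the once-a-day variant needs nothing more than a crontab entry; this sketch relies on cron's usual behavior of mailing a job's output to you:

0 9 * * * cat $HOME/.reminders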

September 05, 2014

SVG Working Group Meeting Report — London

The SVG Working Group had a four day Face-to-Face meeting just before The Graphical Web conference in Winchester (UK). The meetings were hosted by Mozilla in their London office.

Here are some highlights of the meeting:

Day 1

Minutes

  • Symbol and marker placement shorthands:

    Map makers use symbols quite extensively. We decided at a previous meeting to add the ‘refX’ and ‘refY’ attributes (from <marker>) to <symbol> so that symbols can be aligned to a particular point on a map without having to do manual position adjustments. We have since been asked to provide ‘shorthand’ values for ‘refX’ and ‘refY’. I proposed adding ‘left’, ‘center’, and ‘right’ for ‘refX’, and ‘top’, ‘center’, and ‘bottom’ for ‘refY’ (defined as 0%, 50%, and 100% of the view box). These values follow those used in the ‘transform-origin’ property. We debated the usefulness and decided to postpone the decision until we had feedback from those using SVG for maps (see Day 4).

    For example, to center a symbol at the moment, one has to subtract off half the width and height from the ‘x’ and ‘y’ attributes of the <use> element:

      <symbol id="MySquare" viewBox="0 0 20 20">
        <rect width="100%" height="100%"
    	  style="fill:none;stroke:black;stroke-width:2px"/>
      </symbol>
      <use x="100" y="100" width="100" height="100"
           xlink:href="#MySquare"/>
    

    By using ‘refX’ and ‘refY’ set to ‘center’, one no longer needs to perform the manual calculations:

      <symbol id="MySquare" viewBox="0 0 20 20"
                      refX="center" refY="center">
        <rect width="100%" height="100%"
    	  style="fill:none;stroke:black;stroke-width:2px"/>
      </symbol>
      <use x="150" y="150" width="100" height="100"
                 xlink:href="#MySquare"/>
    
    A square symbol centered in an SVG.

    An example of a square <symbol> centered inside an SVG.

  • Marker and symbol overflow:

    One common ‘gotcha’ in using hand-written markers and symbols is that by default anything drawn outside the marker or symbol viewport is hidden. People sometimes naively draw a marker or symbol around the origin. Since this is the upper-left corner of the viewport, only one quarter of the marker or symbol is shown. We decided to change the default to not hide the region outside the viewport; however, if this is shown to break too much existing content, the change might be reverted (it is possible that some markers/symbols have hidden content outside the viewport).

  • Two triangle paths with markers on corners. Only one-fourth of each marker on the left path is shown.

    Example of markers drawn around origin point. Left: overflow=’hidden’ (default), right: overflow=’visible’.

  • Variable-stroke width:

    Having the ability to vary stroke width along a path is one of the most requested things for SVG. Inkscape has the Live Path Effect ‘Power Stroke’ extension that does just that. However, getting this into a standard is not a simple process. We must deal with all kinds of special cases. The most difficult part will be to decide how to handle line joins. (See my post from the Tokyo meeting for more details.) As a step towards moving this along, we need to decide how to interpolate between points. One method is to use a Centripetal Catmull-Rom function. Johan Engelen quickly added this function as an option to Inkscape’s Power Stroke implementation (which he wrote) for us to test.

Day 2

Minutes

  • Path animations:

    In the context of discussing the possibility of having a canonical path decomposition into Bezier curves (for speed optimization) we briefly discussed allowing animation between paths with different structures. Currently, SVG path animations require the start and end paths to have the same structure (i.e. same types of path segments).

  • Catmull-Rom path segments.

    We had a lengthy discussion on the merits of Catmull-Rom path segments. The main advantage of Catmull-Rom paths is that the path goes through all the specified points (unlike Bezier path segments where the path does not go through the handles). There are some disadvantages… adding a new segment changes the shape of the previous segment, the paths tend not to be particularly pretty, and if one is connecting data points, the curves have the tendency to over/under shoot the data. The majority of the working group supports adding these curves although there is some rather strong dissent. The SVG 2 specification already contains Catmull-Rom paths text.

    After discussing the merits of Catmull-Rom path segments we turned to some technical discussions: what exact form of Catmull-Rom should we use, how should start and end segments be specified, how should Catmull-Rom segments interact with other segment types, how should paths be closed?

    Here is a demo of Catmull-Rom curves.

Day 3

Minutes

  • <tref> decision:

    One problem I see with the working group is that it is dominated by browser interests: Opera, Google (both Blink), Mozilla (Gecko), and Adobe (Blink, Webkit, Gecko). (Apple and Microsoft aren’t actively involved with the group although we did have a Microsoft rep at this meeting.) This leaves those using SVG for other purposes sometimes high and dry. Take the case of <tref>. This element is used in the air-traffic control industry to shadow text so it is visible on the screen over multi-color backgrounds. Admittedly, this is not the best way to do this (the new ‘paint-order’ property is a perfect fit for this) but the fact is that it is being used and flight-control software can’t be changed at a moment’s notice. Last year there was a discussion on the SVG email list about deprecating <tref> due to some security issues. From reading the thread, it appeared the conclusion was reached that <tref> should be kept around using the same security model that <use> has.

    Deprecating <tref> came up again a few weeks ago and it was decided to remove the feature altogether and not just deprecate it (unfortunately I missed the call). The specification was updated quickly and Blink removed the feature immediately (Firefox had never implemented it… probably due to an oversight). It has reached the point of no-return. It seems that Blink in particular is eager to remove as much cruft as possible… but one person’s cruft is someone else’s essential tool. (<tref> had other uses too, such as allowing localization of Web pages through a server.)

  • Blending on ‘fill’ and ‘stroke’:

    We have already decided to allow multiple paint servers (color, gradient, pattern, hatch) on fills and strokes. It has been proposed that blending be allowed. This would follow the model of the ‘background-blend-mode’ property. (Blending is already allowed between various element using the ‘mix-blend-mode’ property’, available in Firefox (nightly), Chrome, and the trunk version of Inkscape.)

  • CSS Layout Properties:

    The SVG attributes: ‘x’, ‘y’, ‘cx’, ‘cy’, ‘r’, ‘rx’, ‘ry’ have been promoted to properties (see SVG Layout Properties). This allows them to be set via CSS. There is an experimental implementation in Webkit (nightly). It also allows them to be animated via CSS animations.

A pink square centered in SVG if attributes supported, nothing otherwise.

A test of support of ‘x’, ‘y’, ‘width’, and ‘height’ as properties. If supported, a pink square will be displayed on the center of the image.

Day 4

Minutes

  • Shared path segments (Superpaths):

    Sharing path segments between paths is quite useful. For example, the boundary between two countries could be given as one sub-path, shared between the paths of the two countries. Not only does this reduce the amount of data needed to describe a map but it also allows the renderer to optimize the aliasing between the regions. There is an example polyfill available.

    We discussed various syntax issues. One requirement is the ability to specify the direction of the inserted path. We settled on directly referencing the sub-path as d=”m 20,20 #subpath …” or d=”m 20,20 -#subpath…”, the latter for when the subpath should be reversed. We also decided that the subpath should be inserted into the path before any other operation takes place. This would nominally exclude having separate properties for each sub-path but it makes implementation easier.

  • Here, MySubpath is shared between two paths:

      <path id="MySubpath" d="m 150,80 c 20,20 -20,120 0,140"/>
      <path d="m 50,220 c -40,-30 -20,-120 10,-140 30,-20 80,-10
                       90,0 #MySubpath c 0,20 -60,30 -100,0 z"
    	style="fill:lightblue" />
      <path d="m 150,80 c 20,-14 30,-20 50,-20 20,0 50,40 50,90
                   0,50 -30,120 -100,70 -#MySubpath z"
    	style="fill:pink" />
    

    This SVG code would render as:

    Two closed paths sharing a common section.

    The two closed paths share a common section.

  • Stroke position:

    An often requested feature is to be able to position a stroke with some percentage inside or outside a path. We were going to punt this to a future edition of SVG but there seems to be quite a demand. The easiest way to implement this is to offset the path and then stroke that (remember, one has to be able to handle dashes, line joins, and end caps). If we can come up with a simple algorithm to offset a stroke we will add this to SVG 2. This is actually a challenging task as an offset of a Bezier curve is not a Bezier… thus some sort of approximation must be used. The Inkscape ‘Path->Linked Offset’ is one example of offsetting. So is the Inkscape Power Stroke Live Path Effect (available in trunk).

  • Symbol and marker placement shorthands, revisited:

    After feedback from mappers, we have decided to include the symbol and marker placement shorthands: ‘left’, ‘center’, ‘right’, ‘top’, and ‘bottom’.

  • Units in path data:

    Currently all path data is in User Units (pixels if untransformed). There is some desire to have the ability to specify a unit in the path data. Personally, I think this is mostly useless, especially as physical units (cm, mm, inch, etc.) are meaningless when there is no way to set a preferred pixel-to-inch ratio (and never will be). The one unit that could be useful is percent. In any case, we will be investigating this further.

Lots of other technical and administrative topics were discussed: improved DOM, embedding SVG in HTML, specification annotations, testing, etc.

September 04, 2014

Something Wicked This Way Comes...

I've been working on something, and I figured that I should share it with anyone who actually reads the stuff I publish here.

I originally started writing here as a small attempt at bringing tutorials for doing high-quality photography using F/OSS to everyone. So far, it's been amazing and I've really loved meeting and getting to know many like-minded folks.

I'm not leaving. Having just re-read that previous paragraph makes it sound like I am. I'm not.

I am, however, working on something new that I'd like to share with you. I've called it:


I've been writing to hopefully help fill in some gaps on high-quality photographic processes using all of the amazing F/OSS tools that so many great groups have built, and now I think it's time to move that effort into its own home.

F/OSS photography deserves its own site focused on demonstrating just how amazing these projects are and how fantastic the results can be when using them.

I'm hoping pixls.us can be that home. Pixel-editing for all of us!

I've been building the site in my spare time over the past couple of weeks (I'm building it from scratch, so it's going a little slower than just slapping up a wordpress/blogger/CMS site). I want the new site to focus on the content above all else, and to make it as accessible and attractive as possible for users. I also want to keep the quality of the content as high as possible.

If anyone would like to contribute anything to help out: expertise, artwork, images, tutorials and more, please feel free to contact me and let me know. I'm in the process of porting my old GIMP tutorials over to the new site (and probably updating/re-writing a bunch of it as well), so we can have at least some content to start out with.

If you want to follow along with my progress while I build out the site, I'm blogging about it on the site itself at http://pixls.us/blog. As mentioned in the comments, I actually do have an RSS feed for the blog posts; I just hadn't linked to it yet (working on it quickly). The location (should your feedreader not pick it up automatically now) is: http://pixls.us/blog/feed.xml.

If you happen to subscribe in a feedreader, please let me know if anything looks off or broken so I can fix it! :)

Things are in a constant state of flux at the moment (did I mention that I'm still building out the back end?), so please bear with me. Please don't hesitate for a moment to let me know if something looks strange, or if you have any suggestions!

When it's ready to go, I'm going to ask for everyone's help to get the word out: link to it, talk about it, etc. The sooner I can get it ready to go, the sooner we can help folks find out just how great these projects are and what they can do with them!

Excelsior!

September 03, 2014

(Locally) Testing ansible deployments

I’ve always felt my playbooks were undertested. I know about a possible solution of spinning up new OpenStack instances with the ansible nova module, but that felt too complex to be a good approach. Now I’ve found a quicker way to test playbooks, by using Docker.

In principle, all my test does is:

  1. create a docker container
  2. create a copy of the current ansible playbook in a temporary directory and mount it as a volume
  3. inside the docker container, run the playbook (a sketch follows below)
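
A minimal shell sketch of those three steps — the image name, paths, and the exact ansible-playbook invocation are my assumptions, not the actual test code:

tmpdir=$(mktemp -d)
cp -r ./playbook "$tmpdir/playbook"
# ansible-test-image stands in for any Docker image with ansible preinstalled.
docker run --rm -v "$tmpdir/playbook":/playbook ansible-test-image \
    ansible-playbook -i 'localhost,' -c local /playbook/site.yml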

This is obviously not perfect, since:

  • running a playbook locally vs connecting via ssh can be a different beast to test
  • can become resource intensive if you want to test different scenarios represented as docker images.

There is possibly more, but for my small use case it is a workable solution so far.

Find the code on github if you’d like to have a look. Improvements welcome!

 


September 02, 2014

Using strace to find configuration file locations

I was using strace to figure out how to set up a program, lftp, and a friend commented that he didn't know how to use it and would like to learn. I don't use strace often, but when I do, it's indispensable -- and it's easy to use. So here's a little tutorial.

My problem, in this case, was that I needed to find out what configuration file I needed to modify in order to set up an alias in lftp. The lftp man page tells you how to define an alias, but doesn't tell you how to save it for future sessions; apparently you have to edit the configuration file yourself.

But where? The man page suggested a couple of possible config file locations -- ~/.lftprc and ~/.config/lftp/rc -- but neither of those existed. I wanted to use the one that already existed. I had already set up bookmarks in lftp and it remembered them, so it must have a config file already, somewhere. I wanted to find that file and use it.

So the question was, what files does lftp read when it starts up? strace lets you snoop on a program and see what it's doing.

strace shows you all system calls being used by a program. What's a system call? Well, it's anything in section 2 of the Unix manual. You can get a complete list by typing: man 2 syscalls (you may have to install developer man pages first -- on Debian that's the manpages-dev package). But the important thing is that most file access calls -- open, read, chmod, rename, unlink (that's how you remove a file), and so on -- are system calls.

You can run a program under strace directly:

$ strace lftp sitename
Interrupt it with Ctrl-C when you've seen what you need to see.

Pruning the output

And of course, you'll see tons of crap you're not interested in, like rt_sigaction(SIGTTOU) and fcntl64(0, F_GETFL). So let's get rid of that first. The easiest way is to use grep. Let's say I want to know every file that lftp opens. I can do it like this:

$ strace lftp sitename |& grep open

I have to use |& instead of just | because strace prints its output on stderr instead of stdout.

That's pretty useful, but it's still too much. I really don't care to know about the bazillion files lftp opens in /usr/share/locale/en_US/LC_MESSAGES, or libraries like /usr/lib/i386-linux-gnu/libp11-kit.so.0.

In this case, I'm looking for config files, so I really only want to know which files it opens in my home directory. Like this:

$ strace lftp sitename |& grep 'open.*/home/akkana'

In other words, show me just the lines that have the word "open" followed later by the string "/home/akkana".

Digression: grep pipelines

Now, you might think that you could use a simpler pipeline with two greps:

$ strace lftp sitename |& grep open | grep /home/akkana

But that doesn't work -- nothing prints out. Why? Because grep buffers its output when it's writing to a pipe instead of a terminal, so when you pipe grep | grep, the second grep doesn't see anything until the first grep has collected quite a lot of output. (This comes up a lot with tail -f as well.) You can avoid that with

$ strace lftp sitename |& grep --line-buffered open | grep /home/akkana
but that's too much to type, if you ask me.

Back to that strace | grep

Okay, whichever way you grep for open and your home directory, it gives:

open("/home/akkana/.local/share/lftp/bookmarks", O_RDONLY|O_LARGEFILE) = 5
open("/home/akkana/.netrc", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/home/akkana/.local/share/lftp/rl_history", O_RDONLY|O_LARGEFILE) = 5
open("/home/akkana/.inputrc", O_RDONLY|O_LARGEFILE) = 5
Now we're getting somewhere! The file where it's getting its bookmarks is ~/.local/share/lftp/bookmarks -- and I probably can't use that to set my alias.

But wait, why doesn't it show lftp trying to open those other config files?

Using script to save the output

At this point, you might be sick of running those grep pipelines over and over. Most of the time, when I run strace, instead of piping it through grep I run it under script to save the whole output.

script is one of those poorly named, ungoogleable commands, but it's incredibly useful. It runs a subshell and saves everything that appears in that subshell, both what you type and all the output, in a file.

Start script, then run lftp inside it:

$ script /tmp/lftp.strace
Script started on Tue 26 Aug 2014 12:58:30 PM MDT
$ strace lftp sitename

After the flood of output stops, I type Ctrl-D or Ctrl-C to exit lftp, then another Ctrl-D to exit the subshell script is using. Now all the strace output was in /tmp/lftp.strace and I can grep in it, view it in an editor or anything I want.

So, what files is it looking for in my home directory and why don't they show up as open attempts?

$ grep /home/akkana /tmp/lftp.strace

Ah, there it is! A bunch of lines like this:

access("/home/akkana/.lftprc", R_OK)    = -1 ENOENT (No such file or directory)
stat64("/home/akkana/.lftp", 0xbff821a0) = -1 ENOENT (No such file or directory)
mkdir("/home/akkana/.config", 0755)     = -1 EEXIST (File exists)
mkdir("/home/akkana/.config/lftp", 0755) = -1 EEXIST (File exists)
access("/home/akkana/.config/lftp/rc", R_OK) = 0

So I should have looked for access and stat as well as open. Now I have the list of files it's looking for. And, curiously, it creates ~/.config/lftp if it doesn't exist already, even though it's not going to write anything there.
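
Incidentally, strace can do that filtering itself: -e trace=file limits the output to syscalls that take a filename (open, stat, access, mkdir and friends), and -o saves it to a file without needing script:

$ strace -e trace=file -o /tmp/lftp.strace lftp sitename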

So I created ~/.config/lftp/rc and put my alias there. Worked fine. And I was able to edit my bookmark in ~/.local/share/lftp/bookmarks later when I had a need for that. All thanks to strace.

August 29, 2014

Putting PackageKit metadata on the Fedora LiveCD

While working on the preview of GNOME Software for Fedora 20, one problem became very apparent: When you launched the “Software” application for the first time, it went and downloaded metadata and then built the libsolv cache. This could take a few minutes of looking at a spinner, and was a really bad first experience. We tried really hard to mitigate this, in that when we ask PackageKit for data we say we don’t mind the cache being old, but on a LiveCD or on first install there wasn’t any metadata at all.

So, what are we doing for F21? We can’t run packagekitd when constructing the live image as it’s a D-Bus daemon and will be looking at the system root, not the live-cd root. Enter packagekit-direct. This is an admin-only tool (no man page) installed in /usr/libexec that is designed to be run when you want to use the PackageKit backend without getting D-Bus involved.

For Fedora 21 we’ll be running something like DESTDIR=$INSTALL_ROOT /usr/libexec/packagekit-direct refresh in fedora-live-workstation.ks. This means that when the Live image is booted we’ve got both the distro metadata to use, and the libsolv files already built. Launching gnome-software then takes 440ms until it’s usable.
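
In kickstart terms, a rough sketch of that wiring — the %post --nochroot framing is my assumption; only the packagekit-direct command itself comes from the actual plan:

%post --nochroot
DESTDIR=$INSTALL_ROOT /usr/libexec/packagekit-direct refresh
%end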

Ansible Variables all of a Sudden Go Missing?

I’ve written a playbook which deploys a working development environment for some of our internal systems. I’ve tested it with various versions of RHEL. Yet when I ran it against a fresh install of Fedora it failed:

fatal: [192.168.1.233] => {'msg': "One or more undefined variables: 'ansible_lsb' is undefined", 'failed': True}

It turned out that ansible gets its facts through different programs on the remote machine. If some of these programs are not available (in this instance it was lsb_release), the variables are not populated, resulting in this error.

So check if all variables you access are indeed available with:

$ ansible -m setup <yourhost>
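
In this case the fix was simply to install the missing program on the target so the fact gets populated — a sketch, assuming redhat-lsb is the Fedora package that provides lsb_release:

$ ansible <yourhost> -m yum -a "name=redhat-lsb state=present" --sudo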

August 28, 2014

Debugging a mysterious terminal setting

For the last several months, I repeatedly find myself in a mode where my terminal isn't working quite right. In particular, Ctrl-C doesn't work to interrupt a running program. It's always in a terminal where I've been doing web work. The site I'm working on sadly has only ftp access, so I've been using ncftp to upload files to the site, and git and meld to do local version control on the copy of the site I keep on my local machine. I was pretty sure the problem was coming from either git, meld, or ncftp, but I couldn't reproduce it.

Running reset fixed the problem. But since I didn't know what program was causing the problem, I didn't know when I needed to type reset.

The first step was to find out which of the three programs was at fault. Most of the time when this happened, I wouldn't notice until hours later, the next time I needed to stop a program with Ctrl-C. I speculated that there was probably some way to make zsh run a check after every command ... if I could just figure out what to check.

Terminal modes and stty -a

It seemed like my terminal was getting put into raw mode. In programming lingo, a terminal is in raw mode when characters from it are processed one at a time, and special characters like Ctrl-C, which would normally interrupt whatever program is running, are just passed like any other character.

You can list your terminal modes with stty -a:

$ stty -a
speed 38400 baud; rows 32; columns 80; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = ;
eol2 = ; swtch = ; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R;
werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
-parenb -parodd cs8 -hupcl -cstopb cread -clocal -crtscts
ignbrk -brkint ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl -ixon -ixoff
-iuclc -ixany -imaxbel iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
-isig icanon -iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt
echoctl echoke

But that's a lot of information. Unfortunately there's no single flag for raw mode; it's a collection of a lot of flags. I checked the interrupt character: yep, intr = ^C, just like it should be. So what was the problem?

I saved the output with stty -a >/tmp/stty.bad, then I started up a new xterm and made a copy of what it should look like with stty -a >/tmp/stty.good. Then I looked for differences: meld /tmp/stty.good /tmp/stty.bad. I saw these flags differing in the bad one: ignbrk ignpar -iexten -ixon, while the good one had -ignbrk -ignpar iexten ixon. So I should be able to run:

$ stty -ignbrk -ignpar iexten ixon
and that would fix the problem. But it didn't. Ctrl-C still didn't work.

Setting a trap, with precmd

However, knowing some things that differed did give me something to test for in the shell, so I could test after every command and find out exactly when this happened. In zsh, you do that by defining a precmd function, so here's what I did:

precmd()
{
    stty -a | fgrep -- -ignbrk > /dev/null
    if [ $? -ne 0 ]; then
        echo
        echo "STTY SETTINGS HAVE CHANGED \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!"
        echo
    fi
}
Pardon all the exclams. I wanted to make sure I saw the notice when it happened.

And this fairly quickly found the problem: it happened when I suspended ncftp with Ctrl-Z.

stty sane and isig

Okay, now I knew the culprit, and that if I switched to a different ftp client the problem would probably go away. But I still wanted to know why my stty command didn't work, and what the actual terminal difference was.

Somewhere in my web searching I'd stumbled upon some pages suggesting stty sane as an alternative to reset. I tried it, and it worked.

According to man stty, stty sane is equivalent to

$ stty cread -ignbrk brkint -inlcr -igncr icrnl -iutf8 -ixoff -iuclc -ixany  imaxbel opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0 isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke

Eek! But actually that's helpful. All I had to do was get a bad terminal (easy now that I knew ncftp was the culprit), then try:

$ stty cread 
$ stty -ignbrk 
$ stty brkint
... and so on, trying Ctrl-C each time to see if things were back to normal. Or I could speed up the process by grouping them:
$ stty cread -ignbrk brkint
$ stty -inlcr -igncr icrnl -iutf8 -ixoff
... and so forth. Which is what I did. And that quickly narrowed it down to isig. I ran reset, then ncftp again to get the terminal in "bad" mode, and tried:
$ stty isig
and sure enough, that was the difference.

I'm still not sure why meld didn't show me the isig difference. But if nothing else, I learned a bit about debugging stty settings, and about stty sane, which is a much nicer way of resetting the terminal than reset since it doesn't clear the screen.

Siggraph 2014 report

Blender Foundation/Institute has been present at this year’s Siggraph again. In beautiful Vancouver we’ve caught up with many old friends, industry relations and made a lot of new connections.

Birds of a feather

As usual we had a loaded room with 120+ people attending both presentations. The best part, I still find, is the introduction round: hearing where everyone comes from and what they do with Blender is always cool – including surprising visitors from the (film) industry (like Image Engine, BMW, Digital Tutors).

My presentation slides (pdf) can be downloaded here.

Tradeshow

Having a small booth on the show always works very well to get meetings organized and to reach out to many people you would otherwise never meet. Instead of having to hunt for new contacts, they’re just dropping by :) Here are my notes and impressions from Siggraph in general.

  • This year we had a space next to the Krita Foundation – and of course we immediately removed the dividing barrier to make it a 30-foot-long presentation! Krita is really impressive these days – they’re definitely leaving GIMP behind in some areas, becoming the artists’ choice, especially for more serious work and productions.
  • We also had support from HP and CGCookie again this year – making it financially possible to do this presentation. Thanks a lot guys!
  • We showed OpenSubdiv – and (yes, really) Blender is the first of the 3d tools offering GPU OpenSubdiv in a release. The Pixar folks were very happy with that, and complimented us on the quality of the feedback they got (our developer Sergey Sharybin fixed several issues for them).
  • OpenSubdiv is going to get an important upgrade with 3.0 this year, which will speed up initialization time five-fold (or better).
  • Cycles – this is still our flagship project, and there’s serious interest from parties to pick this up for their own needs – including evaluating it as an in-house rendering engine for VFX. Can’t say more about this though…
  • Several visitors mentioned their interest – or active support – for the Blender 101 project (a release-compatible Blender version configured for learning purposes). Follow-ups with big companies and educational initiatives are being done now – will post about this when things get more tangible.
  • Path-tracers! Everyone makes ray-tracers these days, and AMD proudly showed their own OpenCL render engine – codenamed “Fire Render” for now. A release date is unknown. According to the AMD contact they would release it under a permissive open source license – maybe even public domain.
    The render engine was fast and looked good – but they mainly showed the obligatory shiny car on a floor with a skydome. It’s like Cycles in 2011 – they’ve got a way to go before it’s a production render system (if they ever intend it to be one).
  • AMD’s OpenCL progress is still fuzzy – but the general message is that we should try to split up the render kernel into smaller parts; graphics cards just don’t like big kernels, and we might run into similar issues with CUDA sooner rather than later anyway.
  • I had a long chat with Khronos’ president on this topic too – according to him we shouldn’t give up so easily; an industry-compatible OpenCL compiler shouldn’t have such problems building Cycles. To be continued…
  • Meanwhile, Intel had their render engine on show too – Embree – with amazingly detailed furry character renders… nearly realtime, on a ‘regular’ 16-core Xeon! The Intel contact was amazed I hadn’t heard of it… but a bit later I learned we already use Embree’s BVH in Cycles.
  • Nvidia is pushing their remote rendering (Grid, “cloud”) offering even further – including offering us access to a 15 x 8 GPU system for testing and rendering of final frames for Gooseberry. We’ll do follow-ups on that.
  • Microsoft Developer support walked by – finally a contact to try to get a couple of free MSVC Pro license keys from! First follow-ups have been done now.
  • Interesting open source compositing project: I got a long demo by the lead developer of Natron – an extremely promising (“Nuke-style”) standalone compositor using OpenFX plugins such as TuttleOFX. I found the OpenFX plugins very disappointing though… no open release should accept this level of crappy overlay watermarks.
  • The guys from Image Engine (Neill Blomkamp’s fx house) told me they are working on an open compositor as well – Gaffer. Had no time yet to inspect it.
  • X3D – the Web3D consortium walked by with quite a big delegation. They made a very warm and convincing plea for keeping good support for X3D files (“3d on the web”), and even for further authoring of interactive 3d content that can be published on the web efficiently this way. X3D has the benefit that the display engines (JavaScript WebGL or others) can stay separated from the content… just like .html works across all browsers.
  • Presto! A friendly Pixar employee gave us an extensive demo of their in-house animation tool. Of course it has real-time OpenSubdiv fur (yep, back to work!). But what amazed me most were the Presto workflow, UI concepts and metaphors… it all feels quite familiar (selecting, responsiveness of the UI in general). It’s a bit like Maya too (but then done well ;), and any Blender animator would be up to speed with Presto in a few hours.

In general – Siggraph was amazing as usual, and Blender is still at the center of attention for many people, with interest from the industry steadily growing. We’re doing really well – and the support we had last year from companies (Epic, Valve, Google, Nvidia, HP, …) only helped to show how we as Blender Foundation and as a Blender community can keep growing: in small steps, relaxed, by having a lot of fun together, and by staying focused on building the best free/open source 3D creation tools for our artists.

Special thanks to everyone who helped out on Siggraph: Sean, Jonathan, Wes, Mike, Francesco, Patrick, David, Joseph and Oscar!

Ton Roosendaal
August 28, 2014

August 27, 2014

5 UX Tips for Developers

I wrote an article walking through 5 UX Tips for Developers over at the Red Hat Developer blog. They are some general suggestions for improving applications, geared towards developers without training in UX. If this sounds like it might be useful to you, feel free to take a look:

5 UX Tips for Developers

Thanks :)

August 26, 2014

GIMP 2.8.14 Released

Yesterday's 2.8.12 release had broken library versioning, so we had to roll out GIMP 2.8.14 today. The only change is the fixed libtool versioning. Please do not distribute any binaries of yesterday's broken 2.8.12 release, and get GIMP 2.8.14 using the torrent: http://download.gimp.org/pub/gimp/v2.8/gimp-2.8.14.tar.bz2.torrent

Open Flight Controllers

In my last multirotor-themed entry I gave an insight into the magical world of flying cameras. I also made a bit of a promise to write about the open source flight controllers that are out there. Here are a few that I’ve had the luck of laying my hands on. We’ll start with some acro FCs, which serve a very different purpose from the proprietary NAZA I started on: these are meant for fast and acrobatic flying, not for flying your expensive cameras on a stabilized gimbal. Keep in mind, I’m still fairly inexperienced, so I don’t want to go into specifics and provide my settings just yet.

Blackout: Potsdam from jimmac on Vimeo.

CC3D

The best thing to be said about the CC3D is that, while aimed at acro pilots, it’s relatively newbie friendly. The software is fairly straightforward. Getting the Qt app built, setting up the radio, tuning motors and tweaking gains is not going to make your eyes roll the way APM’s ground station would (more on that in a future post, maybe). The defaults are reasonable and help you achieve a maiden flight rather than a maiden crash. Updating to the latest firmware over the air is seamless.

A large number of receivers and connection methods are supported: not only the classic PWM or the more reasonable “one cable” CPPM method, but even Futaba’s proprietary SBUS can be used with the CC3D. I’ve flown it with a Futaba 8J, a 14SG and even the Phantom radio (I actually quite like the compact receiver, and the sticks on the TX feel good. Maybe it’s just that it’s something I started on). As you’re gonna be flying proximity mostly, range is not an issue, unless you’re dealing with external interference, where a more robust frequency-hopping radio would be safer. Without a GPS “brake” or even a barometer, losing signal for even a second is fatal. It’s extremely nasty to get a perfect 5.8 GHz video feed of your unresponsive quad plummeting to the ground :)

Overall a great board and software, and with so much competition, the board price has come down considerably recently. You can get non-genuine boards for around EUR20-25 on ebay. You can learn more about CC3D on the OpenPilot website

Naze32

Sounding very similar to the popular DJI flight controller, this open board is built around a 32-bit STM32 processor. Theoretically it could be used to fly a bit larger kites, with features like GPS hold. You’re not limited to the popular quad or hexa setups with it either; you can go really custom by defining your own motor mix. But you’d be stepping into the realm of only a few, and I don’t think I’d trust my camera equipment to a platform that hasn’t been as extensively tested.

Initially I didn’t manage to get the cheap acro variant ideal for the minis, so I got the ‘bells & whistles’ edition, only missing the GPS module. The mag compass and air pressure barometer is already on the board, even though I found no use for altitude hold (BARO). You’ll still going to worry about momentum and wind so reaching for those goggles mid flight is still not going to be any less difficult than just having it stabilized.

If you don’t count some youtube videos, there’s not a lot of handholding for the naze32. People assume you have prior experience with similar FCs. There are multiple choices of configuration tools, but I went for the most straight forward one — a Google Chrome/Chromium Baseflight app. No compiling necessary. It’s quite bare bones, which I liked a lot. Reasonably styled few aligned boxes and CLI is way easier to navigate than the non-searchable table with bubblegum styling than what APM provides for example.

One advanced technique that caught my eye is ESC calibration, as the typical process is super flimsy and tedious: to set the full range of speeds based on your radio, you usually need to make sure to provide power to the RX, and to set the top and bottom throttle levels on each ESC. With this FC, you can actually set the throttle levels from the CLI, calibrating all ESCs at the same time. Very clever and super useful.

Another great feature is that you can have up to three settings profiles, depending on the load, wind conditions and the style you’re going for. Typically when flying proximity, between trees and under park benches, you want very responsive controls at the expense of fluid movement. On the other hand, if you plan on going up and fast, pretending to be a plane (or a bird), you really need that fluid, non-jittery movement. It’s not a setting you change mid-flight, using up a channel, but rather something you choose before arming.

To do it, you hold the throttle down and yaw to the left, and with the elevator/aileron stick you choose the mode: left is for preset 1, up is for preset 2 and right is for preset 3. Pulling the pitch down will recalibrate the IMU. It’s good to solder on a buzzer that will help you find a lost craft when you trigger it with a spare channel (it can beep on low voltage too). The same buzzer will beep when selecting profiles as well.

As for actual flying characteristics, the raw rate mode, which is a little tricky to master (and I still have trouble flying 3rd person with it), is very solid. It feels like a much larger craft, very stable. There’s also quite a nice feature in the form of HORI mode, where you get stabilized flight (the kite levels itself when you don’t provide controls) but no limit on the angle, so you’re still free to do flips. I can’t say I’ve mastered PID tuning to really get the kind of control over the aircraft I would want. Regardless of tweaking the control characteristics, you won’t get a nice fluid video flying HORI or ANGLE mode, as the self-leveling will always do a little jitter to compensate for wind or inaccurate gyro readings, a jitter which isn’t there when flying rate. Stabilizing the footage in post gets rid of it mostly, but not perfectly:

Minihquad in Deutschland

You can get the plain acro version for about EUR30 which is an incredible value for a solid FC like this. I have a lot of practice ahead to truly get to that fluid fast plane-like flight that drew me into these miniquads. Check some of these masters below:

APM and Sparky next time. Or perhaps you’d be more interested in the video link first? Let me know in the comments.

Update: Turns out the Naze32 supports many serial protocols apart from CPPM, such as Futaba SBUS and Graupner SUMD.

August 25, 2014

Mon 2014/Aug/25

  • The Safety and Privacy team

    During GUADEC we had a Birds-of-a-feather session (BoF) for what eventually became the Safety Team. In this post I'll summarize the very raw minutes of the BoF.

    Locks on bridge

    What is safety in the context of GNOME?

    Matthew Garrett's excellent keynote at GUADEC made a point that GNOME should be the desktop that takes care of and respects the user, as opposed to being just a vehicle for selling stuff (apps, subscriptions) to them.

    I'll digress for a bit to give you an example of "taking care and respecting the user" in another context, which will later let me frame this for GNOME.

    Safety in cities

    In urbanism circles, there is a big focus on making streets safe for everyone, safe for all the users of the street. "Safe" here means many things:

    • Reducing the number of fatalities due to traffic accidents.
    • Reducing the number of accidents, even if they are non-fatal, because they waste everyone's time.
    • Making it possible for vulnerable people to use the streets: children, handicapped people, the elderly.
    • Reducing muggings and crime on the streets.
    • Reducing the bad health effects of a car-centric culture, where people can't walk to places they want to be.

    It turns out that focusing on safety automatically gives you many desirable properties in cities — better urbanism, not just a dry measure of "streets with few accidents".

    There is a big correlation between the speed of vehicles and the proportion of fatal accidents. Cities that reduce maximum speeds in heavily-congested areas will get fewer fatal accidents, and fewer accidents in general — the term that urbanists like to use is "traffic calming". In Strasbourg you may have noticed the signs that mark the central island as a "Zone 30", where 30 Km/h is the maximum speed for all vehicles. This lets motor vehicles, bicycles, and pedestrians share the same space safely.

    Zone 30 End of Zone 30

    Along with traffic calming, you can help vulnerable people in other ways. You can put ramps on curbs where you cross the street; this helps people on wheelchairs, people carrying children on strollers, people dragging suitcases with wheels, skaters, cyclists. On sidewalks you can put tactile paving — tiles with special reliefs so blind pedestrians can feel where the "walking path" is, or where the sidewalk is about to end, or where there is a street crossing. You can make traffic lights for pedestrians emit a special sound when it is people's turn to cross the street — this helps the blind as well as those who are paying attention to their cell phone instead of the traffic signals. You can make mass transit accessible to wheelchairs.

    Once you have slow traffic, accessible mass transit, and comfortable/usable sidewalks, you get more pedestrians. This leads to more people going into shops. This improves the local economy, and reduces the amount of money and time that people are forced to waste in cars.

    Once you have people in shops, restaurants, or cafes at most times of the day, you get fewer muggings — what Jane Jacobs would call "eyes on the street".

    Once people can walk and bike safely to places they actually want to go (the supermarket, the bakery, a cafe or a restaurant, a bank), they automatically get a little exercise, which improves their health, as opposed to sitting in a car for a large part of the day.

    Etcetera. Safety is a systemic thing; it is not something you get by doing one single thing. Not only do you get safer streets; you also get cities that are more livable and human-scaled, rather than machine-scaled for motor vehicles.

    And this brings us to GNOME.

    Safety in GNOME

    Strasbourg     Cathedral, below the rose

    "Computer security" is not very popular among non-technical users, and for good reasons. People have friction with sysadmins or constrained systems that don't let them install programs without going through bureaucratic little processes. People get asked for passwords for silly reasons, like plugging a printer to their home computer. People get asked questions like "Do you want to let $program do $thing?" all the time.

    A lot of "computer security" is done from the viewpoint of the developers and the administrators. Let's keep the users from screwing up our precious system. Let's disallow people from doing things by default. Let's keep control for ourselves.

    Of course, there is also a lot of "computer security" that is desirable. Let's put a firewall so that vandals can't pwn your machine, and so that criminals don't turn your computer into a botnet's slave. Let's keep rogue applications (or rogue users) from screwing up the core of the system. Let's authenticate users so a criminal can't access your bank account.

    Security is putting an armed guard at the entrance of a bank; safety is having enough people in the streets at all times of the day so you don't need the police most of the time.

    Security is putting iron bars in low-storey windows so robbers can't get in easily; safety is putting iron railings in high-storey balconies so you don't fall over.

    Security is disallowing end-user programs from reading /etc/shadow so they can't crack your login passwords; safety is not letting a keylogger run while the system is asking you for your password. Okay, it's security as well, but you get the idea.

    Safety is doing things that prevent harm to users.

    Strasbourg     Cathedral, door detail

    Encrypt all the things

    A good chunk of the discussion during the meeting at GUADEC was about existing things that make our users unsafe, or that inadvertently reveal users' information. For example, we have some things that don't use SSL/TLS by default. Gnome-weather fetches the weather information over unencrypted HTTP. This lets snoopers figure out your current location, or your planned future locations, or the locations where people related to you might live. (And in more general terms, the weather forecasts you check are nobody's business but yours.)

    Gnome-music similarly fetches music metadata over an unencrypted channel. In the best case it lets a snooper know your taste in music; in the worst case it lets someone correlate your music downloads with your music purchases — the difference is a liability to you.

    Gnome-maps fetches map tile data over an unencrypted connection. This identifies places you may intend to travel; it may also reveal your location.

    Strasbourg     Cathedral, stained glass window

    But I don't have a nation-state adversary

    While the examples above may seem far-fetched, they go back to one of the biggest problems with the Internet: unencrypted content is being used against people. You may not have someone to hide from, but you wouldn't want to be put in an uncomfortable situation just from using your software.

    You may not be a reckless driver, but you still put on seatbelts (and you would probably not buy a car without seatbelts).

    We are not trying to re-create Tails, the distro that tries to maintain your anonymity online, but we certainly don't want to make things easy for the bad guys.

    During the meeting we agreed to reach out to the Tails / Tor people so that they can tell us where people's identifying information may leak inadvertently; if we can fix these things without a specialized version of the software, everyone will be safer by default.

    Sandbox all the things

    While auditing code, or changing code to use encrypted connections, can be ongoing "everyday" work, there's a more interesting part to all of this. We are moving to sandboxed applications, where running programs cannot affect each other, or where an installed program doesn't affect the installed dependencies for other programs, or where programs don't have access to all your data by default. See Allan Day's posts on sandboxed apps for a much more detailed explanation of how this will work (parts one and two).

    We have to start defining the service APIs that will let us keep applications isolated from the user's personal data, that is, to avoid letting programs read all of your home directory by default.

    Some services will also need to do scrubbing of sensitive data. For example, if you want to upload photos somewhere public, you may want the software to strip away the geolocation information, the face-recognition data, and the EXIF data that reveals what kind of expensive camera you have. Regular users are generally not aware that this information exists; we can keep them safer by asking for their consent before publishing that information.
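    For a concrete idea of what scrubbing means, this is the sort of thing you can already do by hand with the exiftool utility (just an illustration of the operation, not a proposed GNOME API; the filename is made up):

    # strip all metadata (EXIF, XMP, GPS tags, ...) before publishing;
    # exiftool keeps a backup copy as photo.jpg_original
    exiftool -all= photo.jpg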

    Strasbourg     Cathedral, floor grate

    Consent, agency, respect

    A lot of uncomfortable, inconvenient, or unsafe software is like that because it doesn't respect you.

    Siloed software that doesn't let you export your data? It denies you your agency to move your data to other software.

    Software that fingerprints you and sends your information to a vendor? It doesn't give you informed consent. Or as part of coercion culture, it sneakily buries that consent in something like, "by using this software, you agree to the Terms of Service" (terms which no one ever bothers to read, because frankly they are illegible).

    Software that sends your contact list to the vendor so it can spam them? This is plain lack of respect, lack of consent, and more coercion, as those people don't want to be spammed in the first place (and you don't want to be the indirect cause).

    Allan's second post has a key insight:

    [...] the primary purpose of posing a security question is to ascertain that a piece of software is doing what the user wants it to do, and often, you can verify this without the user even realising that they are being asked a question for security purposes.

    We can take this principle even further. The moment when you ask a security question can be an opportunity to present useful information or controls – these moments can become a valuable, useful, and even enjoyable part of the experience.

    In a way, enforcing the service APIs upon applications is a way of ensuring that they ask for your consent to do things, and that they respect your agency in doing things which naive security-minded software may disallow "for security reasons".

    Here is an example:

    Agency: "I want to upload a photo"
    Safety: "I don't want my privacy violated"
    Consent: "Would you like to share geographical information, camera information, tags?"

    L'amour

    Pattern Language

    We can get very interesting things if we distill these ideas into GNOME's Pattern Language.

    Assume we had patterns for Respect the user's agency, for Obtain the user's consent, for Maintain the user's safety, and for Respect the user's privacy. These are not written yet, but they will be, shortly.

    We already have prototypal patterns called Support the free ecosystem and User data manifesto.

    Pattern languages start being really useful when you have a rich set of connections between the patterns. In the example above about sharing a photo, we employ the consent, privacy, and agency patterns. What if we add Support the free ecosystem to the mix? Then the user interface to "paste a photo into your instant-messaging client" may look like this:

    Mockup of 'Insert photos' dialog

    Note the defaults:

    • Off for sharing metadata which you may not want to reveal by default: geographical information, face recognition info, camera information, tags. This is the Respect the user's privacy pattern in action.

    • On for sharing the license information, and to let you pick a license right there. This is the Support the free ecosystem pattern.

    If you dismiss the dialog box with "Insert photos", then GNOME would do two things: 1) scrub the JPEG files so they don't contain metadata which you didn't choose to share; 2) note in the JPEG metadata which license you chose.

    In this case, Empathy would not communicate with Shotwell directly — applications are isolated. Instead, Empathy would make use of the "get photos" service API, which would bring up that dialog, and which would automatically run the metadata scrubber.

    Resources

August 24, 2014

One of them Los Alamos liberals

[Adopt-a-Highway: One of them Los Alamos liberals] I love this Adopt-a-Highway sign on Highway 4 on the way back down from the Jemez.

I have no idea who it is (I hope to find out, some day), but it gives me a laugh every time I see it.

August 23, 2014

Going to Akademy 2014

Akademy 2014

In two weeks I’m going to Akademy 2014, the annual summit of the KDE community.
This year it will take place in Brno, at the University of Technology, from September 6th to 12th. As usual, the main talks will be during the first two days (6th-7th); then we will have a week filled with Birds-of-a-Feather sessions, workshops and meetings. I’ll present a workshop about “Creating interface mockups and textures with KDE software” on Monday the 8th.

Akademy is a really important event for the KDE community, and for Free Software more generally. It can happen thanks to the support of some generous sponsors. You can become one here, or make a donation to support KDE throughout the year here.

See you @ Akademy 2014!

Ideal kitchen?

I'm always quite fascinated by medieval-like kitchens; I'd like to have one for myself one day... But until the moment one is rich enough to buy an old medieval farm-castle in France, one must stick with dreaming. I also happened to need a new background for the Inventário de receitas. So (after I saw this amazing...

August 22, 2014

Rocker Concept

Anastasia Majzhegisheva brings a quick concept of the Rocker character. Rocker is the husband of Mechanic-Sister, and… well, you can see he is a tough guy.


Rocker
Artwork by Anastasia Majzhegisheva

P.S. BTW, Anastasia has a Google+ page; you can find a lot of her artwork there!

August 21, 2014

Call for Content: Blenderart Magazine issue #46

We are ready to start gathering up tutorials, making of articles and images for Issue # 46 of Blenderart Magazine.

The theme for this issue is “FANtastic FANart”.

This is going to be a seriously fun issue for everyone to take part in. We are going to honor and pay homage to our favorite artists by creating an issue full of Fanart.

At some point we all give in to the overwhelming urge to re-create our favorite characters, logos etc. We have no intention of claiming the idea as our own, we are simply practicing our craft, improving our skills and showing our love to those artists whose work inspires our own.

So in this issue we are looking for tutorials or “making of” articles on:

  • Personal Fanart projects that you have done to practice your skills and or for fun
  • A nice short summary of why this project inspired you, what you learned

*warning: lack of submissions could result in an entire issue of strange sculpting experiments, half-completed models and a gallery filled with random bad sketches by yours truly…. :P …… goes off to start filling sketchbook with hundreds of stick figures, just in case. :P

Articles

Send in your articles to sandra
Subject: “Article submission Issue # 46 [your article name]”

Gallery Images

As usual you can also submit your best renders based on the theme of the issue. The theme of this issue is “FANtastic FANart”. Please note if the entry does not match with the theme it will not be published.

Send in your entries for gallery to gaurav
Subject: “Gallery submission Issue # 46”

Note: Images should be at most 1024px wide.

Last date for submissions: October 5, 2014.

Good luck!
Blenderart Team

Papagayo packages for Windows

For a long time we have received many requests to provide Papagayo packages for Windows. I am aware of the Papagayo 2.0 bump from the original developers, and I am considering porting the changes from our custom version of Papagayo into their version as soon as possible. Unfortunately, right now I’m busy with other priorities, so it’s very hard to tell when this will actually get done.

Papagayo screenshot (Windows)

Papagayo screenshot (Windows)

But there is good news! As a result of our recent collaboration with a small animation studio from Novosibirsk, we have come up with a build of Papagayo which works on Windows. The build is pretty crappy, but it is working.

Here’s the download link: papagayo-1.2-5-win.zip

Note: To start the application, unpack the archive and run the “papagayo.bat” file.

Papagayo update

I am happy to announce that we have made a small update to our Papagayo packages, which fixes an issue with the FPS settings value. In previous versions the FPS value was internally messed up when you loaded a new audio file, leading to incorrect synchronization results. This update has no other changes and is recommended for all users.

Download updated packages

August 20, 2014

Mouse Release Movie

[Mouse peeking out of the trap] We caught another mouse! I shot a movie of its release.

Like the previous mouse we'd caught, it was nervous about coming out of the trap: it poked its nose out, but didn't want to come the rest of the way.

[Mouse about to fall out of the trap] Dave finally got impatient, picked up the trap and turned it opening down, so the mouse would slide out.

It turned out to be the world's scruffiest mouse, which immediately darted toward me. I had to step back and stand up to follow it on camera. (Yes, I know my camera technique needs work. Sorry.)

[scruffy mouse, just released from trap] [Mouse bounding away] Then it headed up the hill a ways before finally lapsing into the high-bounding behavior we've seen from other mice and rats we've released. I know it's hard to tell in the last picture -- the photo is so small -- but look at the distance between the mouse and its shadow on the ground.

Very entertaining! I don't understand why anyone uses killing traps -- even if you aren't bothered by killing things unnecessarily, the entertainment we get from watching the releases is worth any slight extra hassle of using the live traps.

Here's the movie: Mouse released from trap. [Mouse released from trap]

August 17, 2014

Krita booth at Siggraph 2014

This year, for the first time, we had a Krita booth at Siggraph. If you don’t know about it, Siggraph is the biggest yearly Animation Festival, which happened this year in Vancouver.
We were four people holding the booth:
- Boudewijn Rempt (the maintainer of the Krita project)
- Vera Lukman (the original author of our popup palette)
- Oscar Baechler (a cool Krita and Blender user)
- and me ;) (spreading the word about Krita training; more about this in a later post…)

Krita team at Siggraph

Together with Oscar and Vera, we’ve been doing live demos of Krita’s coolest and most impressive features.

Krita booth

We were right next to the Blender booth, which made a nice free-open-source solution area. It was a good occasion for me to meet more people from the blender team.

Krita and Blender booth

People were all really impressed, from those who discovered Krita for the first time to those who already knew about it or even already used it.
As we have already started working hard on integrating Krita into VFX workflows, with support for high-bit-depth painting on OpenEXR files, OpenColorIO color management, and even animation support, it was a good occasion to showcase these features and get appropriate feedback.
Many studios expressed their interest in integrating Krita into their production pipelines, replacing the less-than-ideal solutions they currently use…
And of course we met lots of digital painters: illustrators, concept artists, storyboarders and texture artists who want to use Krita now.
Reaching these kinds of users was really our goal, and I think it was a success.

There was also a Birds of a Feather event with all the open-source projects related to VFX that were present, which was full of great encounters.
I could even meet the guy who is looking into fixing the OCIO bug that I reported a few days before; that was awesome!

OpenSource Bird Of Feather

So hopefully we’ll see some great users coming to Krita in the next weeks/months. As usual, stay tuned ;)

*Almost all photos here by Oscar Baechler; many more photos here or here.

August 15, 2014

DXF export of FreeCAD Drawing pages

I just upgraded the code that exports Drawing pages in FreeCAD, and it now works much better, much more the way you would expect: mount your page fully in FreeCAD, then export it to DXF or DWG with the press of a button. Before, doing this would export the SVG code from the Drawing page,...

Time-lapse photography: stitching movies together on Linux

[Time-lapse clouds movie on youtube] A few weeks ago I wrote about building a simple Arduino-driven camera intervalometer to take repeat photos with my DSLR. I'd been entertained by watching the clouds build and gather and dissipate again while I stepped through all the false positives in my crittercam, and I wanted to try capturing them intentionally so I could make cloud movies.

Of course, you don't have to build an Arduino device. A search for timer remote control or intervalometer will find lots of good options around $20-30. I bought one so I'll have a nice LCD interface rather than having to program an Arduino every time I want to make movies.

Setting the image size

Okay, so you've set up your camera on a tripod with the intervalometer hooked to it. (Depending on how long your movie is, you may also want an external power supply for your camera.)

Now think about what size images you want. If you're targeting YouTube, you probably want to use one of YouTube's preferred settings, bitrates and resolutions, perhaps 1280x720 or 1920x1080. But you may have some other reason to shoot at higher resolution: perhaps you want to use some of the still images as well as making video.

For my first test, I shot at the full resolution of the camera. So I had a directory full of big ten-megapixel photos with filenames ranging from img_6624.jpg to img_6715.jpg. I copied these into a new directory, so I didn't overwrite the originals. You can use ImageMagick's mogrify to scale them all:

mogrify -scale 1280x720 *.jpg
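Alternatively, convert writes to a new file instead of modifying in place, so you can skip the copying step. A small loop does the same scaling (assuming the img_*.jpg names from above):

mkdir resized
for f in img_*.jpg; do
    convert "$f" -scale 1280x720 "resized/$f"
done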

I had an additional issue, though: rain was threatening and I didn't want to leave my camera at risk of getting wet while I went dinner shopping, so I moved the camera back under the patio roof. But with my fisheye lens, that meant I had a lot of extra house showing and I wanted to crop that off. I used GIMP on one image to determine the x, y, width and height for the crop rectangle I wanted. You can even crop to a different aspect ratio from your target, and then fill the extra space with black:

mogrify -crop 2720x1450+135+315 -scale 1280 -gravity center -background black -extent 1280x720 *.jpg

If you decide to rescale your images to an unusual size, make sure both dimensions are even, otherwise avconv will complain that they're not divisible by two.

Finally: Making your movie

I found lots of pages explaining how to stitch together time-lapse movies using mencoder, and a few using ffmpeg. Unfortunately, in Debian, both are deprecated. Mplayer has been removed entirely. The ffmpeg-vs-avconv issue is apparently a big political war, and I have no position on the matter, except that Debian has come down strongly on the side of avconv and I get tired of getting nagged at every time I run a program. So I needed to figure out how to use avconv.

I found some pages on avconv, but most of them didn't actually work. Here's what worked for me:

avconv -f image2 -r 15 -start_number 6624 -i 'img_%04d.jpg' -vcodec libx264 time-lapse.mp4

Adjust the start_number and filename appropriately for the files you have.
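As a sanity check on timing: img_6624.jpg through img_6715.jpg is 92 frames, so at -r 15 you get roughly six seconds of video (92 / 15 ≈ 6.1). Raise or lower -r to compress or stretch time.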

Avconv produces an mp4 file suitable for uploading to youtube. So here is my little test movie: Time Lapse Clouds.

August 13, 2014

ProjectPier

I should really write more about all the little open-source tools we use everyday here in our architecture studio. There are your usual CAD / BIM / 3D applications, of course, that you know a bit of if you follow this blog, but one of the tools that really helps us a lot in our...

August 12, 2014

Native OSX packages available for testing

We have made new packages of Synfig that run natively on OSX and don't require X11 to be installed. Help us test them!...

August 10, 2014

Synfig website goes international

We are happy to announce that our main website is going to provide its content translated into several languages....

Sphinx Moths

[White-lined sphinx moth on pale trumpets] We're having a huge bloom of a lovely flower called pale trumpets (Ipomopsis longiflora), and it turns out that sphinx moths just love them.

The white-lined sphinx moth (Hyles lineata) is a moth the size of a hummingbird, and it behaves like a hummingbird, too. It flies during the day, hovering from flower to flower to suck nectar, being far too heavy to land on flowers like butterflies do.

[Sphinx moth eye] I've seen them before, on hikes, but only gotten blurry shots with my pocket camera. But with the pale trumpets blooming, the sphinx moths come right at sunset and feed until near dark. That gives a good excuse to play with the DSLR, telephoto lens and flash ... and I still haven't gotten a really sharp photo, but I'm making progress.

Check out that huge eye! I guess you need good vision in order to make your living poking a long wiggly proboscis into long skinny flowers while laboriously hovering in midair.

Photos here: White-lined sphinx moths on pale trumpets.

August 09, 2014

A bit of FreeCAD BIM work

This afternoon I did some BIM work in FreeCAD for a house project I'm doing with Ryan. We're using this as a test platform for IFC roundtripping between Revit and FreeCAD. So far the results are mixed, lots of information gets lost along the way obviously, but on the other hand I'm secretly pretty happy...

August 08, 2014

Siggraph 2014

Meet us at the SIGGRAPH 2014 conference in Vancouver!

Sunday 10 August, Birds of a feather, Convention Center East, room 3

  • 3 PM: Blender Foundation and community meeting
    Ton Roosendaal talks about last year’s results and plans for next year.
    Feedback welcome!
  • 4.30 PM: Blender Artist Showcase and demos
    Everyone’s welcome to show 5-10 minutes of work you did with Blender.
    Well-known artists have already been invited, like Jonathan Williamson (BlenderCookie), Sean Kennedy (former R&H), Mike Pan (author of the BGE book), etc.

Tuesday 12 – Thursday 14 August: Tradeshow exhibit

  • Exhibit hall, booth #545
  • FREE TICKETS! Go to this URL and use promotion code BL122947
  • And meet with our neighbors: Krita Foundation.
  • Tuesday 9.30 AM – 6 PM, Wednesday 9.30 AM – 6 PM, Thursday 9.30 AM – 3.30 PM
  • Exhibit has been kindly sponsored by HP and BlenderCookie.

Daily meeting point after show hours, to get together informally for a drink or food:

  •  Rogue Kitchen & Wetbar in Gastown. 601 W. Cordova Street
    (Walk out of the convention center to the east, to the train station, 10 minutes.)

EAT – SLEEP – BLEND – REPEAT shirts

  • Available in three loud colors – the crew outfit for this year. We’ll sell them for CAD 20 at the BOF and at the booth.

show

August 07, 2014

Post-GUADEC


  • If you have an orientation sensor in your laptop that works under Windows 8, this tool might be of interest to you.
  • Mattias will use that code as a base to add Compass support to Geoclue (you're on the hook!)
  • I've made a hack to load games metadata using Grilo and Lua plugins (everything looks like a nail when you have a hammer ;)
  • I've replaced a Linux phone full of binary blobs by another Linux phone full of binary blobs
  • I believe David Herrmann missed out on asking for a VT, and getting something nice in return.
  • Cosimo will be writing some more animations for me! (and possibly for himself)
  • I now know more about core dumps and stack traces than I would want to, but far less than I probably will in the future.
  • Get Andrea to approve Timm Bäder's git account so he can move Corebird to GNOME. Don't forget to try out Charles, Timm!
  • My team won FreeFA, and it's not even why I'm smiling ;)
  • The cathedral has two towers!
Unfortunately for GUADEC guests, Bretzel Airlines opened its new (and first) shop on Friday, the last day of the BoFs.

(Lovely city, great job from Alexandre, Nathalie, Marc and all the volunteers, I'm sure I'll find excuses to come back :)

Check out Flock Day 2′s Virtual Attendance Guide on Fedora Magazine

flock-logo

I’ve posted today (Thursday’s) guide to Flock talks over on Fedora Magazine:

Guide to Attending Flock Virtually: Day 2

The guide to days 3 and 4 will follow, of course. Enjoy!

August 06, 2014

Guide to Attending Flock Virtually: Day 1

flock-logo

Flock, the Fedora Contributor Conference, starts tomorrow morning in Prague, the Czech Republic, and you can attend – no matter where in the world you are. (Although admittedly, depending on where you are, you may need to give up on some sleep if you intend to attend live ;-) )

Here’s a quick schedule of tomorrow’s talks for remote attendees:

Wednesday, 6 August 2014

6:45 AM UTC / 8:45 AM Prague / 2:45 AM Boston

Opening: Fedora Project Leader (Matthew Miller)

7:00 AM UTC / 9:00 AM Prague / 3:00 AM Boston

Keynote: Free And Open Source Software In Europe: Policies And Implementations (Gijs Hillenius)

8:00 AM UTC / 10:00 AM Prague / 4:00 AM Boston

Better Presentation of Fonts in Fedora (Pravin Satpute)

Contributing to Fedora SELinux Policy (Michael Scherer)

FedoraQA: You are important (Amita Sharma)

9:00 AM UTC / 11:00 AM Prague / 5:00 AM Boston

Fedora Magazine (Chris Anthony Roberts)

State of Copr Build Service (Miroslav Suchý)

Taskotron and Me (Tim Flink)

Where’s Wayland (Matthias Clasen)

12:00 PM UTC / 2:00 PM Prague / 8:00 AM Boston

Fedora Workstation – Goals, Philosophy, and Future (Christian F.K. Schaller)

Procrastination makes you better: Life of a remotee (Flavio Percoco)

Python 3 as Default (Bohuslav Kabrda)

Wayland Input Status (Hans de Goede)

1:00 PM UTC / 3:00 PM Prague / 9:00 AM Boston

Evolving the Fedora Updates Process (Luke Macken)

Fedora Future Devices (Wolnei Tomazelli Junior)

Outreach Program for Women: Lessons in Collaboration
(Marina Zhurakhinskaya)

Predictive Input Methods (Anish Patel)

2:00 PM UTC / 4:00 PM Prague / 10:00 AM Boston

Open Communication and Collaboration Tools for Humans (Sayan Chowdhury, Ratnadeep Debnath)

State of the Fedora Kernel (Josh Boyer)

The Curious Case of Fedora Freshmen (aka Issue #101) (Sarup Banskota)

UX 101: Practical Usability Methods Anyone Can Use (Karen Tang)

3:00 PM UTC / 5:00 PM Prague / 11:00 AM Boston

Fedora Ambassadors: State of the Union (Jiří Eischmann)

Hyperkitty: Past, Present, and Future (Aurélien Bompard)

Kernel Tuning (John H Dulaney)

Release Engineering and You (Dennis Gilmore)

4:00 PM UTC / 6:00 PM Prague / 12:00 PM Boston

Advocating Fedora.next (Christoph Wickert)

Documenting Software with Mallard (Jaromir Hradilek, Petr Kovar)

Fedora Badges and Badge Design (Marie Catherine Nordin, Chris Anthony Roberts)

How is the Fedora kernel different? (Levente Kurusa)

Help us cover these talks!

help-1

We’re trying to get as full coverage as possible of these talks on Fedora Magazine. You can help us out, even if you are a remote attendee. If any of the talks above are at a reasonable time in your timezone and you’d be willing to take notes and draft a blog post for Fedora Magazine, please sign up on our wiki page for assignments! You can also contact Ryan Lerch or Chris Roberts for more information about contributing.

August 05, 2014

(lxml) XPath matching against nodes with unprintable characters

Sometimes you want to clean up HTML by removing tags with unprintable characters in them (whitespace, non-breaking space, etc.). Sometimes encoding this back and forth results in weird characters when the HTML is rendered. Anyway, here is a snippet you might find useful:


def clean_empty_tags(node):
    """
    Finds all <p> tags whose only content is a non-breaking space
    (\xa0) and removes them. They come out broken and we won't
    need them anyway.
    """
    for empty in node.xpath("//p[.='\xa0']"):
        empty.getparent().remove(empty)

FreeCAD Spaces

I just finished giving a bit of polish to the Arch Space tool of FreeCAD. Until now it was barely a geometric entity representing a closed space. You can define it by building it from an existing solid shape, or from selected boundaries (walls, floors, whatever). Now I added a bit of visual goodness....

Privacy Policy

I got an envelope from my bank in the mail. The envelope was open and looked like the flap had never been sealed.

Inside was a copy of their privacy policy. Nothing else.

The policy didn't say whether their privacy policy included sealing the envelope when they send me things.

Clarity in GIMP (Local Contrast + Mid Tones)

I was thinking about other ways I fiddle with Luminosity Masks recently, and I thought it might be fun to talk about some other ways to use them when looking at your images.

My previous ramblings about Luminosity Masks:
The rest of my GIMP tutorials can be found here:

If you remember from my previous look at Luminosity Masks, the idea is to create masks that correspond to different luminous levels in your image (roughly the lightness of tones). Once you have these masks, you can make adjustments to your image and isolate their effect to particular tonal regions easily.



In my previous examples, I used them to apply different color toning to different tonal regions of the image, like this example masked to the DarkDark tones (yes, DarkDark):


Mouseover to change Hue to: 0 - 90 - 180 - 270

What’s neat about that application is when you combine it with some Film Emulation presets. I’ll leave that as an exercise for you to play with.

In this particular post I want to do something different.
I want to make some eyes bleed.


“My eyes! The goggles do nothing!” Radioactive Man (Rainier Wolfcastle)

In the same realm of bad tone-mapping for HDR images (see the first two images here) there are those who sharpen to ridiculous proportions as well as abuse local contrast enhancement with Unsharp Mask.

It was this last one that I was fiddling with recently that got me thinking.

Local Contrast Enhancement with Unsharp Mask

If you haven’t heard of this before, let me explain briefly. There is a sharpening method you can use in GIMP (and other software) that utilizes a slightly blurred version of your image to enhance edge contrasts. This leads to a visual perception of increased sharpness or contrast on those edges.

It’s easy to do this manually to see sort of how it works:
  1. Open an image.
  2. Duplicate the base layer.
  3. Blur the top layer a bit (Gaussian blur).
  4. Set the top layer blend mode to “Grain Extract”.
  5. Create a New Layer from visible.
  6. Set the new layer blend mode to “Overlay”, and hide the blurred layer.
Of course, it’s quite a bit easier to just use Unsharp Mask directly (but now you know how to create high-pass layers of your image - we’re learning things already!).

So let’s have a look at an image from a nice Fall day at a farm:


I can apply Unsharp Mask through the menu:

Filters → Enhance → Unsharp Mask...

Below the preview window there are three sliders to adjust the effect: Radius, Amount, and Threshold.

Radius changes how big a radius to use when blurring the image to create the mask.
Amount changes how strong the effect is.
Threshold is a setting for the minimum pixel value difference to define an edge. You can ignore it for now.

If we apply the filter with its default values (Radius: 5.0, Amount: 0.50), we get a nice little sharpening effect on the result:


Unsharp Mask with default values
(mouseover to compare original)

It gives a nice little “pop” to the image (a bit much for my taste). It also mostly avoids sharpening noise, which is nice as well.

So far this is fairly simple stuff, nothing dramatic. The problem is, once many people learn about this they tend to go a bit overboard with it. For instance, let’s crank up the Amount to 3.0:


Don’t do this. Just don’t.

Yikes. But don’t worry. It’s going to get worse.

High Radius, Low Amount

So I’m finally getting to my point. There is a neat method of increasing local contrast in an image by pushing the Unsharp Mask values more than you might normally. If you use a high radius, and default amount you get:


Unsharp Mask, Radius: 80 Amount: 0.5
(mouseover to compare original)

It still looks like clown vomit. But we can keep the nice local contrast enhancement and mitigate the offensiveness by turning the Amount down even further. Here it is with the Radius still at 80, but the Amount turned down to 0.10:


Unsharp Mask, Radius: 80 Amount: 0.10
(mouseover to compare original)

Even with the Amount at 0.10 it might be a tad much for my taste. The point is that you can gain a nice little boost to local contrast with this method.
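As an aside, you can experiment with this from the command line too. ImageMagick’s -unsharp operator takes radius x sigma + amount + threshold arguments, so something in the same spirit as the settings above would be the line below. GIMP and ImageMagick don’t parametrize their unsharp masks identically, so treat it as an approximation (filenames are placeholders):

# high sigma (80), low amount (0.10), zero threshold
convert farm.jpg -unsharp 0x80+0.10+0 farm-usm.jpg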

Neat but hardly earth-shattering. This has been covered countless times in various places already (and if this is the first time you’re hearing about it, then we’re learning two new things today!).

We can see that we now have a neat method for bumping up the local contrast of an image slightly to give it a little extra visual pop. What we can think about now is, how can I apply that to my images in other interesting ways?

Perhaps we could find some way to apply these effects to particular areas of an image? Say, based on something like luminosity?

Clarity in Lightroom

From what I can tell (and find online), it appears that this is basically what the “Clarity” adjustment in Adobe Lightroom does. It’s a Local Contrast Enhancement masked in some way to middle tones in the image.

Let’s have a quick look and see if that theory holds any weight. Here is the image above, brought into Lightroom with the “Clarity” slider pushed to 100:


From Lightroom 4, Clarity: 100

This seems visually similar to the path we started on already, but let’s see if we can get something better with what we know so far.

Clarity in GIMP

What I want to do is to increase the local contrast of my image, and confine those adjustments to the mid-tone areas of the image. We have seen a method for increasing local contrast with Unsharp Mask, and I had previously written about creating Luminosity Masks. Let’s smash them together and see what we get!

If you haven’t already, go get the Script-Fu to automate the creation of these masks (I tend to use Saul’s version as it’s faster than mine) from the GIMP Registry.

Open an image to get started (I’ll be using the same image from above).

Create Your Luminosity Masks

You’ll need to generate a set of luminosity masks using your base image as a reference. With your image open, you can find Saul’s Luminosity Mask script here:

Filters → Generic → Luminosity Masks (saulgoode)

It should only take a moment to run, and you shouldn’t notice anything different when it’s finished. If you do check your Channels dialog, you should see all nine of the masks there (L, LL, LLL, M, MM, MMM, D, DD, DDD).


Luminosity Masks, by row: Darks, Mids, Lights

Enhance the Local Contrast

Now it’s time to leave subtlety behind us. We are going to be masking these results anyway, so we can get a little crazy with the application in this step. You can use the steps I mentioned above with Unsharp Mask to increase the local contrast, or you can use G'MIC to do it instead.

The reason you may want to use G'MIC instead is that increasing the local contrast without causing a slight color shift would require applying the Unsharp Mask to a particular channel after decomposing the image. G'MIC can apply the contrast enhancement to the luminance alone in a single step.

Let’s try it with the regular Unsharp Mask in GIMP. I’m going to use similar settings to what we used above, but we’ll turn the amount up even more.

With your image open in GIMP, duplicate the base layer. We’ll be applying the effect and mask on this duplicate over your base.

Now we can enhance the local contrast using Unsharp Mask:
Filters → Enhance → Unsharp Mask...

This time around, we’ll try using Radius: 80 and Amount: 1.5.


Unsharp Mask, Radius: 80, Amount: 1.5. My eyes!

Yes, it’s horrid, but we’re going to be masking it to the mid-range tones remember. Now I can apply a layer mask to this layer by Right-clicking on the layer, and selecting “Add Layer Mask...”.
Right-click → Add Layer Mask...

In the “Add a Mask to the Layer” dialog that pops up, I’ll choose to initialize the layer to a Channel, and choose the “M” mid-tone mask:


Once the ridiculous tones are confined to the mid-tones, things look much better:


Unsharp Mask, Radius: 80, Amount: 1.5. Masked to mid-tones.
(mouseover to compare original)

You can see that there is now a nice boost to the local contrast that is confined to the mid-tones in the image. This is still a bit much for me personally, but I’m purposefully overdoing it in an attempt to illustrate the process. Really you’d want to either tone down the amount on the USM (UnSharp Mask), or adjust the opacity of this layer to taste.

So the general formula we are seeing is to make an adjustment (local contrast enhance in this case), and to use the luminosity masks to give us control over where the effect is applied.

For instance, we can try using other types of contrast/detail enhancement in place of the USM step.

I had previously written about detail enhancement through “Freaky Details”. This is what we get when replacing the USM local contrast enhancement with it. Using G'MIC, I can find “Freaky Details” at:
Filters → G'MIC
Details → Freaky details

I used an Amplitude of 4, Scale 22, and Iterations 1. I applied this to the Luminance Channels:


Freaky Details, Amplitude 4, Scale 22, Iterations 1, mid-tone mask
(mouseover to compare original)

Trying other G'MIC detail enhancements such as “Local Normalization” can yield slightly different results:


G'MIC Local Normalization at default values.
(mouseover to compare original)

Yes, there’s some halo-ing, but remember that I’m purposefully allowing these results to get ugly to highlight what they’re doing.

G'MIC Local Variance Normalization gives a neat result with fine details as well:


G'MIC Local Variance Normalization (default settings)
(mouseover to compare original)

In Conclusion

This approach works because our eyes will be more sensitive to slight contrast changes as they occur in the mid-tones of an image as opposed to the upper and lower tones. More importantly, it’s a nice introduction to viewing your images as more than a single layer.

Understanding these concepts and viewing your images as the sum of multiple parts allows you much greater flexibility in how you approach your retouching.

I fully encourage you to give it a shot and see what other strange combinations you might be able to discover! For instance, try using the Film Emulation presets in combination with different luminosity masks to find new and interesting combinations of color grading! Try setting the masked layers to different blending modes! You may surprise yourself with what you find.

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, or make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t it’s not going to affect me writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.



GUADEC

This blog post is mostly about showing some photos I took, but I may as well give a brief summary from my point of view.

Had a good time in Strasbourg this week. Hacked a bit on Adwaita with Lapo, who has fearlessly been sanding the rough parts after the major refactoring. Jim Hall uncovered the details of his recent usability testing of GNOME, so while we video chatted before, it was nice to meet him in person. Watched Christian uncover his bold plans to focus on Builder full time which is both awesome and sad. Watched Jasper come out with the truth about his love for Windows and Federico’s secret to getting around fast. Uncovered how Benjamin is not getting more aerodynamic (ie fat) like me. Enjoyed a lot of great food (surprisingly had crêpes only once).

In a classic move I ran out of time in my lightning talk on multirotors, so I’ll have to cover the topic of free software flight controllers in a future blog post. I managed to miss a good number of talks I intended to see, which is quite a feat, considering the average price of beer in the old town. Had a good time hanging out with folks, which is so rare for me.

During the BOFs on Wednesday I sat down with the Boxes folks, discussing some new designs. Sadly I only managed a few brief moments talking to Bastian about our Blender workflows. Unfortunately the Brno folks from whom I stole a spot in the car had to get back on Thursday, so I missed the Thursday and Friday BOFs as well.

Despite the weather I enjoyed the second last GUADEC. Thanks for making it awesome again. See you in the next last one in Gothenburg.

August 04, 2014

ReduceContour tool test

Hi all

Recently I've been doing some ground work, polishing the FillHoles tool and other internal tools. Nothing fun to show, but it definitely improves robustness, and in between I've added small useful new features here and there, like the ones I've been showing in this quick-and-dirty video series.

Like the proportional inflate, I've recently added a threshold to the Separate Disconnected functionality for deleting smaller parts. This is very common when we import noisy meshes with lots of floating parts.
One of the most important features is the possibility to bridge and connect separated meshes manually, like I show here: http://farsthary.wordpress.com/2014/07/15/fillholes-revamp-test-1/

So stay tuned, because the real fun may start soon for me :P
Cheers
Raul


Notes on Fedora on an Android device

A bit more than a year ago, I ordered a Geeksphone Peak, one of the first widely available Firefox OS phones to explore this new OS.

These notes are probably not very useful on their own, but they might give a few hints to stuck Android developers.

The hardware

The device has a Qualcomm Snapdragon S4 MSM8225Q SoC, which uses the Adreno 203 and a 540x960 Protocol A (4 touchpoints) touchscreen.

The Adreno 203 (Note: might have been 205) is not supported by Freedreno, and is unlikely to be. It's already a couple of generations behind the latest models, and getting a display working on this device would also require (re-)writing a working panel driver.

At least the CPU is an ARMv7 with a hardware floating-point unit (unlike the incompatible ARMv6 used by the Raspberry Pi), which means that much more software is available for it.

Getting a shell

Start by installing the android-tools package, and copy the udev rules file to the correct location (the location is mentioned in the rules file itself).

Then, on the phone, turn on the developer mode. Plug it in, and run "adb devices", you should see something like:

$ adb devices
List of devices attached
22ae7088f488 device

Now run "adb shell" and have a browse around. You'll realise that the kernel, drivers, init system, baseband stack, and much more, is plain Android. That's a good thing, as I could then order Embedded Android, and dive in further.

If you're feeling a bit restricted by the few command-line applications available, download an all-in-one precompiled busybox, and push it to the device with "adb push".
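
For example (the binary name and target path here are just an illustration; /data/local/tmp is normally writable):

$ adb push busybox-armv7l /data/local/tmp/busybox
$ adb shell chmod 755 /data/local/tmp/busybox
$ adb shell /data/local/tmp/busybox ls /data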

You can also use aafm, a simple GUI file manager, to browse around.

Getting a Fedora chroot

After formatting a MicroSD card in ext4 and unpacking a Fedora system image in it, I popped it inside the phone. You won't be able to use this very fragile script to launch your chroot just yet though, as we lack a number of kernel features that are required to run Fedora. You'll also note that this is an old version of Fedora. There are probably newer versions available around, but I couldn't pinpoint them while writing this article.
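
For reference, the core of such a launcher looks roughly like this (a sketch only, not the linked script; the device node and mount point are my assumptions, and it needs a busybox that provides chroot):

#!/system/bin/sh
# Mount the Fedora SD card and bind the pseudo-filesystems into it.
FEDORA=/data/fedora
busybox mount -t ext4 /dev/block/mmcblk1 $FEDORA
for d in dev proc sys; do
    busybox mount -o bind /$d $FEDORA/$d
done
busybox chroot $FEDORA /bin/bash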

Running Fedora, even in a chroot, on such a system will allow us to compile natively (I wouldn't try to build WebKit on it, though) and run against a glibc setup rather than Android's bionic libc.

Let's recompile the kernel to be able to use our new chroot.

Avoiding the brick

Before recompiling the kernel and bricking our device, we'll probably want to make sure that we have the ability to restore the original software. Nothing worse than a bricked device, right?

First, we'll unlock the bootloader, so we can modify the kernel, and eventually the bootloader. I took the instructions from this page, but ignored the bits about flashing the device, as we'll be doing that a different way.

You can grab the restore image from my Fedora people page, as it seems to be the norm for Android(-ish) device makers to deny any involvement in devices that are more than a couple of months old. No restore software, no product page.

The recovery should be as easy as

$ adb reboot-bootloader
$ fastboot flash boot boot.img
$ fastboot flash system system.img
$ fastboot flash userdata userdata.img
$ fastboot reboot

This technique on the Geeksphone forum might also still work.

Recompiling the kernel

The kernel shipped on this device is a modified Ice-Cream Sandwich "Strawberry" version, as spotted using the GPU driver code.

We grabbed the source code from Geeksphone's github tree, installed the ARM cross-compiler (in the "gcc-arm-linux-gnu" package on Fedora) and got compiling:

$ export ARCH=arm
$ export CROSS_COMPILE=/usr/bin/arm-linux-gnu-
$ make C8680_defconfig
# Make sure that CONFIG_DEVTMPFS and CONFIG_EXT4_FS_SECURITY get enabled in the .config
$ make

We now have a zImage of the kernel. Launching "fastboot boot /path/to/zImage" didn't seem to work (it would have used the kernel only for the next boot), so we'll need to replace the kernel on the device.

It's a bit painful to have to do this, but we have the original boot image to restore in case our version doesn't work. The boot partition is on partition 8 of the MMC device. You'll need to install my package of the "android-BootTools" utilities to manipulate the boot image.


$ adb shell 'cat /dev/block/mmcblk0p8 > /mnt/sdcard/p8.img'
$ adb pull /mnt/sdcard/p8.img
$ bootunpack p8.img
$ mkbootimg --kernel /path/to/kernel-source/out/arch/arm/boot/zImage --ramdisk p8.img-ramdisk.cpio.gz --base 0x200000 --cmdline 'androidboot.hardware=qcom loglevel=1' --pagesize 4096 -o boot.img
$ adb reboot-bootloader
$ fastboot flash boot boot.img

If you don't want the graphical interface to run, you can modify the Android init to avoid that.
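
For quick experiments there is also a non-persistent route, assuming the stock Android service helpers are present:

$ adb shell stop    # stops the Android UI stack (zygote and friends)
$ adb shell start   # brings it back up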

Getting a Fedora chroot, part 2

Run the script. It works. Hopefully.

If you manage to get this far, you'll have a running Android kernel and user-space, and will be able to use the Fedora chroot to compile software natively and poke at the hardware.

I would expect that, given a kernel source tree made available by the vendor, you could follow those instructions to transform your old Android phone into an ARM test "machine".

Going further, native Fedora boot

Not for the faint of heart!

The process is similar, but we'll need to replace the initrd in the boot image as well. In your chroot, install Rob Clark's hacked-up adb daemon with glibc support (packaged here) so that adb commands keep on working once we natively boot Fedora.

Modify the /etc/fstab so that the root partition is the SD card:

/dev/mmcblk1 /                       ext4    defaults        1 1

We'll need to create an initrd that's small enough to fit on the boot partition though:

$ dracut -o "dm dmraid dmsquash-live lvm mdraid multipath crypt dasd zfcp i18n" initramfs.img

Then run "mkbootimg" as above, but with the new ramdisk instead of the one unpacked from the original boot image.

Flash, and reboot.

Nice-to-haves

In the future, one would hope that packages such as adbd and the android-BootTools could get into Fedora, but I'm not too hopeful as Fedora, as a project, seems uninterested in running on top of Android hardware.

Conclusion

Why am I posting this now? Firstly, because it allows me to organise the notes I took nearly a year ago. Secondly, I don't have access to the hardware anymore, as it found a new home with Aleksander Morgado at GUADEC.

Aleksander hopes to use this device (Qualcomm-based, remember?) to add native telephony support to the QMI stack. This would in turn get us a ModemManager Telephony API, and the possibility of adding support for more hardware, such as through RIL and libhybris (similar to the oFono RIL plugin used in the Jolla phone).

Common docker pitfalls

I’ve run into a few problems with Docker that I’d like to document, along with how to solve them.

Overwriting an entrypoint

If you’ve configured a script as an entrypoint and it fails, you can run the docker image with a shell in order to fiddle with the script (instead of continuously rebuilding the image):

#--entrypoint (provides a new entry point which is the nominated shell)
docker run -i --entrypoint='/bin/bash'  -t f5d4a4d6a8eb

Possible errors you face otherwise are these:

/bin/bash: /bin/bash: cannot execute binary file

Weird errors when building the image

I’ve run into this a few times, with errors like:

Error in PREIN scriptlet in rpm package libvirt-daemon-0.9.11.4-3.fc17.x86_64
or
useradd: failure while writing changes to /etc/passwd

If you’ve set SELinux to enforcing, you may want to temporarily disable SELinux for just building the image. Don’t disable SELinux permanently.
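
For example (assuming sudo access; switch back on as soon as the build finishes):

$ sudo setenforce 0            # Permissive mode, just for the build
$ docker build -t myimage .    # "myimage" is a placeholder name
$ sudo setenforce 1            # back to Enforcing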

Old (base) image

Check if your base image has changed (e.g. docker images) and pull it again (docker pull <image>)
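
For example, with a Fedora base image (substitute whatever your Dockerfile’s FROM line names):

$ docker images fedora   # check the age and ID of the local base image
$ docker pull fedora     # fetch the current version from the registry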



August 03, 2014

RawSpeed moves to github

As you may have noticed, there hasn’t been much activity lately, since everyone on the Rawstudio team has had various things getting in the way of doing more work on Rawstudio.

I have however now and again found time to work on RawSpeed, and will from now on host all changes on github. Github makes a lot of work much easier, and allows direct pull requests to be made.

RawSpeed Version 2; New Cameras & Features

The following new features have been included and can be tested on the development branch. Note that this is RawSpeed and not Rawstudio.

  • Support for Sigma Foveon cameras.
  • Support for Fuji cameras.
  • Support for old Minolta, Panasonic, Sony and other cameras (contributed by Pedro Côrte-Real).
  • Arbitrary CFA definition sizes.
  • Use pugixml for xml parsing to avoid depending on libxml.

When “version 2” is stabilized a bit, a formal release will be made, after which the API will be locked.


August 02, 2014

Krita: illustrated beginners guide in Russian

Some time ago our user Tyson Tan (creator of Krita's mascot Kiki) published his beginner's guide for Krita. Now this tutorial is also available in Russian!

If you happen to know Russian, please follow the link :)



Fanart by Anastasia Majzhegisheva – 13

Morevna Universe.
Watercolor and ink artwork by Anastasia Majzhegisheva.

 

August 01, 2014

Predicting planetary visibility with PyEphem

Part II: Predicting Conjunctions

After I'd written a basic script to calculate when planets will be visible, the next step was predicting conjunctions, times when two or more planets are close together in the sky.

Finding separation between two objects is easy in PyEphem: it's just one line once you've set up your objects, observer and date.

import ephem

p1 = ephem.Mars()
p2 = ephem.Jupiter()
observer = ephem.Observer()  # and then set it to your city, etc.
observer.date = ephem.date('2014/8/1')
p1.compute(observer)
p2.compute(observer)

ephem.separation(p1, p2)

So all I have to do is loop over all the visible planets and see when the separation is less than some set minimum, like 4 degrees, right?

Well, not really. That tells me if there's a conjunction between a particular pair of planets, like Mars and Jupiter. But the really interesting events are when you have three or more objects close together in the sky. And events like that often span several days. If there's a conjunction of Mars, Venus, and the moon, I don't want to print something awful like

Friday:
  Conjunction between Mars and Venus, separation 2.7 degrees.
  Conjunction between the moon and Mars, separation 3.8 degrees.
Saturday:
  Conjunction between Mars and Venus, separation 2.2 degrees.
  Conjunction between Venus and the moon, separation 3.9 degrees.
  Conjunction between the moon and Mars, separation 3.2 degrees.
Sunday:
  Conjunction between Venus and the moon, separation 4.0 degrees.
  Conjunction between the moon and Mars, separation 2.5 degrees.

... and so on, for each day. I'd prefer something like:

Conjunction between Mars, Venus and the moon lasts from Friday through Sunday.
  Mars and Venus are closest on Saturday (2.2 degrees).
  The moon and Mars are closest on Sunday (2.5 degrees).

At first I tried just keeping a list of planets involved in the conjunction. So if I see Mars and Jupiter close together, I'd make a list [mars, jupiter], and then if I see Venus and Mars on the same date, I search through all the current conjunction lists and see if either Venus or Mars is already in a list, and if so, add the other one. But that got out of hand quickly. What if my conjunction list looks like [ [mars, venus], [jupiter, saturn] ] and then I see there's also a conjunction between Mars and Jupiter? Oops -- how do you merge those two lists together?

The solution to taking all these pairs and turning them into a list of groups that are all connected actually lies in graph theory: each conjunction pair, like [mars, venus], is an edge, and the trick is to find all the connected edges. But turning my list of conjunction pairs into a graph so I could use a pre-made graph theory algorithm looked like it was going to be more code -- and a lot harder to read and less maintainable -- than making a bunch of custom Python classes.

I eventually ended up with three classes: ConjunctionPair, for a single conjunction observed between two bodies on a single date; Conjunction, a collection of ConjunctionPairs covering as many bodies and dates as needed; and ConjunctionList, the list of all Conjunctions currently active. That let me write methods to handle merging multiple conjunction events together if they turned out to be connected, as well as a method to summarize the event in a nice, readable way.
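
Here is a stripped-down sketch of how those classes fit together (my own simplification, not the actual code; see the conjunctions.py linked below for the real thing):

class ConjunctionPair:
    """Two bodies seen within the separation threshold on one date."""
    def __init__(self, b1, b2, date, sep):
        self.bodies = {b1, b2}
        self.date = date
        self.sep = sep

class Conjunction:
    """A connected group of pairs, spanning several bodies and dates."""
    def __init__(self):
        self.bodies = set()
        self.pairs = []

    def add(self, pair):
        self.pairs.append(pair)
        self.bodies |= pair.bodies

    def merge(self, other):
        # Absorb a group that turned out to share a body with this one.
        for p in other.pairs:
            self.add(p)

class ConjunctionList:
    """All currently active conjunctions."""
    def __init__(self):
        self.conjunctions = []

    def add_pair(self, pair):
        # Find every existing group sharing a body with the new pair,
        # add the pair to the first one and merge the rest into it.
        touching = [c for c in self.conjunctions if c.bodies & pair.bodies]
        if not touching:
            group = Conjunction()
            group.add(pair)
            self.conjunctions.append(group)
            return
        touching[0].add(pair)
        for other in touching[1:]:
            touching[0].merge(other)
            self.conjunctions.remove(other)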

So predicting conjunctions ended up being a lot more code than I expected -- but only because of the problem of presenting it neatly to the user. As always, user interface represents the hardest part of coding.

The working script is on github at conjunctions.py.

July 31, 2014

Fanart by Anastasia Majzhegisheva – 12


Morevna. Artwork by Anastasia Majzhegisheva

We’ve got one more fanart submission from Anastasia Majzhegisheva. This time Anastasia is also uncovering the details of her creative process by providing WIP images and screenshots. Enjoy!

Free From XP (LinuxPro Magazine) GIMP Article

So, the last time I talked about LinuxPro Magazine was about having a simple give-away of the promotional copies I had received of their GIMP Handbook issue. At that time, I joked with the editor that surely it couldn’t be complete without anything written by me. :)

Then he called me out on my joke and asked me if I wanted to write an article for them.

So, I’ve got an article in LinuxPro Magazine Special Edition #18: Free From XP!


The article is aimed at new users switching over from XP to Linux, so the stuff I cover is relatively basic, like:
  • The Interface
  • Cropping
  • Rotating
  • Correcting Levels
  • Brightness/Contrast
  • Color Levels
  • Curves
  • Resizing
  • Sharpening
  • Saving & Exporting
Still, if you know someone who could use a hand switching, it certainly can’t hurt to pick a copy up! (You can get print and digital copies from their website: LinuxPro Magazine).

Here’s a quick preview of the first page of the article:


My hair doesn’t look anywhere near as fabulous as this image would have you believe...

Also, if anyone sees a copy on a newsstand, it would be awesome if you could send me a quick snap of it.

writing a product vision for Metapolator

A week ago I kicked off my involvement with the Metapolator project as I always do: with a product vision session. Metapolator is an open project and it was the first time I did the session online, so you have the chance to see the session recording (warning: 2½ hours long), which is a rare opportunity to witness such a highly strategic meeting; normally this is top‐secret stuff.

boom boom

For those not familiar with a product vision, it is a statement that we define as ‘the heartbeat of your product, it is what you are making, reduced down to its core essence.’ A clear vision helps a project to focus, to fight off distractions and to take tough design decisions.

To get a vision on the table I moderate a session with the people who drive the product development, who I simply ask ‘what is it we are making, who is it for, and where is the value?’ The session lasts until I am satisfied with the answers. I then write up the vision statement in a few short paragraphs and fine-tune it with the session participants.

To cut to the chase, here is the product vision statement for Metapolator:

‘Metapolator is an open web tool for making many fonts. It supports working in a font design space, instead of one glyph, one face, at a time.
‘With Metapolator, “pro” font designers are able to create and edit fonts and font families much faster, with inherent consistency. They gain unique exploration possibilities and the tools to quickly adapt typefaces to different media and domains of use.
‘With Metapolator, typographers gain the possibility to change existing fonts—or even create new ones—to their needs.
‘Metapolator is extendible through plugins and custom specimens. It contains all the tools and fine control that designers need to finish a font.’

mass deconstruction

I think that makes it already quite clear what Metapolator is. However, to demonstrate what goes into writing a product vision, and to serve as a more fleshed out vision briefing, I will now discuss it sentence by sentence.

‘Metapolator is an open web tool for making many fonts.’
  • There is no standard template for writing a product vision; the structure it needs is as varied as the projects I work with. But then again it has always worked for me to lead off with a statement of identity; to start answering the question ‘what is it we are making?’ And here we have it.
  • open or libre? This was discussed during the session. At the end Simon Egli, Metapolator founder and driving force, wanted to express that we aim beyond just libre (i.e. open source code) and that ‘open’ also applies to the vibe of the tool on the user side.
  • web‑based: this is not just a statement of the technology used, of the fact that it runs in the browser. It is also a solid commitment that it runs on all desktops—mac, win and linux. And it implies that starting to use Metapolator is as easy as clicking/typing the right URL; nothing more required.
  • tool or application? The former fits better with the fact that font design and typography are master crafts (I can just see the tool in the hand of the master).
  • making or designing fonts? I have learned in the last couple of weeks that there is a font design phase where a designer concentrates on shaping eight strategic characters (for latin fonts). This is followed by a production phase where the whole character set is fleshed out, the spacing between all character pairs set, then different weights (e.g. thin and bold) are derived and maybe also narrow and extended variants. This phase is very laborious and often outsourced. ‘Making’ fonts captures both design and production phases.
  • many fonts: this is the heart of the matter. You can see from the previous point that making fonts is up to now a piecemeal activity. Metapolator is going to change that. It is dedicated to either making many different fonts in a row, or a large font family, even a collection of related families. The implication is that in the user interaction of Metapolator the focus is on making many fonts and the user needs for making many fonts take precedence in all design decisions.
‘It supports working in a font design space, instead of one glyph, one face, at a time.’
  • The first sentence said that Metapolator is going to change the world—by introducing a tool for making many fonts, something not seen before; this second one tells us how.
  • supports is not a word one uses lightly in a vision. ‘Supports XYZ’ does not mean it is just technically possible to do XYZ; it means here that this is going to be a world‐class product to do XYZ, which can only be realised with world‐class user interaction to do XYZ.
  • design space is one of these wonderful things that come up in a product vision session. Super‐user Wei Huang coined the phrase when describing working with the current version of Metapolator. It captures very nicely the working in a continuum that Metapolator supports, as contrasted with the traditional piecemeal approach, represented by ‘one glyph, one face, at a time.’ What is great for a vision is that ‘design space’ captures the vibe that working with metapolator should have, but that it is not explicit on the realisation of it. This means there is room for innovation, through technological R&D and interaction design.
‘With Metapolator, “pro” font designers are able to create and edit fonts and font families much faster, with inherent consistency.’
  • With “pro” font designers we encounter the first user group, starting to answer ‘who is it for?’ “Pro” is in quotes because it is not the earning‑a‐living part that interests us, it is the fact that these people mastered a craft.
  • create and edit balances the two activities; it is not all about creating from scratch.
  • fonts and font families balances making very different fonts with making families; it is not all about the latter.
  • much faster is the first value statement, starting to answer ‘where is the value?’ Metapolator stands for an impressive speed increase in font design and production, by abolishing the piecemeal approach.
  • inherent consistency is the second value statement. Because the work is performed by users in the font design space, where everything is connected and continuous, the conventional user overhead of keeping everything consistent disappears.
‘They gain unique exploration possibilities and the tools to quickly adapt typefaces to different media and domains of use.’
  • exploration possibilities is part feature, part value statement, part field of use and part vibe. All these four are completely different things (e.g. there is inherently zero value in a feature), captured in two words.
  • quickly adapt is a continuation of the ‘much faster’ value statement above, highlighting complementary fields of use for it.
‘With Metapolator, typographers gain the possibility to change existing fonts—or even create new ones—to their needs.’
  • And with typographers we encounter the second user group. These are people who use fonts, with a whole set of typographical skills and expertise implied.
  • possibility to change is the value statement for this user group. This is a huge deal. Normally typographers have neither the skills, nor the time, to modify a font. Metapolator will open up this world to them, with that fast speed and inherent consistency that was mentioned before.
  • create new goes one step further than the previous point. Here we have now a commitment to enable more ambitious typographers (that is what ‘even’ stands for) to create new fonts.
  • to their needs is a context we should be aware of. These typographers will be designing something, anything with text, and that is their main goal. Changing or creating a font is for them a worthwhile way to get it done. But it is only part of their job, not the job. Note that the needs of typographers includes applying some very heavy graphical treatments to fonts.
‘Metapolator is extendible through plugins and custom specimens.’
  • extendible through plugins is one realisation of the ‘open’ aspect mentioned in the first sentence. This makes Metapolator a platform and its extendability will have to be taken into account in every step of its design.
  • custom specimens is slightly borderline to mention in a vision; you could say it is just a feature. I included it because it programs the project to properly support working with type specimens.
‘It contains all the tools and fine control that designers need to finish a font.’
  • all the tools: this was the result of me probing during the vision session whether Metapolator is thought to be part of a tool chain, or independent. This means that it must be designed to work stand‑alone.
  • fine control: again the result of probing, this time whether Metapolator includes the finesse to take care of those important details, on a glyph level. Yes, it all needs to be there.
  • that designers need makes it clear by whose standards the tools and control needs to be made: that of the two user groups.

this space has intentionally been left blank

Just as important as what it says in a product vision is what it doesn’t say. What it does not say Metapolator is, Metapolator is explicitly not. Not a vector drawing application, not a type layout program, not a system font manager, not a tablet or smartphone app.

The list goes on and on, and I am sure some users will come up with highly creative fields of use. That is up to them, maybe it works out or they are able to cover their needs with a plugin they write, or have written for them. For the Metapolator team that is charming to hear, but definitely out of scope.

User groups that are not mentioned, i.e. everybody who is not a “pro” font designer or a typographer, are welcome to check out Metapolator, it is free software. If their needs overlap partly with that of the defined user groups, then Metapolator will work out partly for them. But the needs of all these users are of no concern to the Metapolator team.

If that sounds harsh, then remember what a product vision is for: it helps a project to focus, to fight off distractions and to take tough design decisions. That part starts now.

July 30, 2014

A logo & icon for DevAssistant

[DevAssistant logo]

This is a simple story about a logo design process for an open source project in case it might be informative or entertaining to you. :)

A little over a month ago, Tomas Radej contacted me to request a logo for DevAssistant. DevAssistant is a UI aimed at making developers’ lives easier by automating a lot of the menial tasks required to start up a software project – setting up the environment, starting services, installing dependencies, etc. His team was gearing up for a new release and really wanted a logo to help publicize the release. They came to me for help as colleagues familiar with some of the logo work I’ve done.

[existing artwork from the DevAssistant website]

When I first received Tomas’ request, I reviewed DevAssistant’s website and had some questions:

  • Are there any parent or sibling projects to this one that have logos we’d need this to match up with?
  • Is an icon needed that coordinates with the logo as well?
  • There is existing artwork on the website (shown above) – should the logo coordinate with that? Is that design something you’re committed to?
  • Are there any competing projects / products (even on other platforms) that do something similar? (Just as a ‘competitive’ evaluation of their branding.)

He had some answers :) :

  • There aren’t currently any parent or sibling projects with logos, so from that perspective we had a blank slate.
  • They definitely needed an icon, preferably in all the required sizes for the desktop GUI.
  • Tomas impressively had made the pre-existing artwork himself, but considered it a placeholder.
  • The related projects/products he suggested are: Software Collections, JBoss Forge, and Enide.

From the competition I saw a lot of clean lines, sharp angles, blues and greens, some bold splashes here and there. Software Collections has a logotype without a mark; JBoss Forge has a mark with an anvil (a construction tool of sorts); Enide doesn’t have a logo per se but is part of Node.js which has a very stylized logotype where letters are made out of hexagons.

I liked how Tomas’ placeholder artwork used shades of blue, and thought about how the triangles could be shaped such that they make up the ‘D’ of ‘Dev’ and the ‘A’ of ‘Assistant’ (similarly to how ‘node’ is spelled out with hexagons for each letter in the node.js logotype.) I played around a little bit with the notion of ‘d’ and ‘a’ triangles and sketched some ideas out:

[DevAssistant logo sketches]

I grabbed an icon sheet template from the GNOME design icon repo and drew this out in Inkscape. This, actually, was pretty foolish of me since I hadn’t sent Tomas my sketches at this point and I didn’t even have a solid concept in terms of the mark’s meaning beyond being stylized ‘d’ and ‘a’ – it could have been a waste of time – but thankfully his team liked the design so it didn’t end up being a waste at all. :)

[DevAssistant logo idea]

Then I thought a little about meaning here. (Maybe this is backwards. Sometimes I start with meaning / concept, sometimes I start with a visual and try to build meaning into it. I did the latter this time; sue me!) I was thinking about how JBoss Forge used a construction tool in its logo (logo copyright JBoss & Red Hat):

[JBoss Forge logo]

And I thought about how Glade uses a carpenter’s square (another construction tool!) in its icon… hmmm… carpenter’s squares are essentially triangles… ! :) (Glade logo from the GNOME icon theme, LGPLv3+):

[Glade logo]

I could think of a few other developer-centric tools that used other artifacts of construction – rulers, hard hats, hammers, wrenches, etc. – for their logo/icon design. It seemed to be the right family of metaphor anyway, so I started thinking the ‘D’ and ‘A’ triangles could be carpenter’s squares.

What I started out with didn’t yet have the ruler markings, or the transparency, and was a little hacky in the SVG… but it could have those markings. With Tomas’ go-ahead, I made the triangles into carpenter’s squares and created all of the various sizes needed for the icon:

[DevAssistant icon sheet]

So we had a set of icons that could work! I exported them out to PNGs and tarred them up for Tomas and went to work on the logo.

Now why didn’t I start with the logo? Well, I decided to start with the icon just because the icon had the most amount of constraints on it – there’s certain requirements in terms of the sizes a desktop icon has to read at, and I wanted it to fit in with the style of other GNOME icons… so I figured, start where the most constraints are, and it’s easier to adapt what you come up with there in the arena where you have less constraints. This may have been a different story if the logo had more constraints – e.g., if there was a family of app brands it had to fit into.

So logos are a bit different than icons in that people like to print them on things in many different sizes, and when you pay for printed objects (especially screen-printed T-shirts) you pay for color, and it can be difficult to do effects like drop shadows and gradients. (Not impossible, but certainly more of a pain. :) ) The approach I took with the logo, then, was to simplify the design and flatten the colors down compared to the icon.

Anyhow, here’s the first set of ideas I sent to Tomas for the logomark & logotype:

[first set of logo ideas]

From my email to him explaining the mockups:

Okay! Attached is a comp of two logo variations. I have it plain and flat in A & B (A is vertical, and B is a horizontal version of the same thing.) C & D are the same except I added a little faint mirror image frame to the blue D and A triangles – I was just playing around and it made me think of scaffolding which might be a nice analogy. The square scaffolding shape the logomark makes could also be used to create a texture/pattern for the website and associated graphics.

The font is an OFL font called Spinnaker – I’ve attached it and the OFL that it came with. The reason I really liked this font in particular compared to some of the others I evaluated is that the ‘A’ is very pointed and sharp like the triangles in the logo mark, and the ratio of space between the overall size of some of the lowercase letters (e.g., ‘a’ and ‘e’) to their enclosed spaces seemed similar to the ratio of the size of the triangles in the logomark and the enclosed space in the center of the logomark. I think it’s also a friendly-looking font – I would think an assistant to somebody would have a friendly personality to them.

Anyway, feel free to be brutal and let me know what you think, and we can go with this or take another direction if you’d prefer.

Tomas’ team unanimously favored the scaffolding versions (C&D), but were hoping the mirror image could be a bit darker for more contrast. So I did some versions with the mirror image at different darknesses:

[scaffolding shade variations]

I believe they picked B or C, and…. we have a logo.

Overall, this was a very smooth, painless logo design process for a very easy-going and cordial “customer.” :)

July 29, 2014

Prefeitura de Belo Horizonte

This is a project we did for a competition (http://portalpbh.pbh.gov.br/pbh/ecp/comunidade.do?app=concursocentroadministrativo) for the new city hall of Belo Horizonte (Brazil). It didn't win (the link shows the winning entries), but we are pretty happy about the project anyway. The full presentation boards are at the bottom of this article, as well as the blender model. Below is...


July 24, 2014

Predicting planetary visibility with PyEphem

Part 1: Basic Planetary Visibility

All through the years I was writing the planet observing column for the San Jose Astronomical Association, I was annoyed at the lack of places to go to find out about upcoming events like conjunctions, when two or more planets are close together in the sky. It's easy to find out about conjunctions in the next month, but not so easy to find sites that will tell you several months in advance, like you need if you're writing for a print publication (even a club newsletter).

For some reason I never thought about trying to calculate it myself. I just assumed it would be hard, and wanted a source that could spoon-feed me the predictions.

The best source I know of is the RASC Observer's Handbook, which I faithfully bought every year and checked each month so I could enter that month's events by hand. Except for January and February, when I didn't have the next year's handbook yet by the time my column went to press and I was on my own. I have to confess, I was happy to get away from that aspect of the column when I moved.

In my new town, I've been helping the local nature center with their website. They had some great pages already, like a What's Blooming Now? page that keeps track of which flowers are blooming now and only shows the current ones. I've been helping them extend it by adding features like showing only flowers of a particular color, separating the data into CSV databases so it's easier to add new flowers or butterflies, and so forth. Eventually we hope to build similar databases of birds, reptiles and amphibians.

And recently someone suggested that their astronomy page could use some help. Indeed it could -- it hadn't been updated in about five years. So we got to work looking for a source of upcoming astronomy events we could use as a data source for the page, and we found sources for a few things, like moon phases and eclipses, but not much.

Someone asked about planetary conjunctions, and remembering how I'd always struggled to find that data, especially in months when I didn't have the RASC handbook yet, I got to wondering about calculating it myself. Obviously it's possible to calculate when a planet will be visible, or whether two planets are close to each other in the sky. And I've done some programming with PyEphem before, and found it fairly easy to use. How hard could it be?

Note: this article covers only the basic problem of predicting when a planet will be visible in the evening. A followup article will discuss the harder problem of conjunctions.

Calculating planet visibility with PyEphem

The first step was figuring out when planets were up. That was straightforward. Make a list of the easily visible planets (remember, this is for a nature center, so people using the page aren't expected to have telescopes):

import ephem

planets = [
    ephem.Moon(),
    ephem.Mercury(),
    ephem.Venus(),
    ephem.Mars(),
    ephem.Jupiter(),
    ephem.Saturn()
    ]

Then we need an observer with the right latitude, longitude and elevation. Elevation is apparently in meters, though they never bother to mention that in the PyEphem documentation:

observer = ephem.Observer()
observer.name = "Los Alamos"
observer.lon = '-106.2978'
observer.lat = '35.8911'
observer.elevation = 2286  # meters, though the docs don't actually say

Then we loop over the date range for which we want predictions. For a given date d, we're going to need to know the time of sunset, because we want to know which planets will still be up after nightfall.

sun = ephem.Sun()
observer.date = d
sunset = observer.previous_setting(sun)

Then we need to loop over planets and figure out which ones are visible. It seems like a reasonable first approach to declare that any planet that's visible after sunset and before midnight is worth mentioning.

Now, PyEphem can tell you directly the rising and setting times of a planet on a given day. But I found it simplified the code if I just checked the planet's altitude at sunset and again at midnight. If either one of them is "high enough", then the planet is visible that night. (Fortunately, here in the mid latitudes we don't have to worry that a planet will rise after sunset and then set again before midnight. If we were closer to the arctic or antarctic circles, that would be a concern in some seasons.)

import math

min_alt = 10. * math.pi / 180.   # "high enough" threshold: 10 degrees, in radians
for planet in planets:
    observer.date = sunset
    planet.compute(observer)
    if planet.alt > min_alt:
        print planet.name, "is already up at sunset"

Easy enough for sunset. But how do we set the date to midnight on that same night? That turns out to be a bit tricky with PyEphem's date class. Here's what I came up with:

    midnight = list(observer.date.tuple())
    midnight[3:6] = [7, 0, 0]
    observer.date = ephem.date(tuple(midnight))
    planet.compute(observer)
    if planet.alt > min_alt:
        print planet.name, "will rise before midnight"

What's that 7 there? That's Greenwich Mean Time when it's midnight in our time zone. It's hardwired because this is for a web site meant for locals. Obviously, for a more general program, you should get the time zone from the computer and add accordingly, and you should also be smarter about daylight savings time and such. The PyEphem documentation, fortunately, gives you tips on how to deal with time zones. (In practice, though, the rise and set times of planets on a given day doesn't change much with time zone.)
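
For a more general version, that hour could be derived from the system clock, roughly like this (a sketch that ignores daylight savings; time.altzone and tm_isdst would handle that):

import time

utc_offset_hours = -time.timezone // 3600     # e.g. -7 for US Mountain
midnight_utc_hour = (-utc_offset_hours) % 24  # e.g. 7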

And now you have your predictions of which planets will be visible on a given date. The rest is just a matter of writing it out into your chosen database format.

In the next article, I'll cover planetary and lunar conjunctions -- which were superficially very simple, but turned out to have some tricks that made the programming harder than I expected.

July 23, 2014

Watch out for DRI3 regressions

DRI3 has plenty of necessary fixes for X.org and Wayland, but it's still young in its integration. It's been integrated in the upcoming Fedora 21, and recently in Arch as well.

If WebKitGTK+ applications hang or become unusably slow when an HTML5 video is supposed to be playing, you might be hitting this bug.

If Totem crashes on startup, it's likely this problem, reported against cogl for now.

Feel free to add a comment if you see other bugs related to DRI3, or have more information about those.

Update: Wayland is already perfect, and doesn't use DRI3. The "DRI2" structures in Mesa are just that, structures. With Wayland, the DRI2 protocol isn't actually used.

Here’s a low-barrier way to help improve FLOSS apps – AppStream metadata: Round 1

UPDATE: This program is full now!

We are so excited that we’ve got the number of volunteers we needed to assign all of the developer-related packages we identified for this round! THANK YOU! Any further applications will be added to a wait list (in case any of the assignees need to drop any of their assigned packages.) Depending on how things go, we may open up another round in a couple of weeks or so, so we’ll keep you posted!

Thanks again!!

– Mo, Ryan, and Hughsie


[AppStream logo]

Do you love free and open source software? Would you like to help make it better, but don’t have the technical skills to know where you can jump in and help out? Here is a fantastic opportunity!

The Problem

There is a cross-desktop, cross-distro Freedesktop.org project called AppStream. In a nutshell, AppStream is an effort to standardize metadata about free and open source applications. Rather than every distro having its own separately-written description for Inkscape, for example, we’d have a shared, high-quality description of Inkscape that would be available to users of all distros. Why is this kind of data important? It helps free desktop users discover applications that might meet their needs – for example, via searching software center applications (such as GNOME Software and Apper.)

Screenshot of GNOME Software showing app metadata in action!

Running this project in a collaborative way is also a great way for us to combine efforts and come up with great quality content for everyone in the FLOSS community.

Contributors from Fedora and other distros have been working together to build the infrastructure to make this project work. But, we don’t yet have even close to full metadata coverage of the thousands of FLOSS applications we ship. Without metadata for all of the applications, users could be missing out on great applications or may opt out of installing an app that would work great for them because they don’t understand what the app does or how it could meet their needs.

The Plan

Ryan Lerch, among other contributors, has been working very hard for many weeks now generating a lot of the needed metadata, but as of today we only have roughly 25% coverage for the desktop packages in Fedora. We’d love to see that number increase significantly for Fedora 21 and beyond, but we need your help to accomplish that!

Ryan, Richard Hughes, and I recently talked about the ongoing effort. Progress is slower than we’d like, and we have fewer contributors than we’d like – but that makes it a great opportunity for new contributors, because of the low barrier to entry and the big impact the work has!

So along that line, we thought of an idea for an ongoing program that we’d like to pilot: Basically, we’ll chunk the long list of applications that need the metadata into thematic lists – for example, graphics applications, development applications, social media applications, etc. etc. Each of those lists we’ll break into chunks of say 10 apps each, and volunteers can pick up those chunks and submit metadata for just those 10.

The specific metadata we are looking for in this pilot is a brief summary about what the application is and a description of what the application does. You do not need to be a coder to help out; you’ll need to be able and willing to research the applications in your chunk and draft an openly-licensed paragraph (we’ll provide specific guidelines) and submit it via a web form on github. That’s all you need to do.

This blog post will kick off our pilot (“round 1”) of this effort, and we’ll be focusing on applications geared towards developers.

Your mission

If you choose to participate in this program, your mission will be to research and write up both brief summaries about and long-form descriptions for each of ~10 free and open source applications.

You might want to check out the upstream sites for each application, see if any distros downstream have descriptions for the app, maybe install and try the app out for yourself, or ask current users of the app about it and its strengths and weaknesses. The final text you submit, however, will need to be original writing created by you.

Specifications

Summary field for application

The summary field is a short, one-line description of what the application enables users to do:

  • It should be around 5 – 12 words long, and a single sentence with no ending punctuation.
  • It should start with action verbs that describe what it allows the user to do, for example, “Create and edit Scalable Vector Graphics images” from the Inkscape summary field.
  • It shouldn’t contain extraneous information such as “Linux,” “open source,” “GNOME,” “gtk,” “kde,” “qt,” etc. It should focus on what the application enables the user to do, and not the technical or implementation details of the app itself.
Examples

Here are some examples of good AppStream summary metadata:

  • “Add or remove software installed on the system” (gpk-application / 8 words)
  • “Create and edit Scalable Vector Graphics images” (Inkscape / 7 words)
  • “Avoid the robots and make them crash into each other” (GNOME Robots / 10 words)
  • “View and manage system resources” (GNOME System Monitor / 5 words)
  • “Organize recipes, create shopping lists, calculate nutritional information, and more.” (Gourmet / 10 words)

Description field for application

The description field is a longer-form description of what the application does and how it works. It can be between 1 – 3 short paragraphs / around 75-100 words long.

Examples

Here are some examples of good AppStream description metadata:

  • GNOME System Monitor / 76 words:
    “System Monitor is a process viewer and system monitor with an attractive, easy-to-use interface.

    “System Monitor can help you find out what applications are using the processor or the memory of your computer, can manage the running applications, force stop processes not responding, and change the state or priority of existing processes.

    “The resource graphs feature shows you a quick overview of what is going on with your computer displaying recent network, memory and processor usage.”

  • Gourmet / 94 words:
    “Gourmet Recipe Manager is a recipe-organizer that allows you to collect, search, organize, and browse your recipes. Gourmet can also generate shopping lists and calculate nutritional information.

    “A simple index view allows you to look at all your recipes as a list and quickly search through them by ingredient, title, category, cuisine, rating, or instructions.

    “Individual recipes open in their own windows, just like recipe cards drawn out of a recipe box. From the recipe card view, you can instantly multiply or divide a recipe, and Gourmet will adjust all ingredient amounts for you.”

  • GNOME Robots / 102 words:
    “It is the distant future – the year 2000. Evil robots are trying to kill you. Avoid the robots or face certain death.

    “Fortunately, the robots are extremely stupid and will always move directly towards you. Trick them into colliding into each other, resulting in their destruction, or into the junk piles that result. You can defend yourself by moving the junk piles, or escape to safety with your handy teleportation device.

    “Your supply of safe teleports is limited, and once you run out, teleportation could land you right next to a robot, who will kill you. Survive for as long as possible!”

Content license

These summaries and descriptions are valuable content, and in order to be able to use them, you’ll need to be willing to license them under a license such that the AppStream project and greater free and open source software community can use them.

We are requesting that all submissions be licensed under the Creative Commons’ CC0 license.

What’s in it for you?

Folks who contribute metadata to this effort through this program will be recognized in the upstream appdata credits as official contributors to the project and will also be awarded a special Fedora Badges badge for contributing appdata!

[AppStream contributor badge]

When this pilot round is complete, we’ll also publish a Fedora Magazine article featuring all of the contributors – including you!

Oh, and of course – you’ll be making it easier for all free and open source software users (not just Fedora!) to find great FLOSS software and make their lives better! :)

Sign me up! How do I get started?


  1. First, if you don’t have one already, create an account at GitHub.
  2. In order to claim your badge and to interact with our wiki, you’ll need a Fedora account. Create a Fedora account now if you don’t already have one.
  3. Drop an email to appstream at lists dot fedoraproject [.] org with your GitHub username and your Fedora account username so we can register you as a contributor and assign you your applications to write metadata for!
  4. For each application you’ll need to write metadata for, we’ve generated an XML document in the Fedora AppStream GitHub repo. We will link you up to each of these when we give you your assignment.
  5. For each application, research the app via upstream websites, reviews, talking to users, and trying out the app for yourself, then write up the summary and description fields to the specifications given above.
  6. To submit your metadata, log into GitHub and visit the XML file for the given application we gave you in our assignment email. Take a look at this example appstream metadata file for an application called Insight. You’ll notice in the upper right corner there is an ‘Edit’ button – click on this, edit the ‘Summary’ and ‘Description’ fields, edit the copyright statement towards the very top of the file with your information, and then submit them using the form at the bottom.
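
To give you an idea of what you’ll be editing, here is a rough, hypothetical sketch of the relevant parts of such a file (the app id, copyright line and text are all made up; the assignment files themselves are authoritative for the exact tags):

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name <your.email@example.com> -->
<application>
  <id type="desktop">myapp.desktop</id>
  <summary>Create and organize example widgets</summary>
  <description>
    <p>
      MyApp is a tool that helps you collect, organize, and share widgets...
    </p>
  </description>
</application>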

Once we’ve received all of your submissions, we’ll update the credits file and award you your badge. :)

If you end up committing to a batch of applications and find you don’t have the time to finish, we ask that you let us know so we can assign the apps to someone else. We’re asking that you take two weeks to complete the work – if you need more time, no problem, just let us know. We just want to make sure we reopen assigned apps for others to join in and help out with.

Let’s do this!

Ready to go? Drop us a line!

GUADEC 2014 Map

Want a custom map for GUADEC 2014?

Here’s a map I made that shows the venue, the suggested hotels, transit ports (airport/train station), vegetarian & veggie-friendly restaurants, and a few sights that look interesting.

I made this with Google Map Engine, exported to KML, and also changed to GeoJSON and GPX.

If you want an offline map on an Android phone, I suggest opening up the KML file with Maps.Me (a proprietary OpenStreetMap-based app, but nice) or the GPX in OSMand (open source and powerful, but really clunky).

You can also use the Google Maps Engine version with Google Maps Engine on your Android phone, but it doesn’t really support offline mode all that well, so it’s frustratingly unreliable at best. (But it does have pretty icons!)

See you at GUADEC!

July 21, 2014

Development activity is moving to Github


In just under a week’s time, on Sunday 27th July 2014, I’ll be moving MyPaint’s old Gitorious git repositories over to the new GitHub ones fully, and closing down the old location. For a while now we’ve been maintaining the codelines in parallel to give people some time to switch over and get used to the new site; it’s time to formally switch over now.

If you haven’t yet changed your remotes over on existing clones, now would be a very good time to do that!
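
For example, for the main application repository (assuming your remote is named origin):

$ git remote set-url origin https://github.com/mypaint/mypaint.git
$ git remote -v   # verify the change
$ git fetch origin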

The bug tracker is moving from Gna! to Github’s issues tracker too – albeit rather slowly. This is less a matter of just pushing code to a new place and telling people about the move; rather we have to triage bugs as we go, and the energy and will to do that has been somewhat lacking of late. Bug triage isn’t fun, but it needs to be done.

(Github’s tools are lovely, and we’re already benefiting from having more eyeballs focussed on the projects. libmypaint has started using Travis and Appveyor for CI, the MyPaint application’s docs will benefit tons from being more wiki-like to edit, and the issue tracker is just frankly better documented and nicer for pasting in screencaps and exception dumps)

FreeCAD release 0.14

This is certainly a bit overdue, since the official launch already happened more than two weeks ago, but at last, here it goes: The 0.14 version of FreeCAD has been released! It happened a long, long time after 0.13, about one year and a half, but we've decided not to let that happen again next time,...

July 19, 2014

Stellarium 0.13.0 has been released!

After 9 months of development, the Stellarium development team is proud to announce the release of version 0.13.0 of Stellarium.

This release brings some interesting new features:
- New modulated core.
- Refactored shadows and introduced normal mapping.
- Sporadic meteors; meteors now have colors.
- Comet tails rendering.
- New translatable strings and new textures.
- New plugin: Equation of Time - provides the solution of the equation of time.
- New plugin: Field of View - provides shortcuts for quickly changing the field of view.
- New plugin: Navigational Stars - marks the 58 navigational stars in the sky.
- New plugin: Pointer Coordinates - shows the coordinates of the mouse pointer.
- New plugin: Meteor Showers - provides visualization of meteor showers.
- New version of the Satellites plugin: introduces star-like satellites and bug fixes.
- New version of the Exoplanets plugin: displays potentially habitable exoplanets; performance improvements and code refactoring.
- New version of the Angle Measure plugin: displays the position angle.
- New version of the Quasars plugin: performance improvements; added a marker_color parameter.
- New version of the Pulsars plugin: performance improvements; displays pulsars with glitches; configurable marker colors for the different types of pulsars.
- New versions of the Compass Marks, Oculars, Historical Supernovae, Observability Analysis and Bright Novae plugins: bug fixes, code refactoring and improvements.

There have also been a large number of bug fixes and serious performance improvements.

We have updated the configuration file and the Solar System file, so if you have an existing Stellarium installation, we highly recommend resetting your settings when you install the new version (you can choose the required options in the installer).

A huge thanks to our community whose contributions help to make Stellarium better!

July 18, 2014

Fri 2014/Jul/18

July 17, 2014

Time-lapse photography: a simple Arduino-driven camera intervalometer

[Arduino intervalometer] While testing my automated critter camera, I was getting lots of false positives caused by clouds gathering and growing and then evaporating away. False positives are annoying, but I discovered that it's fun watching the clouds grow and change in all those photos ... which got me thinking about time-lapse photography.

First, a disclaimer: it's easy and cheap to just buy an intervalometer. Search for timer remote control or intervalometer and you'll find plenty of options for around $20-30. In fact, I ordered one. But, hey, it's not here yet, and I'm impatient. And I've always wanted to try controlling a camera from an Arduino. This seemed like the perfect excuse.

Why an Arduino rather than a Raspberry Pi or BeagleBone? Just because it's simpler and cheaper, and this project doesn't need much compute power. But everything here should be applicable to any microcontroller.

My Canon Rebel Xsi has a fairly simple wired remote control plug: a standard 2.5mm stereo phone plug. I say "standard" as though you can just walk into Radio Shack and buy one, but in fact it turned out to be surprisingly difficult, even when I was in Silicon Valley, to find them. Fortunately, I had found some, several years ago, and had cables already wired up waiting for an experiment.

The outside connector ("sleeve") of the plug is ground. Connecting ground to the middle ("ring") conductor makes the camera focus, like pressing the shutter button halfway; connecting ground to the center ("tip") conductor makes it take a picture. I have a wired cable release that I use for astronomy and spent a few minutes with an ohmmeter verifying what did what, but if you don't happen to have a cable release and a multimeter there are plenty of Canon remote control pinout diagrams on the web.

Now we need a way for the controller to connect one pin of the remote to another on command. There are ways to simulate that with transistors -- my Arduino-controlled robotic shark project did that. However, the shark was about a $40 toy, while my DSLR cost quite a bit more than that. While I did find several people on the web saying they'd used transistors with a DSLR with no ill effects, I found a lot more who were nervous about trying it. I decided I was one of the nervous ones.

The alternative to transistors is to use something like a relay. In a relay, voltage applied across one pair of contacts -- the signal from the controller -- creates a magnetic field that closes a switch and joins another pair of contacts -- the wires going to the camera's remote.

But there's a problem with relays: that magnetic field, when it collapses, can send a pulse of current back up the wire to the controller, possibly damaging it.

There's another alternative, though. An opto-isolator works like a relay but without the magnetic pulse problem. Instead of a magnetic field, it uses an LED (internally, inside the chip where you can't see it) and a photo sensor. I bought some opto-isolators a while back and had been looking for an excuse to try one. Actually two: I needed one for the focus pin and one for the shutter pin.

How do you choose which opto-isolator to use out of the gazillion options available in a components catalog? I don't know, but when I bought a selection of them a few years ago, it included a 4N25, 4N26 and 4N27, which seem to be popular and well documented, as well as a few other models that are so unpopular I couldn't even find a datasheet for them. So I went with the 4N25.

Wiring an opto-isolator is easy. You do need a resistor across the inputs (presumably because it's an LED). 380Ω is apparently a good value for the 4N25, but it's not critical. I didn't have any 380Ω but I had a bunch of 330Ω so that's what I used. The inputs (the signals from the Arduino) go between pins 1 and 2, with a resistor; the outputs (the wires to the camera remote plug) go between pins 4 and 5, as shown in the diagram on this Arduino and Opto-isolators discussion, except that I didn't use any pull-up resistor on the output.

Then you just need a simple Arduino program to drive the inputs. Apparently the camera wants to see a focus half-press before it gets the input to trigger the shutter, so I put in a slight delay there, and another delay while I "hold the shutter button down" before releasing both of them.

Here's some Arduino code to shoot a photo every ten seconds:

// Arduino pins wired to the opto-isolator inputs
int focusPin = 6;
int shutterPin = 7;

// Timings, in milliseconds
int focusDelay = 50;          // hold the half-press before firing
int shutterOpen = 100;        // how long to "hold the button down"
int betweenPictures = 10000;  // interval between shots

void setup()
{
    pinMode(focusPin, OUTPUT);
    pinMode(shutterPin, OUTPUT);
}

void snapPhoto()
{
    digitalWrite(focusPin, HIGH);    // half-press: focus
    delay(focusDelay);
    digitalWrite(shutterPin, HIGH);  // full press: trip the shutter
    delay(shutterOpen);
    digitalWrite(shutterPin, LOW);   // release both "buttons"
    digitalWrite(focusPin, LOW);
}

void loop()
{
    delay(betweenPictures);
    snapPhoto();
}

Naturally, since then we haven't had any dramatic clouds, and the lightning storms have all been late at night after I went to bed. (I don't want to leave my nice camera out unattended in a rainstorm.) But my intervalometer seemed to work fine in short tests. Eventually I'll make some actual time-lapse movies ... but that will be a separate article.

July 16, 2014

Wavelet Decompose (Again)

Yes, more fun things you can do with Wavelet Scales.

If you’ve been reading this blog for a bit (or just read through any of my previous postprocessing tutorials), then you should be familiar with Wavelet Decompose. I use them all the time for skin retouching as well as other things. I find that being able to think of your images in terms of detail scales opens up a new way of approaching problems (and some interesting solutions).

A short discussion on the GIMP Users G+ community led the member +Marty Keil to suggest a tutorial on using wavelets for other things (particularly sharpening). Since I tend to use wavelet scales often in my processing (including sharpening), I figured I would sit down and enumerate some ways to use them, like:

  • Skin Smoothing (Redux)
  • Sharpening
  • Stain Removal

Wavelets? What?

For our purposes (image manipulation), wavelet decomposition allows us to consider the image as multiple levels of detail components that, when combined, yield the full image. That is, we can take an image and separate it out into multiple layers, with each layer representing a discrete level of detail.

To illustrate, let’s have a look at my rather fetching model:


It was kindly pointed out to me that the use of the Lena image might perpetuate the problems with the objectification of women. So I swapped out the Lena image with a model that doesn't carry those connotations.

Running Wavelet Decompose on the image yields these 6 layers, arranged in increasing order of detail magnitude (scales 1-5 + a residual layer).


Notice that each of the scales contains a particular set of details, starting with the finest and becoming larger until you reach the residual scale. The residual scale doesn’t contain any fine details; instead, it consists mostly of color and tonal information.

This is very handy if you need to isolate particular features for modifications. Simply find the scale (or two) that contain the feature and modify it there without worrying as much about other details at the same location.

The Wavelet Decompose plug-in actually sets each of these layer modes (except Residual) to “Grain Merge”. This allows each layer to contribute its details to the final result (which will look identical to the original starting layer with no modifications). The “Grain Merge” layer blend mode means that pixels that are 50% value (RGB(127,127,127)) will not affect the final result. This also means that if we paint on one of the scale layers with gray, it will effectively erase those details from the final image (keep this in mind for later).
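
To make the arithmetic concrete, here is a minimal Python/NumPy sketch of the same idea. This is my own illustration, not the plug-in’s actual code (the plug-in uses its own wavelet kernel; a Gaussian blur stands in for it here):

import numpy as np
from scipy.ndimage import gaussian_filter

def wavelet_decompose(img, levels=5):
    """Split a 2-D grayscale float array (0-255) into detail scales
    plus a residual. Each scale is the difference between successive
    blurs, offset by 128 so that "no detail" is 50% gray, matching
    the Grain Merge convention."""
    scales = []
    current = img.astype(np.float64)
    for i in range(levels):
        blurred = gaussian_filter(current, sigma=2.0 ** i)
        scales.append(current - blurred + 128.0)  # detail at this scale
        current = blurred
    return scales, current  # 'current' is now the residual

def recombine(scales, residual):
    """Grain Merge each scale back over the residual:
    out = out + scale - 128."""
    out = residual.copy()
    for scale in scales:
        out = out + scale - 128.0
    return np.clip(out, 0, 255)

Because the sums telescope, recombine(*wavelet_decompose(img)) hands back the original image exactly, and a scale painted flat 50% gray simply drops out of the sum.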

Skin Smoothing (Redux)

I previously talked about using Wavelet Decompose for image retouching:


The first link was my original post on how I use wavelets to smooth skin tones. The second and third are examples of applying those principles to portraits. The last two articles are complete walkthroughs of a postprocessing workflow, complete with full-resolution examples to download if you want to try it and follow along.

I guess my point is that I’m re-treading a well worn path here, but I have actually modified part of my workflow so it’s not for naught (sorry, I couldn’t resist).

Getting Started

So let’s have a look at using wavelets for skin retouching again. We’ll use my old friend Mairi for this.

Pat David Mairi Headshot Base Image Wavelet Decompose Frequency Separation
Mairi

When approaching skin retouching like this, I feel it’s important to pay attention to how light interacts with the skin. Light penetrates the epidermis and illuminates it from below the surface. Couple that with the different types of skin structures, and you get a complex surface to consider.

For instance, there are very fine details in skin such as faint wrinkles and pores. These often contribute to the perceived texture of skin overall. There is also the color and toning of the skin under the surface as well. These all contribute to what we will perceive.

Let’s have a look at a 100% crop of her forehead.

Pat David Mairi Headshot Forehead Closeup Wavelet Decompose Frequency Separation

If I decompose this image to wavelet scales, I can amplify the details on each level by isolating them over the original. So, turning off all the layers except the original and the first few wavelet scales will amplify the fine details:

Pat David Mairi Headshot Forehead Closeup Wavelet Decompose Frequency Separation
Wavelet scales 1,2,3 over the original image.

You may notice that these fine wavelet scales seem to sharpen up the image. Yes, but we’re not talking about them right now. Stick with me - we’ll look at them a little later.

In the same vein, if I leave only the original and the two biggest scales visible, I’ll get a nicely exaggerated view of the sub-surface imperfections:

Pat David Mairi Headshot Forehead Closeup Wavelet Decompose Frequency Separation
Wavelet scales 4,5 over the original image.

What we see here are uneven skin tones caused not by surface imperfections, but by deeper tones in the skin. It is this unevenness that I often try to subdue, and subduing it, I think, contributes to a more pleasing overall skin tone.

To illustrate, here I have used a bilateral blur on the largest detail scale (Wavelet scale 5) only. Consider the rather marked improvement over the original from working on this single detail scale. Notice also that all of the finer details remain, keeping the skin texture looking real.

Pat David Mairi Headshot Forehead Closeup Wavelet Decompose Frequency Separation
Smoothing only the largest detail scale (Wavelet scale 5) results
(mouseover to compare to original)

Smoothing Skin Tones

With those results in mind, I can illustrate how I will generally approach this type of skin retouching on a face. I usually start by considering specific sections of a face. I try to isolate my work along common facial contours to avoid anything strange happening across features (like smile lines or noses).

Pat David Mairi Headshot Retouch Regions Wavelet Decompose Frequency Separation

I also like to work in these regions as shown because the amount of smoothing needed is not always the same. The forehead may require more than the cheeks, and both may require less than the nose, for instance. Working region by region lets me tailor the retouching separately for each one, arriving at a more consistent result across the entire face.

I’ll use the free-select tool to create a selection of my region, usually with the “Feather edges” option turned on with a large-ish radius (around 30-45 pixels usually). This lets my edits blend a little more smoothly into the untouched portions of the image.

These days I’ve adjusted my workflow to minimize how much I actually retouch. I’ll usually look at the residual layer first to check the color tones across an area. If they are too spotty or blotchy, I’ll use a bilateral blur to even them out. There is no bilateral blur built into GIMP directly, so on the suggestion of David Tschumperlé (G'MIC) I’ve started using G'MIC with:

Filters → G'MIC...
Repair → Smooth [bilateral]

Once I’m happy with the results on the residual layer (or it doesn’t need any work), I’ll look at the largest detail scale (usually Wavelet scale 5). Lately, this has been the scale level that usually produces the greatest impact quickly. I’ll usually use a Spatial variance of 10, and a Value variance of 7 (with 2 iterations) on the bilateral blur filter. Of course, adjust these as necessary to suit your image and taste.
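
If you want to experiment with bilateral smoothing outside of GIMP, OpenCV’s Python bindings ship a bilateral filter as well. Here is a rough sketch; the filenames are placeholders, and OpenCV’s sigmaColor/sigmaSpace parameters only loosely correspond to G'MIC’s value/spatial variances, so the numbers will need re-tuning:

import cv2

# Placeholder filename: the largest detail scale, exported from GIMP.
scale = cv2.imread("wavelet-scale-5.png")

# d=-1 lets OpenCV size the filter neighborhood from sigmaSpace.
# Loosely: sigmaColor plays the role of the value variance,
# sigmaSpace the role of the spatial variance.
smoothed = cv2.bilateralFilter(scale, -1, 7, 10)

cv2.imwrite("wavelet-scale-5-smoothed.png", smoothed)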

Here is the result of following those steps on the image of Mairi (less than 5 minutes of work):

Pat David Mairi Headshot Forehead Closeup Wavelet Decompose Frequency Separation Residual
Bilateral smoothing on Residual and Wavelet scale 5 only
(mouseover to compare to original)

This was only touching the Residual and Wavelet scale 5 with a bilateral blur and nothing else. As you can see, this method provides a very easy way to get a great base for further work (spot healing as needed, etc.).

Sharpening

I had actually mentioned this in each of my previous workflow tutorials, but it’s worth repeating here. I tend to use the lowest couple of wavelet scales to sharpen my images when I’m done. This is really just a manual version of using the Wavelet Sharpen plugin.

The first couple of detail scales will contain the highest-frequency details. I’ve found that using them to sharpen an image works fantastically well. Here, for example, is our photo of Mairi from above after retouching, but now with a copy of Wavelet scales 1 & 2 over the image to sharpen those details:

Pat David Mairi Headshot Forehead Closeup Wavelet Decompose Frequency Separation Sharpen
Wavelet scale 1 & 2 copied over the result to sharpen.
(mouseover to compare)

I’ve purposefully left both of the detail scales at full opacity to demonstrate the effect. I feel this is a far better method for sharpening than the regular Sharpen filter (I’ve never gotten good results from it) or even Unsharp Mask (USM), which can tend to halo around high-contrast areas depending on the settings.

I also adjust the opacity of the scales to control how much they sharpen. If I wanted to avoid sharpening the background, for instance, I would either mask it out or just paint gray on the detail scale to erase the data in that area.
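
Continuing the NumPy sketch from earlier, opacity-controlled sharpening amounts to adding a weighted copy of the chosen scales back over the image. The sharpen function below is a hypothetical helper of mine, not part of any GIMP plug-in:

def sharpen(img, scales, which=(0, 1), opacity=1.0):
    """Add extra copies of selected detail scales over img.

    which:   indices into scales; (0, 1) means the two finest scales
    opacity: 1.0 mimics a full-opacity Grain Merge copy
    """
    out = img.astype(np.float64)
    for i in which:
        out = out + opacity * (scales[i] - 128.0)
    return np.clip(out, 0, 255)

Passing which=(2, 3, 4) would instead boost the coarser scales, which is the local contrast trick described next.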

It doesn’t need to stop at just fine-detail sharpening, though. The nature of the wavelet decomposition is that you also get scale data that can be useful for enhancing contrast on larger details. For instance, if I wanted to enhance the local contrast in the sweater of my image, I could use one of the larger scales over the image again and use a layer mask to control the areas that are affected.

To illustrate, here I have also copied scales 3, 4, and 5 over my image. I’ve applied layer masks to the layers to only allow them to affect the sweater. Using these scales applies a nice local contrast, adding a bit of “pop” to the sweater texture without increasing contrast on the model’s face or hair.

Pat David Mairi Headshot Wavelet Decompose Frequency Separation Sharpen Enhance
Using coarser detail scales to add some local contrast to the texture of the sweater
(mouseover to compare to previous)

Normally, if I didn’t have a need to work on wavelet scales, I would just use the Wavelet Sharpen plugin to add a touch of sharpening as needed. If I do find it useful (for whatever reason) to work on detail scales already, then I normally just use the scales directly for manually sharpening the image. Occasionally I’ll create the wavelet scales just to have access to the coarse detail levels to bump local contrast to taste, too.

Once you start thinking in terms of detail scales, it’s hard not to get sucked into finding all sorts of uses for them that can be very, very handy.

Stain Removal

What if the thing we want to adjust is not sub-dermal skin retouching, but rather something like a stain on a child’s clothing? As far as wavelets are concerned, it’s the same thing. So let’s look at something like this:

Pat David Wavelet Frequency Separation Stain
30% of the food made it in!

So there’s a small stain on the shirt. We can fix this easily, right?!

Let’s zoom 100% into the area we are interested in fixing:

Pat David Wavelet Frequency Separation Stain Zoom

If we run a Wavelet decomposition on this image, we can see that the areas we are interested in are mostly confined to the coarser scales + residual (scales 4, 5, and the residual):

Pat David Wavelet Frequency Separation Stain Zoom

More importantly, the very fine details that give texture to her shirt, like the weave of the cotton and stitching of the letters, are nicely isolated on the finer detail scales. We won’t really have to touch the finer scales to fix the stains - so it’s trivially easy to keep the texture in place.

As a comparison, imagine having to use a clone or heal tool to accomplish this. You would have a very hard time getting the cloth weave to match up correctly, thus creating a visual break that would make the repair more obvious.

I start on the residual scale, and work on getting the broad color information fixed. I like to use a combination of the Clone tool and the Heal tool to do this. Paying attention to the color areas I want to keep, I’ll use the Clone tool to bring in the correct tone with a soft-edged brush. I’ll then use the Heal tool to blend it a bit better into the surrounding textures.

For example, here is the work I did on the Residual scale to remove the stain color information:

Pat David Wavelet Frequency Separation Stain Zoom Residual Fix
Clone/Heal of the Wavelet Residual layer
(mouseover to compare to original)

Yes, I know it’s not a pretty patch; it’s just a quick pass to illustrate what the results can look like. Here is what the above changes to the Wavelet residual layer produce:

Pat David Wavelet Frequency Separation Stain Zoom Residual Fix
Composite image with retouching only on the Wavelet Residual layer
(mouseover to compare to original)

Not bad for a couple of minutes work on a single wavelet layer. I follow the same method on the next two wavelet scales 4 & 5. Clone similar areas into place and Heal to blend into the surrounding texture. After a few minutes, I arrive at this result:

Pat David Wavelet Frequency Separation Stain Zoom Repair Fix
Result of retouching Wavelet residual, 4, and 5 layers only
(mouseover to compare to original)

Perfect? No. It’s not. It was less than 5 minutes of work total. I could spend another 5 minutes or so and get a pretty darn good looking result, I think. The point is more about how easy it is once the image is considered with respect to levels of detail. Look where the color is and you’ll notice that the fabric texture remains essentially unchanged.

As the father of a three-year-old, believe me when I say that this technique has proved invaluable to me over the past few years...

Conclusion

I know I talk quite a bit about wavelet decomposition for retouching. There is just a wonderful bunch of tasks that become much easier when considering an image as a sum of discrete detail parts. It’s just another great tool to keep in mind as you work on your images.

Help support the site! Or don’t!
I’m not supporting my (growing) family or anything from this website. Seriously.
There is only one reason I am writing these tutorials and posts:
I love doing it.
Technically there is a second reason: to give back to the community. Others before me were instrumental in helping me learn things when I first got started, and I’m hoping to pay it forward here.

If you want to visit an ad, or make a donation, or even link/share my content, I would be absolutely grateful (and tickled pink). If you don’t, it’s not going to affect my writing and posting here one bit.

I’ll keep writing, and I’ll keep it free.
If you get any use out of this site, I only ask that you do one thing:
pay it forward.


July 15, 2014

Fanart by Anastasia Majzhegisheva – 11

Anastasia keeps playing with Morevna’s backstory, and this time she brings a short manga/comic strip.

July 14, 2014

Notes from Calligra Sprint. Part 2: Memory fragmentation in Krita fixed

During the second day of the Calligra sprint in Deventer we split into two small groups. Friedrich, Thorsten, Jigar and Jaroslaw discussed global Calligra issues, while Boud and I concentrated on the performance of Krita and its memory consumption.

We tried to find out why Krita is not fast enough for painting with big brushes on huge images. For our tests we created a two-layer image of 8k by 8k pixels (which is 3x256 MiB: 2 layers + projection) and started to paint with a 1k by 1k pixel brush. Just to compare, SAI Painting Tool simply forbids creating images larger than 5k by 5k pixels and brushes more than 500 pixels wide. And during these tests we found out a really interesting thing...

I guess everyone has at least once read about custom memory management in C++. All those custom new/delete operators and pool allocators usually seem so "geekish" and for "really special purposes only". To tell you the truth, I thought I would never need to use them in my life, because the standard library allocators "should be enough for everyone". Well, until curious things started to happen...

Well, the first sign of the problems appeared quite a long time ago. People started to complain that, according to system monitor tools (like 'top'), Krita ate quite a lot of memory. We could never reproduce it. What's more, 'massif' and internal tile counters always showed we had no memory leaks: we used exactly the number of tiles we needed to store the image of a particular size.

But while making these 8k-image tests, we started to notice that although the number of tiles didn't grow, the memory reported by 'top' grew quite significantly. Instead of occupying the usual 1.3 GiB such an image would need (layer data + about 400 MiB for brushes and textures), reported memory grew to 3 GiB and higher, until the OOM Killer woke up and killed Krita. This gave us clear evidence that we had a fragmentation problem.

Indeed, during every stroke we have to create about 15000(!) 16 KiB objects (tiles). It is quite probable that after a couple of strokes the memory becomes rather fragmented. So we decided to try boost::pool for allocating these chunks... and it worked! Instead of growing, the memory footprint stabilized at 1.3 GiB. And that is not even counting the fact that boost::pool doesn't return freed memory to the system until destruction or explicit purging. [0]

Now this new memory management code is already in master! According to some synthetic tests, painting should become a bit faster, not to mention the much smaller memory usage.

Conclusion:

If you see unusually high memory consumption in your application, and the results measured by massif differ significantly from what you see in 'top', you probably have a fragmentation problem. To prove it, try not returning memory back to the system, but reusing it instead. The consumption might fall significantly, especially if you allocate memory in different threads.



[0] - You can release unused memory by explicitly calling release_memory(), but 1) the pool must be ordered, which costs performance; and 2) the release_memory() operation takes about 20-30 seconds(!), so it is of no use to us.



July 13, 2014

Notes from Calligra Sprint in Deventer. Part 1: Translation-friendly code

Last weekend we had a really nice sprint in Deventer, which was hosted by Irina and Boudewijn (thank you very much!). We spent two days on discussions, planning, coding and profiling our software, which produced many fruitful results.

On Saturday we were mostly talking about and discussing our current problems, like porting Calligra to Qt5 and splitting libraries more sanely (e.g. we shouldn't demand that mobile applications compile and link against QWidget-based libraries). Although these problems are quite important, I will not describe them now (other people will blog about them very soon). Instead I'm going to tell you about a different problem we also discussed — translations.

The point is, when using the i18n() macro it is quite easy to make mistakes which will make a translator's life a disaster, so we decided to put together a set of rules of thumb that developers should follow to avoid creating such issues. Here are these five short rules:

  1. Avoid passing a localized string into an i18n macro
  2. Add context to your strings
  3. Undo commands must have (qtundo-format) context
  4. Use capitalization properly
  5. Beware of sticky strings
Next we will talk about each of the rules in detail:

1. Avoid passing a localized string into an i18n macro

They might not be compatible in case, gender or anything else you have no idea about.

// Such code is incorrect in 99% of the cases
QString str = i18n("foo bar");
i18n("Some nice string %1", str);


Example 1

// WRONG:
wrongString = i18n("Delete %1", XXX ? i18n("Layer") : i18n("Mask"))

// CORRECT:
correctString = XXX ? i18n("Delete Layer") : i18n("Delete Mask")

Such string concatenation is correct in English, but it is completely inappropriate in many languages, in which a noun can change its form depending on the case. The problem is that in the macro i18n("Mask") the word "Mask" is in the nominative case (it is a subject), but in the expression "Delete Mask" it is in the accusative case (it is an object). In Russian, for example, the two strings will be different, and the translator will not be able to solve the issue easily.

Example 2

// WRONG:
wrongString = i18n("Last %1", XXX ? i18n("Monday") : i18n("Friday"))

// CORRECT:
correctString = XXX ? i18n("Last Monday") : i18n("Last Friday")

This case is more complicated. Both words "Monday" and "Friday" are used in the nominative case, so they will not change their form. But "Monday" and "Friday" have different genders in Russian, so the adjective "Last" must change its form depending on the word that follows. Therefore we need separate strings for the two terms.

The tricky thing here is that we have 7 days in a week, so ideally we should have 7 separate strings for "Last ...", 7 more strings for "Next ..." and so on.

Example 3 — Using registry values

// WRONG:
KisFilter *filter = filterRegistry->getFilter(id);
i18n("Apply %1", filter->name())

// CORRECT: is there a correct way at all?
KisFilter *filter = filterRegistry->getFilter(id);
i18n("Apply: \"%1\"", filter->name())

Just imagine how many objects can be stored inside the registry. It can be a dozen, a hundred or a thousand objects. We cannot control the case, gender and form of each object in the list (can we?). The easiest approach here is to put the object name in quotes and "cite" it literally. This will hide the problem in most languages.

2. Add context to your strings

Prefer adding context to your strings rather than expecting translators to read your thoughts.

Here is an example of three strings for the blur filter. They illustrate the three most important translation contexts:

i18nc("@title:window", "Blur Filter")

Window titles are usually nouns (and translated as nouns). There is no limit on the size of the string.

i18nc("@action:button", "Apply Blur Filter")

Button actions are usually verbs. The length of the string is also not very important.

i18nc("@action:inmenu", "Blur")

Menu actions are also verbs, but the length of the string should be as short as possible.

3. Undo commands must have (qtundo-format) context

Adding this context tells the translators to use “Magic String” functionality. Such strings are special and are not reusable anywhere else.

In Krita and Calligra this context is now added automatically, because we use C++ type-checking mechanism to limit the strings passed to an undo command:

KUndo2Command(const KUndo2MagicString &text, KUndo2Command *parent);

4. Use capitalization properly

See KDE policy for details.

5. Beware of sticky strings

When the same string without a context is reused in different places (and especially in different files), double-check whether that is appropriate.

E.g. i18n("Duplicate") can be either a brush engine name (noun) or a menu action for cloning a layer (verb). Obviously, not all languages use the same form of a word for both the verb and noun meanings. Such strings must be split by assigning them different contexts.

Alexander Potashev has created a special Python script that can iterate through all the strings in a .po file and report all the sticky strings in a convenient format.

Conclusion

Of course, all these rules are only recommendations. They all have exceptions and limitations, but following them in the most trivial cases will make the life of translators much easier.

In the next part of my notes from the sprint I will write about how Boud and I hunted down memory fragmentation problems in Krita on Sunday... :)

July 12, 2014

Trapped our first pack rat

[White throated woodrat in a trap] One great thing about living in the country: the wildlife. I love watching animals and trying to photograph them.

One down side of living in the country: the wildlife.

Mice in the house! Pack rats in the shed and the crawlspace! We found out pretty quickly that we needed to learn about traps.

We looked at traps at the local hardware store. Dave assumed we'd get simple snap-traps, but I wanted to try other options first. I'd prefer to avoid killing if I don't have to, especially killing in what sounds like a painful way.

They only had one live mousetrap. It was a flimsy plastic thing, and we were both skeptical that it would work. We made a deal: we'd try two of them for a week or two, and when (not if) they didn't work, then we'd get some snap-traps.

We baited the traps with peanut butter and left them in the areas where we'd seen mice. On the second morning, one of the traps had been sprung, and sure enough, there was a mouse inside! Or at least a bit of fur, bunched up at the far inside end of the trap.

We drove it out to open country across the highway, away from houses. I opened the trap, and ... nothing. I looked in -- yep, there was still a furball in there. Had we somehow killed it, even in this seemingly humane trap?

I pointed the open end down and shook the trap. Nothing came out. I shook harder, looked again, shook some more. And suddenly the mouse burst out of the plastic box and went HOP-HOP-HOPping across the grass away from us, bounding like a tiny kangaroo over tufts of grass, leaving us both giggling madly. The entertainment alone was worth the price of the traps.

Since then we've seen no evidence of mice inside, and neither of the traps has been sprung again. So our upstairs and downstairs mice must have been the same mouse.

But meanwhile, we still had a pack rat problem (actually, probably, white-throated woodrats, the creature that's called a pack rat locally). Finding no traps for sale at the hardware store, we went to Craigslist, where we found a retired wildlife biologist just down the road selling three live Havahart rat traps. (They also had some raccoon-sized traps, but the only raccoon we've seen has stayed out in the yard.)

We bought the traps, adjusted one a bit where its trigger mechanism was bent, baited them with peanut butter and set them in likely locations. About four days later, we had our first captive little brown furball. Much smaller than some of the woodrats we've seen; probably just a youngster.

[White throated woodrat bounding away] We drove quite a bit farther than we had for the mouse. Woodrats can apparently range over a fairly wide area, and we didn't want to let it go near houses. We hiked a little way out on a trail, put the trap down and opened both doors. The woodrat looked up, walked to one open end of the trap, decided that looked too scary; walked to the other open end, decided that looked too scary too; and retreated back to the middle of the trap.

We had to tilt and shake the trap a bit, but eventually the woodrat gathered up its courage, chose a side, darted out and HOP-HOP-HOPped away into the bunchgrass, just like the mouse had.

No reference I've found says anything about woodrats hopping, but the mouse did that too. I guess hopping is just what you do when you're a rodent suddenly set free.

I was only able to snap one picture before it disappeared. It's not in focus, but at least I managed to catch it with both hind legs off the ground.

Call to translators

We plan to release Stellarium 0.13.0 around July 20.

There are new strings to translate in this release because we have several new plugins and features, and a refactored GUI. If you can assist with translation into any of the 132 languages which Stellarium supports, please go to Launchpad Translations and help us out: https://translations.launchpad.net/stellarium

Thank you!

July 11, 2014

This Land Is Mine is yours

Due to horrific recent events, This Land Is Mine has gone viral again.

Here’s a reminder that you don’t need permission to copy, share, broadcast, post, embed, subtitle, etc. Copying is an act of love, please copy and share. Yes means yes.

As for the music, it is Fair Use: This Land Is Mine is a PARODY of “The Exodus Song.” That music was sort of the soundtrack of American Zionism in the 1960s and ’70s. It was supposed to express Jewish entitlement to Israel. By putting the song in the mouth of every warring party, I’m critiquing the original song.

 


July 09, 2014

Invert the colors of qcad3 icons

QCad is an open-source 2D CAD program that I've been rather fond of. It runs on Windows, Mac and Linux; its version 2 has been the base of LibreCAD, and version 3, which is a couple of months old already, is a huge evolution over version 2. Their developers have always struggled between the...

July 08, 2014

Big and contrasty mouse cursors

[Big mouse cursor from Comix theme] My new home office with the big picture windows and the light streaming in come with one downside: it's harder to see my screen.

A sensible person would, no doubt, keep the shades drawn when working, or move the office to a nice dim interior room without any windows. But I am not sensible and I love my view of the mountains, the gorge and the birds at the feeders. So accommodations must be made.

The biggest problem is finding the mouse cursor. When I first sit down at my machine, I move my mouse wildly around looking for any motion on the screen. But the default cursors, in X and in most windows, are little subtle black things. They don't show up at all. Sometimes it takes half a minute to figure out where the mouse pointer is.

(This wasn't helped by a recent bug in Debian Sid where the USB mouse would disappear entirely, and need to be unplugged from USB and plugged back in before the computer would see it. I never did find a solution to that, and for now I've downgraded from Sid to Debian testing to make my mouse work. I hope they fix the bug in Sid eventually, rather than porting whatever "improvement" caused the bug to more stable versions. Dealing with that bug trained me so that when I can't see the mouse cursor, I always wonder whether I'm just not seeing it, or whether it really isn't there because the kernel or X has lost track of the mouse again.)

What I really wanted was bigger mouse cursor icons in bright colors that are visible against any background. This is possible, but it isn't documented at all. I did manage to get much better cursors, though different windows use different systems.

So I wrote up what I learned. It ended up too long for a blog post, so I put it on a separate page: X Cursor Themes for big and contrasty mouse cursors.

It turned out to be fairly complicated. You can replace the existing cursor font, or install new cursor "themes" that many (but not all) apps will honor. You can change theme name and size (if you choose a scalable theme), and some apps will honor that. You have to specify theme and size separately for GTK apps versus other apps. I don't know what KDE/Qt apps do.

I still have a lot of unanswered questions. In particular, I was unable to specify a themed cursor for xterm windows, and for non text areas in emacs and firefox, and I'd love to know how to do that.

But at least for now, I have a great big contrasty blue mouse cursor that I can easily see, even when I have the shades on the big windows open and the light streaming in.

Important AppData milestone

Today we reached an important milestone. Over 25% of applications in Fedora now ship AppData files. The actual numbers look like this:

  • Applications with descriptions: 262/1037 (25.3%)
  • Applications with keywords: 112/1037 (10.8%)
  • Applications with screenshots: 235/1037 (22.7%)
  • Applications in GNOME with AppData: 91/134 (67.9%)
  • Applications in KDE with AppData: 5/67 (7.5%)
  • Applications in XFCE with AppData: 2/20 (10.0%)
  • Application addons with MetaInfo: 30

We’ve gone up a couple of percentage points in the last few weeks, mostly with the help of Ryan Lerch, who’s actually been writing AppData files and taking screenshots for upstream projects. He’s been concentrating on the developer tools for the last week or so, as this is one of the key groups of people we’re targeting for Fedora 21.

One of the things that AppData files allow us to do is be smarter about suggesting “Picks” on the overview page. For 3.10 and 3.12 we had a fairly short static list that we chose from at random. For 3.14 we’ve got a new algorithm that tries to find software similar to the apps you already have installed, and suggests that too. So if I have Anjuta and Devhelp installed, it might suggest D-Feet or Glade.

July 04, 2014

Detecting wildlife with a PIR sensor (or not)

[PIR sensor] In my last crittercam installment, the NoIR night-vision crittercam, I was having trouble with false positives, where the camera would trigger repeatedly after dawn as leaves moved in the wind and the morning shadows marched across the camera's field of view. I wondered if a passive infra-red (PIR) sensor would be the answer.

I got one, and the answer is: no. It was very easy to hook up, and didn't cost much, so it was a worthwhile experiment; but it gets nearly as many false positives as camera-based motion detection. It isn't as sensitive to wind, but as the ground and the foliage heat up at dawn, the moving shadows are just as much a problem as they were with image-based motion detection.

Still, I might be able to combine the two, so I figure it's worth writing up.

Reading inputs from the HC-SR501 PIR sensor

[PIR sensor pins]

The PIR sensor I chose was the common HC-SR501 module. It has three pins -- Vcc, ground, and signal -- and two potentiometer adjustments.

It's easy to hook up to a Raspberry Pi because it can take 5 volts in on its Vcc pin, but its signal is 3.3v (a digital signal -- either motion is detected or it isn't), so you don't have to fool with voltage dividers or other means to get a 5v signal down to the 3v the Pi can handle. I used GPIO pin 7 for signal, because it's right on the corner of the Pi's GPIO header and easy to find.

There are two ways to track a digital signal like this. Either you can poll the pin in an infinite loop:

import time
import RPi.GPIO as GPIO

pir_pin = 7
sleeptime = 1

GPIO.setmode(GPIO.BCM)
GPIO.setup(pir_pin, GPIO.IN)

while True:
    if GPIO.input(pir_pin):
        print "Motion detected!"
    time.sleep(sleeptime)

or you can use interrupts: tell the Pi to call a function whenever it sees a low-to-high transition on a pin:

import time
import RPi.GPIO as GPIO

pir_pin = 7
sleeptime = 300

def motion_detected(pir_pin):
    print "Motion Detected!"

GPIO.setmode(GPIO.BCM)
GPIO.setup(pir_pin, GPIO.IN)

GPIO.add_event_detect(pir_pin, GPIO.RISING, callback=motion_detected)

while True:
    print "Sleeping for %d sec" % sleeptime
    time.sleep(sleeptime)

Obviously the second method is more efficient. But I already had a loop set up checking the camera output and comparing it against previous output, so I tried that method first, adding support to my motion_detect.py script. I set up the camera pointing at the wall, and, as root, ran the script telling it to use a PIR sensor on pin 7, and the local and remote directories to store photos:

# python motion_detect.py -p 7 /tmp ~pi/shared/snapshots/
and whenever I walked in front of the camera, it triggered and took a photo. That was easy!

Reliability problems with add_event_detect

So easy that I decided to switch to the more efficient interrupt-driven model. Writing the code was easy, but I found it triggered more often: if I walked in front of the camera (and stayed the requisite 7 seconds or so that it takes raspistill to get around to taking a photo), when I walked back to my desk, I would find two photos, one showing my feet and the other showing nothing. It seemed like it was triggering when I got there, but also when I left the scene.

A bit of web searching indicates this is fairly common: that with RPi.GPIO a lot of people see triggers on both rising and falling edges -- e.g. when the PIR sensor starts seeing motion, and when it stops seeing motion and goes back to its neutral state -- when they've asked for just GPIO.RISING. Reports for this go back to 2011.

On the other hand, it's also possible that instead of seeing a GPIO falling edge, what was happening was that I was getting multiple calls to my function while I was standing there, even though the RPi hadn't finished processing the first image yet. To guard against that, I put a line at the beginning of my callback function that disabled further callbacks, then I re-enabled them at the end of the function after the Pi had finished copying the photo to the remote filesystem. That reduced the false triggers, but didn't eliminate them entirely.
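
The guard itself is simple, whether done with GPIO.remove_event_detect() or with a plain flag. Here is a sketch of the flag version; snap_and_copy_photo is a hypothetical stand-in for the script's real raspistill-and-copy work, not an actual function from it:

busy = False   # True while a trigger is still being handled

def motion_detected(pir_pin):
    global busy
    if busy:       # ignore triggers that arrive mid-processing
        return
    busy = True
    try:
        snap_and_copy_photo()   # hypothetical: raspistill + copy to remote
    finally:
        busy = False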

Oh, well. The sun was getting low by this point, so I stopped fiddling with the code and put the camera out in the yard with a pile of birdseed and peanut suet nuggets in front of it. I powered on, sshed to the Pi and ran the motion_detect script, came back inside and ran a tail -f on the output file.

I had dinner and worked on other things, occasionally checking the output -- nothing! Finally I sshed to the Pi and ran ps aux and discovered the script was no longer running.

I started it again, this time keeping my connection to the Pi active so I could see when the script died. Then I went outside to check the hardware. Most of the peanut suet nuggets were gone -- animals had definitely been by. I waved my hands in front of the camera a few times to make sure it got some triggers.

Came back inside -- to discover that Python had gotten a segmentation fault. It turns out that nifty GPIO.add_event_detect() code isn't all that reliable, and can cause Python to crash and dump core. I ran it a few more times and sure enough, it crashed pretty quickly every time. Apparently GPIO.add_event_detect needs a bit more debugging, and isn't safe to use in a program that has to run unattended.

Back to polling

Bummer! Fortunately, I had saved the polling version of my program, so I hastily copied that back to the Pi and started things up again. I triggered it a few times with my hand, and everything worked fine. In fact, it ran all night and through the morning, with no problems except the excessive number of false positives, already mentioned.

[piñon mouse] False positives weren't a problem at all during the night. I'm fairly sure the problem happens when the sun starts hitting the ground. Then there's a hot spot that marches along the ground, changing position in a way that's all too obvious to the infra-red sensor.

I may try cross-checking between the PIR sensor and image changes from the camera. But I'm not optimistic about that working: they both get the most false positives at the same times, at dawn and dusk when the shadow angle is changing rapidly. I suspect I'll have to find a smarter solution, doing some image processing on the images as well as cross-checking with the PIR sensor.

I've been uploading photos from my various tests here: Tests of the Raspberry Pi Night Vision Crittercam. And as always, the code is on github: scripts/motioncam with some basic documentation on my site: motion-detect.py: a motion sensitive camera for Raspberry Pi or other Linux machines. (I can't use github for the documentation because I can't seem to find a way to get github to display html as anything other than source code.)

July 02, 2014

Anaconda Crash Recovery

Whoah! Another anaconda post! Yes! You should know that the anaconda developers are working hard at fixing bugs, improving features, and adding enhancements all the time, blog posts about it or not. :)

Today Chris and I talked about how the UI might work for anaconda crash recovery. So here’s the thing: Anaconda is completely driven by kickstart. Every button, selection, or thing you type out in the UI gets translated into kickstart instructions in memory. So, why not save that kickstart out to disk when anaconda crashes? Then, any configuration and customization you’ve done would be saved. You could then load up anaconda afterwards with the kickstart and it would pre-fill in all of your work so you could continue where you left off!

However! Anaconda is a special environment, of course. We can’t just save to disk. I mean, okay, we could, but then we couldn’t use that disk as an install target after restarting the installer post-crash, because we’d have to mount it to read the kickstart file off of it! Eh. So it’s a bit complicated. Chris and I thought it’d be best to keep this simple (at least to start) and allow for saving the kickstart to an external disk to avoid these kinds of hairy issues.

Chris and I talked about how it would be cool if the crash screen could just say, “insert a USB disk if you’d like to save your progress,” and we could auto-detect when the disk was inserted, save, and report back to the user that we saved. However, blivet (the storage library used by anaconda) doesn’t yet have support for autodetecting devices. So what I thought we could do instead is have a “Save kickstart” button, and that button would kick off the process of searching for the new disk, reporting to the user if they still needed to insert one or if there was some issue with the disk. Finally, once the kickstart is saved out, it could report a status that it was successfully saved.

Another design consideration I talked over with bcl for a bit – it would be nice to keep this saving process as simple as possible. Can we avoid having a file chooser? Can we just save to the root of the inserted disk and leave it at that? That would save users a lot of mental effort.

The primary use case for this functionality is crash recovery. It crashes, we offer to save your work. One additional case is that you’re quitting the installer and want to save your work – this case is rarer, but maybe it would be worth offering to save during normal quit too.

So here are my first cuts at trying to mock something out here. Please fire away and poke holes!

So this is what you’d first see when anaconda crashes:
00-CrashDialog

You insert the disk and then you hit the “Save kickstart” button, and it tries to look for the disk:
01-MountingDisk

Success – it saved out without issue.
02A-Success

Oooopsie! You got too excited and hit “Save kickstart” without inserting the disk.
02B-NoDiskFound

Maybe your USB stick is bricked? Something went wrong. Maybe the file system’s messed up? Better try another stick:
02C-MountingError

Hope this makes sense. My Inkscape SVG source is available if you’d like to tweak or play around with this!

Comments / feedback / ideas welcomed in the comments or on the anaconda-devel list.

Blurry Screenshots in GNOME Software?

Are you a pixel perfect kind of maintainer? Frustrated by slight blurriness in screenshots when using GNOME Software?

If you have one screenshot, capture a PNG of size 752×423. If you have more than one screenshot, use a size of 624×351.

If you use any other 16:9 aspect ratio resolution, we’ll scale your screenshot when we display it. If you use some crazy non-16:9 aspect ratio, we’ll add padding and possibly scale it as well, which is going to look pretty bad. That said, any screenshot is better than no screenshot, so please don’t start removing <screenshot> tags.

June development results

Last month we worked on improving the user interface of Synfig, and now we are happy to share the results of our work...

July 01, 2014

KDE at RMLL 2014



In a few days the 15th Rencontres Mondiales du Logiciel Libre will begin in Montpellier, running from the 5th to the 11th of July. The event begins with a weekend for the general public at the “Village du Libre”, where we will have a KDE stand to show the cool software from our community.

Then for the whole week there will be conferences about several topics; the full schedule is here. I’ll have the pleasure of presenting a talk about recent news on free software for 2D animation on Thursday at 10:30 am, followed by a workshop on free creation with the Krita painting software from 2 to 5 pm.

Come say hello at the KDE stand or enjoy the conferences and workshop if you’re around!

On a side note, a little reminder about two crowdfunding campaigns:
-The Kickstarter to boost Krita development just reached its first goal today! We now have 9 days left to reach the next goal, which will allow us to hire Sven together with Dmitry for the next 6 months.

-The Randa Meeting 2014 campaign, this meeting will allow contributors from key KDE projects to gather and get even more productive than usual.

So think about helping those projects if you haven’t already ;)

Success!

With 518 backers and 15,157 euros, we've passed the target goal and we're 100% funded. That means that Dmitry can work on Krita for the next six months, adding a dozen hot new features and improvements to Krita. We're not done with the kickstarter, though, there are still eight days to go! And any extra funding will go straight into Krita development as well. If we reach the 30,000 euro level, we'll be able to fund Sven Langkamp as well, and that will double the number of features we can work on for Krita 2.9.


And then there's the super-stretch goal... We already have a basic package for OSX, but it needs some really heavy development. It currently only runs on OSX 10.9 Mavericks, Krita only sees 1GB of memory, there are OpenGL issues, GUI issues, missing dependencies, and missing brush engines. Lots of work to be done. But we've proven now that this goal is attainable, so please help us get there!

It would be really cool to be able to release the next version of Krita for Linux, Windows and OSX, wouldn't it :-)

 And now it's also possible to select your reward and use Paypal -- which Kickstarter still doesn't offer.

Reward Selection

June 30, 2014

WebODF v0.5.0 released: Highlights

Today, after a long period of hard work and preparation, having deemed the existing WebODF codebase stable enough for everyday use and for integration into other projects, we have tagged the v0.5.0 release and published an announcement on the project website.

Some of the features that this article will talk about have already made their way into various other projects a long time ago, most notably ownCloud Documents and ViewerJS. Such features will have been mentioned before in other posts, but this one talks about what is new since the last release.

The products that have been released as ‘supported’ are:

  • The WebODF library
  • A TextEditor component
  • Firefox extension

Just to recap, WebODF is a JavaScript library that lets you display and edit ODF files in the browser. There is no conversion of ODF to HTML. Since ODF is an XML-based format, you can directly render it in a browser, styled with CSS. This way, no information is lost in translation. Unlike other text editors, WebODF leaves your file structure completely intact.

The Editor Components

WebODF has had an Editor application for a long time. Until now this was not a feature ‘supported’ for the general public, but was simply available in the master branch of the git repo. We worked over the months with ownCloud to understand how such an editor would be integrated within a larger product, and then, based on our own experimentation for a couple of awesome new to-be-announced products, designed an API for it.

As a result, the new “Wodo” Editor Components are a family of APIs that let you embed an editor into your own application. The demo editor is a reference implementation that uses the Wodo.TextEditor component.

There are two major components in WebODF right now:

  1. Wodo.TextEditor provides for straightforward local-user text editing, by providing methods for opening and saving documents. The example implementation runs 100% client-side: you can open a local file directly in the editor without uploading it anywhere, edit it, and save it right back to the filesystem. No extra permissions required.
  2. Wodo.CollabTextEditor lets you specify a session backend that communicates with a server and relays operations. If your application wants collaborative editing, you would use this Editor API. The use-cases and implementation details being significantly more complex than the Wodo.TextEditor component, this is not a ‘supported’ part of the v0.5.0 release, but will, I’m sure, be in the next release(s) very soon. We are still figuring out the best possible API it could provide, while not tying it to any specific flavor of backend. There is a collabeditor example in WebODF master, which can work with an ownCloud-like HTTP request polling backend.

These provide options to configure the editor to switch on/off certain features.

Of course, we wholeheartedly recommend that people play with both components, build great things, and give us lots of feedback and/or Pull Requests. :)

New features

Notable new features that WebODF now has include:

  • SVG Selections. It is impossible to have multiple selections in the same window in most modern browsers. This is an important requirement for collaborative editing, i.e., the ability to see other people’s selections in their respective authorship colors. For this, we had to implement our own text selection mechanism, without totally relying on browser-provided APIs.
    Selections are now smartly computed using dimensions of elements in a given text range, and are drawn as SVG polygon overlays, affording numerous ways to style them using CSS, including in author colors. :)
  • Touch support:
    • Pinch-to-zoom was a feature requested by ownCloud, and is now implemented in WebODF. This was fairly non-trivial to do, considering that no help from touch browsers’ native pinch/zoom/pan implementations could be taken because that would only operate on the whole window. With this release, the document canvas will transform with your pinch events.
    • Another important highlight is the implementation of touch selections, necessitated by the fact that native touch selections provided by the mobile versions of Safari, Firefox, and Chrome all behave differently and do not work well enough for tasks which require precision, like document editing. This is activated by long-pressing with a finger on a word, following which the word gets a selection with draggable handles at each end.
Touch selections

Drawing a selection on an iPad

  • More collaborative features. We added OT (Operation Transformations) for more new editing operations, and filled in all the gaps in the current OT Matrix. This means that previously there were some cases when certain pairs of simultaneous edits by different clients would lead to unpredictable outcomes and/or invalid convergence. This is now fixed, and all enabled operations transform correctly against each other (verified by lots of new unit tests). Newly enabled editing features in collaborative mode now include paragraph alignment and indent/outdent.

  • Input Method Editor (IME). Thanks to the persistent efforts of peitschie of QSR International, WebODF got IME support. Since WebODF text editing does not use any native text fields with the assistance of the browser, but listens for keystrokes and converts them into operations, it was necessary to implement IME support in JavaScript using Composition Events. This means that you can now do this:

Chinese - Pinyin (IBUS)

Chinese – Pinyin (IBUS)

and type in your own language (IBUS is great at transliteration!)

Typing in Hindi

Typing in Hindi

  • Benchmarking. Again thanks to peitschie, WebODF now has benchmarks for various important/frequent edit types. benchmark

  • Edit Controllers. Unlike the previous release, when the editor had to specifically generate various operations to perform edits, WebODF now provides certain classes called Controllers. A Controller provides methods to perform certain kinds of edit ‘actions’ that may be decomposed into a sequence of smaller ‘atomic’ collaborative operations. For example, the TextController interface provides a removeCurrentSelection method. If the selection spans several paragraphs, this method will decompose the edit into a complex sequence of 3 kinds of operations: RemoveText, MergeParagraph, and SetParagraphStyle. Describing larger edits with smaller operations is a great design, because then you only have to write OT for very simple operations, and complex edit actions all collaboratively resolve themselves to the same state on each client. The added benefit is that users of the library have a simpler API to deal with.

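As a rough illustration of that decomposition (hypothetical names and data shapes, not WebODF’s real classes):

    # Hypothetical sketch of a controller decomposing one high-level edit
    # into atomic operations. WebODF's real TextController is richer.
    from collections import namedtuple

    # start: position of the selection; length: characters to remove;
    # first/last_paragraph: paragraph indices spanned by the selection.
    Selection = namedtuple("Selection",
                           "start length first_paragraph last_paragraph")

    def remove_selection_ops(sel, first_paragraph_style):
        ops = [("RemoveText", sel.start, sel.length)]
        # every paragraph boundary inside the selection becomes one merge
        for _ in range(sel.last_paragraph - sel.first_paragraph):
            ops.append(("MergeParagraph", sel.first_paragraph))
        # the surviving paragraph keeps the first paragraph's style
        ops.append(("SetParagraphStyle", sel.first_paragraph,
                    first_paragraph_style))
        return ops

    # Deleting a selection spanning paragraphs 2..4 yields four atomic ops:
    print(remove_selection_ops(Selection(120, 85, 2, 4), "Standard"))
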
On that note…

We now have some very powerful operations available in WebODF. As a consequence, it should now be possible for new developers to rapidly implement new editing features, because the most significant OT infrastructure is already in place. Adding support for text/background coloring, subscript/superscript, etc. should simply be a matter of writing the relevant toolbar widgets. :) I expect to see some rapid growth in user-facing features from this point onwards.

A Qt Editor

Thanks to the new Components and Controllers APIs, it is now possible to write native editor applications that embed WebODF as a canvas and provide the editor UI as native Qt widgets. And work on this has started! The NLnet Foundation has funded work on just such an editor that works with Blink, an amazing open-source SIP communication client that is cross-platform and provides video/audio conferencing and chat.

To that end, Arjen Hiemstra at KO has started work on a native editor using Qt widgets that embeds WebODF and works with Blink! Operations will be relayed over XMPP.

Teaser:
blink-prototype

Other future tasks include:

  • Migrating the editor from Dojo widgets to the Closure Library, to allow more flexibility with styling and integration into larger applications.
  • Image manipulation operations.
  • OT for annotations and hyperlinks.
  • A split-screen collaborative editing demo for easy testing.
  • Pagination support.
  • Operations to manipulate tables.
  • Liberating users from Google’s claws cloud. :)

If you like a challenge and would like to make a difference, have a go at WebODF. :)


Krita Kickstarter

I know that I primarily write about photography here, but sometimes something comes along that’s too important to pass up talking about.

Krita just happens to be one of those things. Krita is a digital painting and sketching software by artists for artists. While I love GIMP and have seen some incredible work by talented artists using it for painting and sketching, sometimes it’s better to use a dedicated tool for the job. This is where Krita really shines.

The reason I’m writing about Krita today is that they are looking for community support to accelerate development through their Kickstarter campaign.


That is where you come in. It doesn’t take much to make a difference in great software, and every little bit helps. If you can skip a fancy coffee, pastry, or one drink while out this month, consider using the money you saved to help a great project instead!

There are only 9 days left in their Kickstarter, and they are less than €800 from hitting their goal of €15,000!


Metamorphosis by Enrico Guarnieri

Of course, the team makes it hard to keep up with them. They seem to be rapidly implementing goals in their Kickstarter before they even get funding. For instance, their “super-stretch” goal was to get an OSX implementation of Krita running. Then this shows up in my feed this morning. A prototype already!

I am in constant awe at the talent and results from digital artists, and this is a great tool to help them produce amazing works. As a photographer I am deeply indebted to those who helped support GIMP development over the years, and if we all pull together maybe we can be the ones whom future Krita users thank for helping them get access to a great program...

Skip a few fancy coffee drinks, possibly inspire a future artist? Count me in!


Krita Kickstarter


Still here? Ok, how about some cool video tutorials by Ramón Miranda to help support the campaign?

If you still need more, check out his YouTube channel.

Ok, that's enough out of me.

Go Donate!
Krita Kickstarter

Last week in Krita — weeks 25 & 26

These last two weeks have been very exciting, with the Kickstarter campaign getting closer and closer to its pledge goal. At the time of writing we have just crossed €13k! And with the wave of new users, drawn in by the great word-spreading work of collaborators and enthusiasts, we have been very busy adding new functions and building beta versions for you.

And now there's also the first public build for OSX:

http://www.valdyas.org/~boud/krita_osx_2.8.79.0.tgz

It is just a prototype, with stuff missing and rather detailed instructions on getting it to run... But if you've always wanted to have Krita on OSX, this is your chance to help us make it happen!

Before getting into the hot new stuff in the code, I can’t go without mentioning Ramon Miranda’s useful videos. Aiming to spread knowledge of Krita’s features and capabilities as painting software among people hearing about it for the first time, he has created a series of video tips: short introductions to many functions and fundamentals. Even for the initiated these are a good resource; I wasn’t aware of some of the functions they depict. You can find all the tips and info in the Kickstarter post and on Ramon’s YouTube channel.

Week 25 & 26 progress

Amongst the notable changes and developments, we can cite Boudewijn’s efforts to create a build environment that should eventually allow producing an alpha version for OSX users. Still experimental, the current steps show steady progress: it is now possible to open the program and do some basic painting. Of course this is far from being a distributable version, but if we remember the humble beginnings of the Windows port, this is a great sign. Go Krita!

In other news, Somsubhra, developer of the Krita Animation spin, has added, aside from many bug fixes and tweaks, a first rough animation player. I wanted to make a short video for you, but the build is still very fragile and on my system it crashed after creating a document. You can see the player in action in a video made by Somsubhra:

https://www.youtube.com/watch?v=VEHJ-JIunII

This week’s new features:

  • Implemented “Delayed Stroke” feature for brush smoothing. (Dmitry Kazakov)
  • Edit Selection Mask. (Dmitry Kazakov)
  • Add import/export for r8 and r16 heightmaps, extensions .r8 and .r16. (Boudewijn Rempt)
  • Add ability to zoom and sync for resource item choosers (Ctrl + Wheel). (Sven Langkamp)
  • Brush stabilizer. (Juan Luis Boya García)
  • Allow activation of the Isolated Mode with Alt+click on a layer. (Dmitry Kazakov)

This week’s main bug fixes

  • Make ABR brush loading code more robust. (Boudewijn Rempt)
  • FIX #319279: Drop the full brush image after loading it to save memory. (Boudewijn Rempt)
  • Enable the vector shape in Krita. This makes it possible to show embedded SVG images. (Boudewijn Rempt)
  • CCBUG #333451: Add basic svg support to the vector shape. (Boudewijn Rempt)
  • FIX #335041: Fix crash when installing a bundle. (Boudewijn Rempt)
  • FIX #33592: Fix saving the lock status. (Boudewijn Rempt)
  • Fix crash when trying to paint in scratchpad. (Dmitry Kazakov)
  • Don’t crash if deleting the last layer. (Boudewijn Rempt)
  • FIX #336470: Fix Lens Blur filter artifacts when used as an Adjustment Layer. (Dmitry Kazakov)
  • FIX #334538: Fix anisotropy in Color Smudge brush engine. (Dmitry Kazakov)
  • FIX #336478: Fix convert of clone layers into paint layers and selection masks. (Dmitry Kazakov)
  • CCBUG #285420: Multilayer selection: implement drag & drop of multiple layers. (Boudewijn Rempt)
  • FIX #336476: Fix edge duplication in Clone tool. (Dmitry Kazakov)
  • FIX #336473: Fixed crash when color picking from a group layer. (Dmitry Kazakov)
  • FIX #336115: Fixed painting and color picking on global selections. (Dmitry Kazakov)
  • Fixed moving of the global selection with Move Tool. (Dmitry Kazakov)
  • CCBUG #285420: Add an action to merge the selected layers. (Boudewijn Rempt)
  • CCBUG #285420: Layerbox multi-selection. Make it possible to delete multiple layers. (Boudewijn Rempt)
  • FIX #336804: (Boudewijn Rempt)
  • FIX #336803: (Boudewijn Rempt)
  • FIX #330479: Fix memory leak in KoLcmsColorTransformation. (Boudewijn Rempt)
  • And many code optimizations, memory leak patching, spelling and translation updates and other fixes.

Delay stroke and brush stabilizer

A new way of creating smooth, controlled lines. The new Stabilizer smoothing mode uses both the distance and the speed of the stroke. It has 3 important options, described as follows (a toy sketch of the dead-zone behavior appears after the list):

  • Distance: the shorter the distance, the weaker the stabilization force.
  • Delay: when activated, this adds a halo around the cursor. The area is a “dead zone”: no stroke is made while the cursor is inside it. Very useful when you need to create a controlled line with explicit angles in it. The pixel value defines the size of the halo.
  • Finish line: if switched off, line rendering stops at the spot the brush had reached when the pen was lifted. Otherwise, it draws the missing gap between the current brush position and the cursor’s last location.
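
Here is a toy sketch of the dead-zone idea (my own reading of the behavior, not Krita’s actual code): the brush stays put while the cursor is inside the halo, and is pulled toward it once it leaves.

    # Toy dead-zone stabilizer sketch -- not Krita's implementation.
    import math

    def stabilize(brush, cursor, radius=20.0, smoothing=0.5):
        dx, dy = cursor[0] - brush[0], cursor[1] - brush[1]
        dist = math.hypot(dx, dy)
        if dist <= radius:  # inside the dead zone: no stroke is made
            return brush
        pull = smoothing * (dist - radius) / dist
        return (brush[0] + dx * pull, brush[1] + dy * pull)

    pos = (0.0, 0.0)
    for cursor in [(5, 0), (30, 0), (60, 5)]:  # simulated cursor motion
        pos = stabilize(pos, cursor)
        print(pos)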

Multiselection

Developers have been working to reimplement support for working on multiple layers. It is now possible to select more than one layer for reorganizing the stack, and for merge and delete actions. After selecting multiple layers you can:

  • Drag and drop them from one location to another
  • Drag and drop layers into a group
  • Click the erase layer button to remove all selected layers
  • Go to “Layer -> Merge selected layers” to merge.

This first implementation allows a much faster workflow when dealing with many layers. However, it is still necessary to use groups to perform some actions, like transform, on multiple layers.

NEW: Layer -> Merge selected layers

Edit selection mask (global selection)

To activate it, go to the Selection menu and turn on “Show Global Selection Mask”.

When activated, all global selections appear in the layer stack just as local selections do. You can deactivate the selection, hide it, or edit it using any available tool, like transform, brushes or filters.

At the moment it is not possible to preview the effect of every tool on the selection, but you can convert the selection to a paint layer to make finer adjustments.

NEW: Selection -> Show Global Selection Mask

Alt + click isolated layer

Added a new action to toggle isolated layer mode.

NEW: [Alt] + [Click] over a layer in the layer docker.

This action instantly shows only the selected layer, hiding all others. It returns to normal mode once another layer is selected, but while isolated layer mode is on it is possible to paint, transform and adjust while visualizing only the isolated layer.

June 26, 2014

A Raspberry Pi Night Vision Camera

[Mouse caught on IR camera]

When I built my Raspberry Pi motion camera (http://shallowsky.com/blog/hardware/raspberry-pi-motion-camera.html, and part 2), I always had the NoIR camera in the back of my mind. The NoIR is a version of the Pi camera module with the infra-red blocking filter removed, so you can shoot IR photos at night without disturbing nocturnal wildlife (or alerting nocturnal burglars, if that's your target).

After I got the daylight version of the camera working, I ordered a NoIR camera module and plugged it into my RPi. I snapped some daylight photos with raspistill and verified that it was connected and working; then I waited for nightfall.

In the dark, I set up the camera and put my cup of hot chocolate in front of it. Nothing. I hadn't realized that although CCD cameras are sensitive in the near IR, the wavelengths only slightly longer than visible light, they aren't sensitive anywhere near the IR wavelengths that hot objects emit. For that, you need a special thermal camera. For a near-IR CCD camera like the Pi NoIR, you need an IR light source.

Knowing nothing about IR light sources, I did a search and came up with something called an "Infrared IR 12 Led Illuminator Board Plate for CCTV Security CCD Camera" for about $5. It seemed similar to the light sources used on a few pages I'd found about home-made night vision cameras, so I ordered it. Then I waited, because I stupidly didn't notice until a week and a half later that it was coming from China and wouldn't arrive for three weeks. Always check the shipping time when ordering hardware!

When it finally arrived, it had a tiny 2-pin connector that I couldn't match locally. In the end I bought a package of female-female SchmartBoard jumpers at Radio Shack, which were small enough to make decent contact on the light's tiny-gauge power and ground pins. I soldered up a connector that would let me use a universal power supply, taking a guess that it wanted 12 volts (most of the cheap LED rings for CCD cameras seem to be 12V, though this one came with no documentation at all). I was ready to test.

Testing the IR light

[IR light and NoIR Pi camera]

One problem with buying a cheap IR light with no documentation: how do you tell whether your power supply is working, when the light it produces is completely invisible?

The only way to find out was to check on the Pi. I didn't want to have to run back and forth between the dark room where the camera was set up and the desktop where I was viewing raspistill images. So I started a video stream on the RPi:

$ raspivid -o - -t 9999999 -w 800 -h 600 | cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/}' :demux=h264

Then, on the desktop, I ran vlc and opened the network stream:
rtsp://pi:8554/
(I have a "pi" entry in /etc/hosts, but using an IP address also works.)

Now I could fiddle with hardware in the dark room while looking through the doorway at the video output on my monitor.

It took some fiddling to get a good connection on that tiny connector ... but eventually I got a black-and-white view of my darkened room, just as I'd expect under IR illumination. I poked some holes in the milk carton and used twist-ties to secure the light source next to the NoIR camera.

Lights, camera, action

Next problem: mute all the blinkenlights, so my camera wouldn't look like a Christmas tree and scare off the nocturnal critters.

The Pi itself has a relatively dim red run light, and it's inside the milk carton, so I wasn't too worried about it. But the Pi camera has quite a bright red light that goes on whenever the camera is being used. Even through the thick milk carton bottom, it was glaring and obvious. Fortunately, you can disable the Pi camera light: edit /boot/config.txt and add this line:

disable_camera_led=1

My USB wi-fi dongle has a blue light that flickers as it gets traffic. Not super bright, but attention-grabbing. I addressed that issue with a triple thickness of duct tape.

The IR LEDs -- remember those invisible, impossible-to-test LEDs? Well, it turns out that in darkness, they emit a faint but still easily visible glow. Obviously there's nothing I can do about that -- I can't cover the camera's only light source! But it's quite dim, so with any luck it's not spooking away too many animals.

Results, and problems

For most of my daytime testing I'd used a threshold of 30 -- meaning a pixel was considered to have changed if its value differed by more than 30 from the previous photo. That didn't work at all in IR: changes are much more subtle, since we're seeing what is essentially a black-and-white image, and I had to divide the threshold by three, down to a sensitivity of 10 or 11, before the camera would trigger at all. (A sketch of this kind of pixel-difference check appears below.)
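
For the curious, here's a minimal sketch of that kind of threshold test using Pillow; it's illustrative only, not the actual script driving my camera, and the trigger count of 300 pixels is a made-up value.

    # Minimal sketch of threshold-based change detection with Pillow.
    from PIL import Image, ImageChops

    def changed_pixels(prev_path, cur_path, threshold=10):
        prev = Image.open(prev_path).convert("L")  # grayscale
        cur = Image.open(cur_path).convert("L")
        diff = ImageChops.difference(prev, cur)    # per-pixel |prev - cur|
        hist = diff.histogram()                    # counts for values 0..255
        return sum(hist[threshold + 1:])           # pixels that changed enough

    # Low thresholds suit the low-contrast IR imagery; the pixel-count
    # cutoff here is arbitrary and would need tuning.
    if changed_pixels("prev.jpg", "cur.jpg", threshold=10) > 300:
        print("motion detected -- save this frame")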

With that change, I did capture some nocturnal visitors, and some early morning ones too. Note the funny colors on the daylight shots: that's why cameras generally have IR-blocking filters if they're not specifically intended for night shots.

[mouse] [rabbit] [rock squirrel] [house finch]

Here are more photos, and larger versions of those: Images from my night-vision camera tests.

But I'm not happy with the setup. For one thing, it has far too many false positives. Maybe one out of ten or fifteen images actually has an animal in it; the rest just triggered because the wind made the leaves blow, or because a shadow moved or the color of the light changed. A simple count of differing pixels is clearly not enough for this task.

Of course, the software could be smarter about things: it could try to identify large blobs that had changed, rather than small changes (blowing leaves) all over the image. I already know SimpleCV runs fine on the Raspberry Pi, and I could try using it to do object detection.
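
A rough sketch of what that might look like with SimpleCV (untested guesswork; the binarize threshold and minimum blob size are guesses and the polarity may need an .invert()):

    # Rough SimpleCV sketch: react to one large changed region instead
    # of counting scattered changed pixels all over the frame.
    from SimpleCV import Image

    prev = Image("prev.jpg")
    cur = Image("cur.jpg")
    diff = (cur - prev).binarize(40)     # isolate the changed areas
    blobs = diff.findBlobs(minsize=500)  # ignore small flutter like leaves
    if blobs:
        print("largest changed region:", blobs[-1].area())  # sorted by area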

But there's another problem with detection purely through camera images: the Pi is incredibly slow to capture an image. It takes around 20 seconds per cycle; some of that is waiting for the network but I think most of it is the Pi talking to the camera. With quick-moving animals, the animal may well be gone by the time the system has noticed a change. I've caught several images of animal tails disappearing out of the frame, including a quail who visited yesterday morning. Adding smarts like SimpleCV will only make that problem worse.

So I'm going to try another solution: hooking up an infra-red motion detector. I'm already working on setting up tests for that, and should have a report soon. Meanwhile, pure image-based motion detection has been an interesting experiment.

June 25, 2014

Firewalls and per-network sharing

Firewalls

Fedora has had problems for a long while with the default firewall rules. They would make a lot of things not work (media and file sharing of various sorts, usually, whether as a client or a server) and users would usually disable the firewall altogether, or work around it through micro-management of opened ports.

We went through multiple discussions over the years trying to break the security folks' resolve on what should be allowed to be exposed on the local network (sometimes trying to get rid of the firewall). Or rather we tried to agree on a setup that would be implementable for desktop developers and usable for users, while still providing the amount of security and dependability that the security folks wanted.

The last round of discussions was more productive, and I posted the end plan on the Fedora Desktop mailing-list.

By Fedora 21, Fedora will have a firewall that's completely open for the user's applications (with better tracking of what applications do what once we have application sandboxing). This reflects how the firewall was used on the systems that the Fedora Workstation version targets. System services will still be blocked by default, except a select few such as ssh or mDNS, which might need some tightening.

But this change means that you'd be sharing your music through DLNA on the café's Wi-Fi, right? Well, this is what the next change is here to avoid.

Per-network Sharing

To avoid showing your music in the café, or exposing your holiday photographs at work, we needed a way to restrict sharing to the wireless networks where you'd already shared this data, and to provide a way to avoid sharing in the future, should you change your mind.

Allan Day mocked up such controls in our Sharing panel, which I diligently implemented. Personal File Sharing (through gnome-user-share and WebDAV), Media Sharing (through rygel and DLNA) and Screen Sharing (through vino and VNC) all implement the same per-network sharing mechanism.

Make sure that your versions of gnome-settings-daemon (which implements the starting and stopping of services based on the network) and gnome-control-center match for this all to work. You'll also need the latest versions of all 3 of the aforementioned sharing utilities.

(and it also works with wired network profiles :)
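
Conceptually, the mechanism boils down to each service remembering which network connections it was enabled on. A sketch of the idea only, not the actual gnome-settings-daemon code:

    # Conceptual sketch of per-network sharing (not the real GNOME code):
    # a service runs only on connections where the user enabled it.
    allowed_networks = {
        "media-sharing": {"home-wifi", "office-wired"},
        "screen-sharing": {"home-wifi"},
    }

    def should_run(service, current_connection_id):
        return current_connection_id in allowed_networks.get(service, set())

    print(should_run("media-sharing", "cafe-wifi"))  # False: stays off
    print(should_run("media-sharing", "home-wifi"))  # True: starts up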



June 24, 2014

Fedora.next Branding Update

So we’ve gone through a lot of iterations of the Fedora.next logo design based on your feedback; here’s the full list of designs and mockups that Ryan, Sarup, and I have posted:

That’s a lot of work, a lot of feedback, and a lot of iteration. The dust has settled over the past 2 weeks, and judging from the feedback we’ve gotten, I think there is a pretty clear winner that we should move forward with:

Let’s walk through some points about this here:

  • F/G and H should, I think, both be valid logo treatments. F/G is good for contexts in which it’s clear we’re talking about Fedora (e.g., a Fedora website with a big Fedora logo in the corner), and H is good for contexts in which we need to promote Fedora as well (e.g., a conference T-shirt with other distro logos on it).
  • Single-color versions of F/G & H are of course completely fine to use as well.
  • F/G are exactly the same except the texture underneath is shifted and scaled a bit. I think it should be okay to play with the texture and change it up. We can talk about this, though.
  • Feedback seemed a bit divided about the cloud mark – it was about 50/50, folks liking it full height on all three bars vs. liking it with some of the bars shorter so it looked like a stylized cloud. I think we should go with the full-height version since it’s a stronger mark (it’s bolder and stands out more) and these are clearly all abstract marks, anyway.
  • Several folks suggested trying to replace the circles in version H with the Fedora speech bubble. I did play around with this, and Ryan and I both agreed that the speech bubble shape complicates things – it makes the marks inside look off-center when they are centered, and it also creates some awkward situations when the entire logo has to interact with other elements on a page or screen, so we thought it’d be better to keep things simple and stick with a simpler shape like a circle.
  • We’ll definitely build some official textures using the pattern in F/G and make them available so you can use them! Ryan has a very cool Inkscape technique for creating these so I’m still hoping to make a screencast showing how to do it.
  • Did I forget a particular point you brought up and would like some more discussion about? Let me know.

We’ll definitely need some logo usage guidelines written up, and we’ll have to create a supplemental logo pack that can be dispensed via the logo@fedoraproject.org logo queue. Those things aren’t quite ready yet – if you want to help with that, let us know on the Fedora design team list or here in the comments.

Anyway, thanks for watching and participating in this process. It’s always a lot of fun to work on designs in the open with everyone like this :)