January 28, 2015

Detecting fake flash

I’ve been using F3 to check my flash drives, and this is how I discovered my drives were counterfeit. It seems to me this kind of feature needs to be built inside gnome-multi-writer itself to avoid sending fake flash out to customers. Last night I wrote a simple tool called gnome-multi-writer-probe which does the following few things:

* Reads the existing data from the drive in 32kB chunks every 32MB-ish into RAM
* Writes random blocks of 32kB every 32MB-ish, and also stores them in RAM
* Resets the drive
* Reads all the 32kB blocks from slightly different addresses and sizes and compares them to the random data in RAM
* Writes all the saved data back to the drive.

It only takes a few seconds on most drives. It also tries to be paranoid, and saves the data back to the drive as best it can when it encounters an error. That said, please don’t use this tool on any drives that have important data on them; assume you’ll have to reformat them after using this tool. Also, it’s probably a really good idea to unmount any drives before you try this.
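
For anyone curious what that looks like in code, here is a minimal Python sketch of the same idea. It is not the real gnome-multi-writer-probe: the chunk sizes are illustrative, it skips the drive reset step, and it does not defeat the kernel page cache (the real tool re-reads from slightly different addresses partly for that reason), so treat it purely as an outline.

#!/usr/bin/python
# Minimal sketch of the probe idea above -- NOT the real
# gnome-multi-writer-probe. It skips the drive reset and does not defeat
# kernel caching, so treat it as an illustration only.
import os
import sys

CHUNK = 32 * 1024            # 32kB test blocks
STRIDE = 32 * 1024 * 1024    # one test block every ~32MB

def probe(device):
    fd = os.open(device, os.O_RDWR | os.O_SYNC)
    size = os.lseek(fd, 0, os.SEEK_END)
    saved = {}       # offset -> original data, so we can restore it
    expected = {}    # offset -> the random data we wrote
    try:
        # save the existing data, then overwrite it with random blocks
        for offset in range(0, size - CHUNK, STRIDE):
            os.lseek(fd, offset, os.SEEK_SET)
            saved[offset] = os.read(fd, CHUNK)
            block = os.urandom(CHUNK)
            os.lseek(fd, offset, os.SEEK_SET)
            os.write(fd, block)
            expected[offset] = block
        # read everything back and compare against what we wrote
        for offset, block in sorted(expected.items()):
            os.lseek(fd, offset, os.SEEK_SET)
            if os.read(fd, CHUNK) != block:
                print "Possible fake flash: mismatch at offset %i" % offset
    finally:
        # put the original data back as best we can
        for offset, block in sorted(saved.items()):
            os.lseek(fd, offset, os.SEEK_SET)
            os.write(fd, block)
        os.close(fd)

if __name__ == '__main__':
    probe(sys.argv[1])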

If you’ve got access to gnome-multi-writer from git (either from jhbuild, or from my repo) then please could you try this:

sudo gnome-multi-writer-probe /dev/sdX

Where sdX is the USB drive you want to test. I’d be interested in the output, and especially interested if you have any fake flash media you can test this with. Either leave a comment here, grab me on IRC or send me an email. Thanks.

January 27, 2015

Tue 2015/Jan/27

  • An inlaid GNOME logo, part 2

    This part in Spanish

    To continue with yesterday's piece — the amargoso board that I glued is now dry, and it is time to flatten it. We use a straightedge to see how bad it is on the "good" side.

    Not flat

    We use a jack plane with a cambered blade. There is a slight curvature to the edge; this lets us remove wood quickly. We plane across the grain to remove the cupping of the board. I put some shavings in strategic spots between the board and the workbench to keep the board from rocking around, as its bottom is not flat yet.

    Cambered iron Cross-grain planing

    We use winding sticks at the ends of the board to test if the wood is twisted. Sight almost level across them, and if they look parallel, then the wood is not twisted. Otherwise, plane away the high spots.

    Winding sticks Not twisted

    This gives us a flat board with scalloped tracks. We use a smoothing plane to remove the tracks, planing along the grain. This finally gives us a perfectly flat, smooth surface. This will be our reference face.

    Scalloped board Smoothing plane Smooth, flat surface

    On that last picture, you'll see that both halves of the board are not of the same thickness, and we need to even them up. We set a marking gauge to the thinnest part of the boards. Mark all four sides, using the flat side as the reference face, so we have a line around the board at a constant distance to the reference face.

    Gauging the thinnest part Marking all around Marked all around

    Again, plane the board flat across the grain with a jack plane and its cambered iron. When you reach the gauged line, you are done. Use a smoothing plane along the grain to make the surface pretty. Now we have a perfectly flat board of uniform thickness.

    Thicknessing with the jack plane Smoothing plane Flat and uniform board

    Now we go back to the light-colored maple board from yesterday. First I finished flattening the reference face. Then, I used the marking gauge to put a line all around at about 5mm from the reference face. This will be our slice of maple for the inlaid GNOME logo.

    Marking the maple board

    We have to resaw the board in order to extract that slice. I took my coarsest ripsaw and started a bit away from the line at a corner, being careful to sight down the saw to make it coplanar with the lines on two edges. It is useful to clamp the board at about 45 degrees from level.

    Starting to resaw at a corner

    Once the saw is into the corner, tilt it down gradually to lengthen the kerf...

    Kerfing one side

    Tilt it gradually the other way to make the kerf on the other edge...

    Kerfing the other side

    And now you can really begin to saw powerfully, since the kerfs will guide the saw.

    Resawing

    Gradually extend the cut until you reach the other corner, and repeat the process on all four sides.

    Extending the cut Resawing

    Admire your handiwork; wipe away the sweat.

    Resawn slice

    Plane to the line and leave a smooth surface. Since the board is too thin to hold down with the normal planing stops on the workbench, I used a couple of nails as planing stops to keep the board from sliding forward.

    Nail as planing stop

    Now we can see the contrast between the woods. The next step is to glue templates on each board, and start cutting.

    Contrast between woods

Scammers at promo-newa.com

tl;dr Don’t use promo-newa.com, they are scammers that sell fake flash.

Longer version: For the ColorHug project we buy a lot of the custom parts direct from China at a fraction of the price available to us in the UK, even with import tax considered. It would be impossible to produce such a low cost device and still make enough money to make it worth giving up our evenings and weekends. This often means sending thousands of dollars to sketchy-looking companies willing to take on small (to them!) custom orders of a few thousand parts.

So far we’ve been very lucky, until last week. I ordered 1000 customized 1GB flash drives to use as a LiveUSB image rather than using a LiveCD. I checked out the company as usual, and ordered a sample. The sample came back good quality, with 1GB of fast flash. Payment in full was sent, which isn’t unusual for my other suppliers in China.

Fast forward a few weeks. The 1000 USB drives arrived, and they looked great. Great, until you start using them with GNOME MultiWriter, which kept throwing validation warnings. After running the awesome F3 and a few remove-insert cycles, the f3probe tool told me the flash chip was fake, reporting the capacity to be 1GB when it was actually 96MB looped around 10 times.

Taking the drives apart you could also see the chip itself was different from the sample, and the plastic molding and metal retaining tray were of lower quality. I contacted the seller, who said he would speak to the factory later that day. The seller got back to me today, and told me that the factory had produced “B quality drives” and basically, that I got what I paid for. For another 1600 USD they would send me the 1GB ICs, which I would have to switch into the USB units. Fool me once, shame on you; fool me twice, shame on me.

I suppose people can use the tiny flash drives to get the .icc profile off the LiveCD image, which was always a stumbling block for some people, but basically the drives are worthless to me as LiveUSB devices. I’m still undecided whether to include them in the ColorHug box; i.e. is a free 96MB drive better than them all going into landfill?

As this is China, I understand all my money is gone. The company listing is gone from Alibaba, so there’s not a lot I can do there. So that other people can hopefully avoid the same mistake, I’ve listed all the details here, which will hopefully become googleable:

Promo-Newa Electronic Limited(Shenzhen)
Wei and Ping Group Limited(Hongkong)  

Office: Building A, HuaQiang Garden, North HuaQiang Road, Futian district, Shenzhen China, 0755-3631 4600
Factory: Building 4, DengXinKeng Industrial Zone, JiHua Road,LongGang District, Shenzhen, China
Registered Address: 15/B—15/F Cheuk Nang Plaza 250 Hennessy Road, HongKong
Email: sales@promo-newa.com
Skype: promonewa

January 26, 2015

Mon 2015/Jan/26

  • An inlaid GNOME logo, part 1

    This part in Spanish

    I am making a special little piece. It will be an inlaid GNOME logo, made of light-colored wood on a dark-colored background.

    First, we need to make a board wide enough. Here I'm looking for which two sections of those longer/narrower boards to use.

    Grain matching pieces

    Once I am happy with the sections to use — similar grain, not too many flaws — I cross-cut them to length.

    Cross cutting

    (Yes, working in one's pajamas is fantastic and I thoroughly recommend it.)

    This is a local wood which the sawmill people call "amargoso", or bitter one. And indeed — the sawdust feels bitter in your nose.

    Once cut, we have two pieces of approximately the same length and width. They have matching grain in a V shape down the middle, which is what I want for the shape of this piece.

    V shaped grain match

    We clamp the pieces together and match-plane them. Once we open them like a book, there should be no gaps between them and we can glue them.

    Clamped pieces Match-planing Match-planed pieces

    No light shows between the boards, so there are no gaps! On to gluing. Rub both boards back and forth to spread the glue evenly. Clamp them, and wait overnight.

    No gaps! Gluing boards Clamped boards

    Meanwhile, we can prepare the wood for the inlaid pieces. I used a piece of soft maple, which is of course pretty hard — unlike hard maple, which would be too goddamn hard.

    Rough maple board

    This little board is not flat. Plane it cross-wise and check for flatness.

    Checking for flatness Planing

    Tomorrow I'll finish flattening this face of the maple, and I'll resaw a thinner slice for the inlay.

    Planed board

January 23, 2015

Sister-Doctor concept

A quick concept sketch of Sister-Doctor character by Anastasia Majzhegisheva

Scientific and Technical Academy Award for the development of Bullet Physics!

The Academy of Motion Picture Arts and Sciences today announced that 21 scientific and technical achievements represented by 58 individual award recipients will be honored at its annual Scientific and Technical Awards Presentation on Saturday, February 7, at the Beverly Wilshire in Beverly Hills.

“To Erwin Coumans for the development of the Bullet physics library, and to Nafees Bin Zafar and Stephen Marshall for the separate development of two large-scale destruction simulation systems based on Bullet.

These pioneering systems demonstrated that large numbers of constrained rigid bodies could be used to animate visually complex, believable destruction effects with minimal simulation time.”

Thanks to all Bullet contributors and users!
See https://www.oscars.org/news/21-scientific-and-technical-achievements-be-honored-academy-awardsr

January 21, 2015

Plugable USB Hubs

Joshua from Plugable sent me 4 different USB hubs this week so they could be added as quirks to gnome-multi-writer. If you’re going to be writing lots of USB drives, the Plugable USB3 hubs now work really well. I’ve got a feeling that inserting and removing the drive is going to be slower than the actual writing and verifying now…

Moving update information from the distribution to upstream

I’ve been talking to various people about the update descriptions we show to the user. Without exception, the messages we show to end users are really bad. For example, the overly-complex-but-not-actually-useful:

Screenshot from 2015-01-21 10:56:34

Or, the even more-to-the-point:

Update to 3.15.4

I’m guilty of both myself. Why is this? Typically this text is written by an over-worked and under-paid packager doing updates to many applications and packages. Sometimes the packager might be the upstream maintainer, or at least involved in the project, but many times it’s just some random person who got fingered to maintain a particular package. That doesn’t make them the ideal person to write beautiful prose that thousands of end users are going to read. It also doesn’t make sense to write the same beautiful prose again and again for every distribution out there.

So, what do we need? We need a person who probably wrote the code, or at least signed it off, who cares about the project and cares about the user experience. i.e. the upstream maintainer.

What I’m proposing is that we ask upstream maintainers to write the release information in a way that can be shown in the software center. NEWS files are not standardized, and you don’t typically have a NEWS file for each application in your upstream tarball, so we need something else.

Surprise surprise, it’s AppStream to the rescue. AppStream has a <release> object that fits the bill almost completely; you can put in upstream version information and long (optionally translated) formatted descriptions.

Of course, you don’t want to write both NEWS and the various appdata files at release time, as that just increases the workload of the overly-busy upstream maintainer. In this case we can use appstream-util appdata-to-news in the buildsystem and generate the former from the latter automatically. We’re outputting markdown for the NEWS file, which seems to be a fairly good approximation of what NEWS files actually look like, at least for GNOME.
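
To make the conversion a little more concrete, here is a rough Python sketch of the appdata-to-NEWS direction. This is not how appstream-util implements it; it just assumes an appdata file whose <releases> element contains <release> children with version and date attributes and a <description> made of <p> and <li> elements.

#!/usr/bin/python
# Rough illustration of generating a NEWS-style file from appdata -- not
# the actual appstream-util appdata-to-news implementation.
import sys
import xml.etree.ElementTree as ET

def appdata_to_news(filename):
    root = ET.parse(filename).getroot()
    for release in root.findall('.//releases/release'):
        version = release.get('version', 'unknown')
        date = release.get('date', 'unknown')
        print "Version %s" % version
        print "~~~~~~~~~~~~~"
        print "Released: %s\n" % date
        desc = release.find('description')
        if desc is None:
            continue
        # paragraphs become plain text, list items become markdown bullets
        for para in desc.findall('p'):
            print (para.text or '').strip()
        for item in desc.findall('.//li'):
            print " * %s" % (item.text or '').strip()
        print

if __name__ == '__main__':
    appdata_to_news(sys.argv[1])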

For a real-world example, see the GNOME MultiWriter example commit that uses this.

There are several problems with this approach. One is that the translators might have to translate lots more text; the obvious solution to that seems to be to only mark strings for translation in stable versions. Alas, projects like GNOME don’t allow any new strings in stable versions, so we’ll either have to come up with an “except for release notes” amendment to that, or just say that all the release notes are only ever available in the C locale.

The huge thing to take away from this blog, if you are intending to use this new feature, is that update descriptions have to be understandable by end users. “Various bug fixes” is not helpful, but “Fixes a crash when adding a joystick” is. End users neither care about nor understand “Use libgusb rather than libusbx”, and technical details that do not affect the UI or UX of the application should be omitted.

This doesn’t work for distribution releases, e.g. 3.14.1-1 to 3.14.1-2, but typically these are not huge changes that we need to show so prominently to the user.

I’m also writing a news-to-appdata.py script, so if anyone wants to take the plunge on an existing project it might be good to wait for that unless you like lots of copy and pasting.

Comments, as always, welcome.

January 20, 2015

Stellarium 0.13.2 and 0.12.5 have been released!

After three months of development, the Stellarium development team is proud to announce the second bugfix release of Stellarium in the 0.13.x series: version 0.13.2. This version closes over 70 bugs and implements some wishes and nice new features, like visualization of the zodiacal light and new sky cultures.

We also announce a new release in the 0.12.x series, version 0.12.5, for which we have backported some features from the 0.13.x series.

A huge thanks to our community whose contributions help to make Stellarium better!

Full list of changes: https://launchpad.net/stellarium/+milestone/0.13.2

Searching for Morevna image

Nikolai Mamashev undertakes his own search for the image of the Morevna character. Here’s the first result!

Morevna by Nikolai Mamashev

January 19, 2015

Morevna Child (coloured)

Some time ago we published the “Morevna Child” artwork by Anastasia Majzhegisheva. Now it’s time for the coloured version!

Morevna Child by Anastasia Majzhegisheva

January 18, 2015

Another stick figure in peril

One of my favorite categories of funny sign: "Stick figures in peril". This one was on one of those automated gates, where you type in a code and it rolls aside, and on the way out it automatically senses your car.

[Moving gate can cause serious injury or death]

January 17, 2015

The Gorilla and the Gibbon

As a Krita developer, I'm not too happy comparing Krita to Photoshop. In fact, I have been known to scream loudly, start rolling my eyes and in general gibber like a lunatic whenever someone reports a bug with as its sole rationale "Photoshop does it like this, Krita must, too!".

But when we published the news that a group at University Paris 8 had replaced Photoshop with Krita, that comparison became inevitable, even though it is just one group that used Photoshop for a specific purpose, one that Krita can fill as well. The news got picked up rather widely and even brought the krita.org webserver to its knees for a moment. The discussion on Hacker News got interesting when people started claiming that Krita, like Gimp, was missing so much stuff, like 16 bit/channel, adjustment layers and so on -- even things that we've had since 2004 or 2005.

So, where are we, what's our position? In the first place, with Adobe having about a hundred developers on Photoshop, they can spend thousands of hours a week on developing Photoshop. We're lucky to get a hundred hours a week on Krita. Of course, we're awesome, but that's a huge disparity. We simply cannot duplicate all the features in Photoshop: even if we'd want to, there are not enough developer hours available. And even if there were, there's the pesky problem of incomplete file format specifications. That means choices have to be made.

We develop Krita for a specific purpose: for people to create artwork. Comics, illustrations, matte paintings, concept-art, textures. Anything that's not relevant for those purposes isn't relevant for Krita. No website design, no wedding album editing, no 3D printing, no embedded email client.

But anything that artists need for their work is relevant. And I think we're doing pretty well in that regard. Krita is an efficient tool that people find fun to use.

If that's the purpose you use Photoshop for, give Krita a try. If you use Photoshop for something else, don't bother. Or, well, you can give Krita a try, but don't be surprised if Krita is missing stuff.

Sometimes, we will do a direct clone of a Photoshop feature: the layer styles dialog Dmitry and I are working on is an example. People want that feature, even if we've got filter layers and transformation masks already. It's going to take about two to three hundred hours just to do all the typing, let alone the actual thinking about how to fit the algorithms into Krita's image composition code.

But mostly, cloning another application is a bad idea. You will always be running behind, because you cannot clone what hasn’t been released yet, and unless you have more hours per week available than the clonee, you won’t ever have time for introducing unique features that make your project more interesting than the clonee -- like Krita’s wrap-around mode. Or the OpenGL canvas, which we had in 2005, and which Photoshop now also has. (This Nvidia page on how OpenGL makes Photoshop more responsive could have been written for Krita. The only thing we miss is embedding a 3D model in your painting, and we’ve already had two Summer of Code students attempt just that.)

So, what's our roadmap for 2015, if it isn't "be Photoshop unto all people"?

These are the big issues that we need to spend serious time on:

  • Port to Qt 5 and KDE Frameworks 5. In many respects, a waste of time, since it won’t bring us anything of actual use to our users, but it has to be done.
  • Implement a performance optimization called Levels of Detail. This will make Krita work much faster with bigger images, at the expense of actually doing the pixel mangling several times.
  • Animation. Somsubhra's animation timeline is a great start, but it's not ready for end users. We had hoped for a big donation kickstarting this development, but that doesn't seem likely to materialize.
  • OSX. We've got an experimental OSX port, but it's buggy and broken and missing features. (No HDR painting, OpenGL is broken, the popup palette doesn't show in OpenGL mode, memory handling is broken -- and a thousand smaller issues.
  • Python. Krita has been scriptable in the past. First through KJS, in the 1.x days, then through Kross (which meant javascript, ruby, python). Neither scripting interface exposed all of Krita, or even the right parts. You could create filters in Python, but automating workflow was much harder. There's a new prototype Python scripting plugin, modelled after Kate's Python plugin that would make a good start.

To make this possible, we simply have to add hours per week to Krita. Which means starting on the next fundraiser, publishing Krita in more app stores, selling more DVDs, and getting more people to join the Development Fund!

January 16, 2015

Surviving winter as a motorsports fan.

Winter is that time of the year when nothing happens in the motorsport world (one exception: Dakar). Here are a few recommendations to help you through the agonizing wait:

Formula One

Start out with It Is What It Is, the autobiography of David Coulthard. It only goes until the end of 2007, but nevertheless it’s a fascinating read: rarely do you hear a sportsman speak with such openness. A good and honest insight into the mind of a sportsman and definitely not the politically correct version you’ll see on the BBC.

It Is What It Is

Next up: The Mechanic’s Tale: Life in the Pit-Lanes of Formula One by Steve Matchett, a former Benetton F1 mechanic. This covers the other side of the team: the mechanics and the engineers.

The Mechanic's Tale: Life in the Pit-Lanes of Formula One

Still feel like reading? Dive into the books of Sid Watkins, who deserves huge amounts of credit for transforming a very deadly sport into something surprisingly safe (or as he likes to point out: riding a horse is much more dangerous).

He wrote two books:

Both describe the efforts on improving safety and are filled with anecdotes.

And finally, if you prefer movies, two more recommendations. Rush, an epic story about the rivalry between Niki Lauda and James Hunt. Even my girlfriend enjoyed it and she has zero interest in motorsports.

Rush

And finally Senna, the documentary about Ayrton Senna, probably the most mythical Formula One driver of all time.

Rush

Le Mans

On to that other legend: The 24 hours of Le Mans.

I cannot recommend the book Le Mans by Koen Vergeer enough. It’s beautiful, it captures the atmosphere brilliantly and seamlessly mixes it with the history of this event.

But you’ll have to go the extra mile for it: it’s in Dutch, it’s out of print and it’s getting exceedingly rare to find.

Le Mans

Nothing is lost if you can’t get hold of it. There’s also the 1971 movie with Steve McQueen: Le Mans.

It’s everything that modern racing movies are not: there’s no CG here, barely any dialog and the story is agonizingly slow if you compare it to the average Hollywood blockbuster.

But that’s the beauty of it: in this movie the talking is done by the engines. Probably the last great racing movie that featured only real cars and real driving.

Le Mans

Motorcycles

Motorcycles aren’t really my thing (not enough wheels), but I have always been in awe of the street racing that happens during the Isle of Man TT. Probably one of the craziest races in the world.

Riding Man by Mark Gardiner documents the experiences of a reporter who decides to participate in the TT.

Riding Man

And to finish, the brilliant documentary TT3D: Closer to the Edge gives a good insight into the minds of these drivers.

It seems to be available online. If nothing else, I recommend you watch the first two minutes: the onboard shots of the bike accelerating on the first straight are downright terrifying.

TT3D: Closer to the Edge

Rounding up

By the time you’ve read/seen all of the above, it should finally be spring again. I hope you enjoyed this list. Any suggestions about things that would belong in this list are greatly appreciated, send them over!

January 15, 2015

gnome-battery-bench

One thing we want to do for the next versions of GNOME and Fedora is to improve battery performance. Your laptop may well be advertised by the manufacturer to have “up to 10 hours of battery life” or some such claim. You probably don’t get anywhere near this.

Let’s put out some rough numbers here to give an overall sense of scale for the problem. For a modern ultrabook:

  • The battery is 50 watt-hours (Wh) – it can power a load of 50W for an hour or a load of 5W for 10 hours.
  • The baseline idle consumption of the system – RAM refresh, the power consumption of peripherals in power-saving mode, etc. – is 5W.
  • The screen and keyboard backlights, if both turned on to 100%, draw 5W.
  • The CPU/GPU can sustain about 15W – this is a thermal limit, so it can draw more for short bursts, but over time it will be throttled to an average.
  • All other peripherals (Wifi, bluetooth, touchpad, etc.) can use about 5W of power combined when not in power-saving mode.

So the power draw of the system can range from about 5W (the manufacturer’s 10 hours) to 30W (1 hour 40 minutes). If you have such an ultrabook, how is it doing right now? I’d guess it’s using about 15W unless you pay a lot of attention to power usage. Some of the things that might be going wrong:

  • Your keyboard/screen backlights are likely higher than is needed.
  • Some of the devices on your system don’t have power-saving turned on properly.
  • You likely have some background CPU activity (webpage ads, for example).

Of course, if you are running a compilation in the background, you want your CPU to be using that full 15W of power to get it done as soon as possible – in this case, your battery isn’t going to last very long. But a lot of usage is closer to idle.

Measuring power usage

I’ve made assertions above about power used by different things. But how did I actually measure that? powertop is the state of the art for measuring power usage and tweaking it on Linux. It will show you a figure for current battery discharge rate, but it bounces around by several watts; partly because powertop’s own data collection loads the system. The effect of a kernel option is usually much smaller than that. One of the larger effects I discovered on my laptop was that turning on USB autosuspend for the touchscreen saves about 150mW. When you tweak a tunable in powertop, without a way to measure power usage more accurately, it’s hard to know whether any observed differences are real.

To support figuring out what is going on with power, I wrote gnome-battery-bench. What it does is pretty simple – it plays back recorded sequences of events in a loop and monitors battery charge to estimate power usage. Because battery usage is being averaged over minutes, a lot of the noise is averaged out. And the event sequences can be changed to exercise different usage patterns by the user.
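
The averaging itself is simple enough to sketch. The snippet below is only an illustration of that step, not gnome-battery-bench itself, and it assumes a battery that exposes energy_now (in µWh) under /sys/class/power_supply/BAT0; some machines report charge_now in µAh instead.

#!/usr/bin/python
# Illustration of the charge-averaging idea only -- not gnome-battery-bench.
# Assumes the battery exposes energy_now (uWh) at this sysfs path.
import time

ENERGY_NOW = '/sys/class/power_supply/BAT0/energy_now'

def read_energy_wh():
    with open(ENERGY_NOW) as f:
        return int(f.read()) / 1e6    # uWh -> Wh

def average_power(minutes=10):
    start_energy = read_energy_wh()
    start_time = time.time()
    time.sleep(minutes * 60)          # run the workload during this window
    used_wh = start_energy - read_energy_wh()
    hours = (time.time() - start_time) / 3600.0
    return used_wh / hours            # average discharge rate in watts

if __name__ == '__main__':
    watts = average_power()
    print "Average power draw: %.2f W" % watts
    print "Estimated life on a 50Wh battery: %.1f hours" % (50.0 / watts)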

gnome-battery-bench

The above screenshot shows gnome-battery-bench running a “Light Duty” benchmark that combines scrolling around in a Wikipedia page and typing in gedit. Instantaneous usage bounces around a lot from the activity and from random noise, but after a few cycles the averaged power and estimated battery lifetime converge. The corresponding idle power usage is about 5.5W, so we then know that we’re using about 2.9W for the activity.

gnome-battery-bench is designed as a graphical application because I want to encourage people to explore with it and find out interactively what is using power on their system. And graphing is also useful so that the user can see when something is going wrong with the measurement; sometimes batteries will report data that jumps around. But there’s also a command line version that can be used for automatic scripting of benchmarks.

I decided to use recorded sequences of events for a couple of reasons: first, it’s easy for anybody to create new test sequences – you just run the gnome-battery-bench command line tool in record mode and do what you want to test. Second, playing back event sequences at a low level simulates user interaction very accurately. There is little CPU overhead, and as far as the desktop is concerned it’s exactly like user input. Of course, not everything can be easily tested by simply playing back canned sequences, but our goal here isn’t to test everything, just to be able to test some things that are reasonably representative.

The gnome-battery-bench README file describes in more detail how it works and how to install it on your system.

Next steps

gnome-battery-bench is basically usable as is. The main remaining thing to do with it is to spend some time designing and recording a couple of sequences that better reflect actual usage. The current tests I checked in are basically just placeholders.

On the operating system, we need to make sure that we are actually shipping with as many power-saving options on for peripherals as can be supported. In particular, “SATA link power management” makes a several-watt difference.

Backlight management is another place we can make improvements. Some problems are simply bad defaults. If ambient light sensors are present on the system, using them can be a big win. If not, simply using appropriate defaults is already an improvement.

Beyond that, in GNOME, we can optimize application and system code for efficiency and to not do things unnecessarily often in the background. Eventually I’d like to figure out a way to have power consumption also tracked by perf.gnome.org so we can see how code changes affect our power consumption and avoid regressions.


Guy-In-Black concept

A coloured concept of Guy-In-Black character, made by Nikolai Mamashev.

2015-01-10-mib

Koschei the Deathless – Artwork #2

Following Nikolai’s progress, Anastasia Majzhegisheva takes up the torch and presents her artwork of the Koschei character.

2014-12-28-Koshei-2-4

2014-12-28-Koshei-3-4

January 14, 2015

Fedora Design Team FAD this weekend!

Design Team FAD Logo

Starting this Friday through the weekend, we’re having the very first Fedora Design Team FAD here at Red Hat’s Boston-area office. A number of design team members are going to come together for two and a half days and plan the basic roadmap for the design team over the next year or two, as well as more hands-on tasks that could involve cleaning out our ticket queue and maybe even working on wallpaper ideas for Fedora 22. :)

Join Us Virtually!

We want to allow remote participants (yes, even you :) ) to join us, so we will have an OpenTokRTC video stream as well as a Google On Air Hangout for each day of the event. We will also be in #fedora-design on irc.freenode.net, and we’ll have a shared notepad on piratepad.

Video Stream Links

OpenTokRTC

OpenTokRTC is an open source project for webrtc video conferencing; opentokrtc.com is the demo site set up by TokBox, the project’s sponsor. If you have issues with this feed, please jump to the appropriate Google Hangout.

Google Hangouts

Google Hangouts are unfortunately not open source, but we have set these up as “On Air” hangouts so you do not need to be logged into Google to view them nor should you need to install Google’s plugin to view them.

Other Resources

Chat + Notes
  • #fedora-design on irc.freenode.net – this is the official chat channel for the event.
  • Design FAD piratepad – we’ll take notes as the event progresses here; for example, as we make decisions we’ll track them here.

What are We Working on, When?

We’ll flesh out the fine details of the schedule during the first hour or so of the event; I will try to update the session titles on FedoCal to reflect that as we hash it out. (Likely, it will be documented on the piratepad first.)

Schedule
  • Schedule on FedoCal (note that FedoCal has built-in timezone conversion so you can view in your local timezone :) )

 

See you there! :)

Arbitrary contour quadrangulation

Hi

Quadrangulating an arbitrary contour as "evenly" as possible is by no means an easy task; many algorithms exist, and entire theses have been written about it.

Recently I have been developing such a tool, which can be used to improve realtime retopology.

I have tested it with arbitrary contours, even ill-shaped ones, and generally it performs quite well; for patch-like contours used in automatic retopo it really excels :)

Cheers

ret bestofall bestofallcube good jgy quad1 quads3 quads4 quads6 quads8 qube5


Morevna wallpapers by Anastasia Majzhegisheva

Anastasia Majzhegisheva brings a coloured version of her recent Morevna artwork. Enjoy!

2015-01-11-Morevna-3

2015-01-11-Morevna-2

The artwork is painted completely in Krita. Here are a few WIP screenshots.

2011-01-11-2 2011-01-11-3 2011-01-11-4

Koschei The Deathless – Artwork #1

After having some fun with concept images, Nikolai Mamashev presents the sample lineart of Koschei The Deathless.

2014-12-12-sketch-med

 

January 13, 2015

A little fun.

So it has been a long time since I published anything Oxygen-KDE related. I have been taking some time off from the extreme amount of responsibility/work that Oxygen/KDE was. It was for the best, and it's great fun seeing Breeze develop its own little magic. They are just great.
Plus it gives me the time to reinvent my design language and skill set in a vastly different design world from what we had just a few years ago.
I might start something new for fun that is a bit more structured; things are starting to make sense to me and, more importantly, it's fun again.

Anyway, a picture in every post, right? So here goes a new take on a wallpaper I did a few years ago, quadros2.

Concepts of Koschei the Deathless by Nikolai Mamashev

Over the last few weeks Nikolai Mamashev has been intensively researching the main villain of the Morevna series – Koschei the Deathless. Here we would like to share some concepts made by Nikolai.

2014-10-15-1 2014-10-01-4k 2014-10-15-h 2014-12-15-k-v1 2014-10-15-1L 2014-12-08-k3 2014-12-10-k4 2014-12-11-k8 2014-10-15-gr

 

January 12, 2015

Fanart by Anastasia Majzhegisheva – 18

Morevna sketch by Anastasia Majzhegisheva

January 09, 2015

Finding hidden applications with GNOME Software

When you do a search in GNOME Software it returns results for any application that has AppStream metadata and a package name it can resolve in any remote repository. This works really well for software you’re installing from the main distribution repos, but less well for some other common cases.

Let’s say I want to install Google Chrome so that my 2-year-old daughter can ring me on Hangouts and tell me that dinner is ready. Let’s search for Chrome on my Fedora Rawhide system.

Screenshot from 2015-01-09 16:37:45

Whoa! Wait, how did you do that? First, this exists in /etc/yum.repos.d/google-chrome.repo — the important line being enabled_metadata=1. This means “download just the metadata even when enabled=0” and means we can get information about what packages are available in repos we are not enabling by default for legal or policy reasons.

[google-chrome]
name=google-chrome
baseurl=http://dl.google.com/linux/chrome/rpm/stable/x86_64
enabled=0
gpgcheck=1
repo_gpgcheck=1
enabled_metadata=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub

We’ve also got a little XML document with the AppStream metadata (just the long description and keywords) called /usr/share/app-info/xmls/google-chrome.xml which could be included in the usual vendor-supplied fedora-22.xml if that’s what we want to do.

Screenshot from 2015-01-09 16:40:09

The other awesome feature this unlocks is when we have addon repos that are not enabled by default. For instance, my utopia repo full of super new free software applications could be included in Fedora, and if the user types in the search word we ask if the repo should be enabled. This would solve a lot of use cases if we could ship .repo files for a few popular COPRs with stuff we don’t (yet) ship in Fedora but that is otherwise free and open source software.

Screenshot from 2015-01-09 16:51:00

All the components to do this are upstream in Fedora 22 (you need a new librepo, libhif, PackageKit, libappstream-glib and gnome-software, phew!) although I’m sure we’ll be tweaking the UI and UX before Fedora 22 is released. Comments welcome.

 

GNOME MultiWriter 3.15.2

I’ve just released GNOME MultiWriter 3.15.2, which is the first release that makes it pretty much feature complete for me.

Reads and writes are now spread over root hubs to increase throughput. If you’ve got a hub with more than 7 ports and the port numbers don’t match the decals on the device, please contact me for more instructions.
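
As a rough illustration of what “spread over root hubs” means, the sketch below groups devices by the root hub component of their USB port path and interleaves them, so that parallel writes land on different host controllers where possible. The device list and port paths are invented, and this is not MultiWriter’s actual scheduler.

# Illustration only of the "spread over root hubs" idea, not MultiWriter's code.
from collections import defaultdict

def spread_over_hubs(devices):
    """devices: list of (usb_port_path, block_device) tuples, where the
    first component of the port path (e.g. '2' in '2-1.4') is the root hub."""
    by_hub = defaultdict(list)
    for port, dev in devices:
        by_hub[port.split('-')[0]].append(dev)
    # take one device from each root hub in turn
    queues = [devs for hub, devs in sorted(by_hub.items())]
    order = []
    while any(queues):
        for queue in queues:
            if queue:
                order.append(queue.pop(0))
    return order

devices = [('1-1.1', '/dev/sdb'), ('1-1.2', '/dev/sdc'),
           ('2-1.1', '/dev/sdd'), ('2-1.2', '/dev/sde')]
print spread_over_hubs(devices)   # ['/dev/sdb', '/dev/sdd', '/dev/sdc', '/dev/sde']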

In this release I’ve also added the ability to completely wipe a drive (write the image, then NULs to pad it out to the size of the media) and made that and the verification step optional. We also now show a warning dialog to the user the very first time the application is used, and some global progress in the title bar so you can see the total read and write throughput of the application.
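
The wipe option works roughly like the following outline: stream the image onto the device, then pad the remainder of the media with NUL bytes. Again, this is only a sketch of the idea with placeholder paths, not the MultiWriter implementation.

# Outline of "write the image, then NULs to pad it out" -- not MultiWriter's code.
def write_and_pad(image_path, device_path, block=1024 * 1024):
    with open(image_path, 'rb') as image, open(device_path, 'r+b') as device:
        # stream the ISO image onto the device
        written = 0
        while True:
            chunk = image.read(block)
            if not chunk:
                break
            device.write(chunk)
            written += len(chunk)
        # find the size of the media, then pad the rest with NUL bytes
        device.seek(0, 2)
        size = device.tell()
        device.seek(written)
        while written < size:
            pad = min(block, size - written)
            device.write('\0' * pad)
            written += pad

# e.g. write_and_pad('Fedora-Live.iso', '/dev/sdX')   # placeholder paths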

With this release I’ve now moved the source to git.gnome.org and will do future releases to ftp.gnome.org like all the other GNOME modules. If you see something obviously broken and you have GNOME commit access, please just jump in and fix it. The translators have done a wonderful job using Transifex, but now I’m letting the just-as-awesome GNOME translation teams handle localisation.

If you’ve got a few minutes, and want to try it out, you can clone the git repo or install a package for Fedora.

Richard

January 08, 2015

Accessing image metadata: storing tags inside the image file

A Slashdot discussion on image tagging and organization a while back got me thinking about putting image tags inside each image, in its metadata.

Currently, I use my MetaPho image tagger to update a file named Tags in the same directory as the images I'm tagging. Then I have a script called fotogr that searches for combinations of tags in these Tags files.

That works fine. But I have occasionally wondered if I should also be saving tags inside the images themselves, in case I ever want compatibility with other programs. I decided I should at least figure out how that would work, in case I want to add it to MetaPho.

I thought it would be simple -- add some sort of key in the image's EXIF tags. But no -- EXIF has no provision for tags or keywords. But JPEG (and some other formats) supports lots of tags besides EXIF. Was it one of the XMP tags?

Web searching only increased my confusion; it seems that there is no standard for this, but there have been lots of pseudo-standards over the years. It's not clear what tag most programs read, but my impression is that the most common is the "Keywords" IPTC tag.

Okay. So how would I read or change that from a Python program?

Lots of Python libraries can read EXIF tags, including Python's own PIL library -- I even wrote a few years ago about reading EXIF from PIL. But writing it is another story.

Nearly everybody points to pyexiv2, a fairly mature library that even has a well-written pyexiv2 tutorial. Great! The only problem with it is that the pyexiv2 front page has a big red Deprecation warning saying that it's being replaced by GExiv2. With a link that goes to a nonexistent page; and Debian doesn't seem to have a package for GExiv2, nor could I find a tutorial on it anywhere.

Sigh. I have to say that pyexiv2 sounds like a much better bet for now even if it is supposedly deprecated.

Following the tutorial, I was able to whip up a little proof of concept that can look for an IPTC Keywords tag in an existing image, print out its value, add new tags to it and write it back to the file.

import sys
import pyexiv2

if len(sys.argv) < 2:
    print "Usage:", sys.argv[0], "imagename.jpg [tag ...]"
    sys.exit(1)

# Read the existing metadata (EXIF, IPTC and XMP) from the image file.
metadata = pyexiv2.ImageMetadata(sys.argv[1])
metadata.read()

newkeywords = sys.argv[2:]

keyword_tag = 'Iptc.Application2.Keywords'
if keyword_tag in metadata.iptc_keys:
    # The image already has keywords: print them, then append the new ones.
    tag = metadata[keyword_tag]
    oldkeywords = tag.value
    print "Existing keywords:", oldkeywords
    if not newkeywords:
        sys.exit(0)
    for newkey in newkeywords:
        oldkeywords.append(newkey)
    tag.value = oldkeywords
else:
    # No keywords yet, so create the IPTC tag from scratch.
    print "No IPTC keywords set yet"
    if not newkeywords:
        sys.exit(0)
    metadata[keyword_tag] = pyexiv2.IptcTag(keyword_tag, newkeywords)

tag = metadata[keyword_tag]
print "New keywords:", tag.value

# Write the updated metadata back to the image file.
metadata.write()

Does that mean I'm immediately adding it to MetaPho? No. To be honest, I'm not sure I care very much, since I don't have any other software that uses that IPTC field and no other MetaPho user has ever asked for it. But it's nice to know that if I ever have a reason to add it, I can.

SVG Working Group Meeting Report — Santa Clara (TPAC)

This post got delayed due to work on ‘units’ for the 0.91 Inkscape release followed by the holidays.

The SVG Working Group had a two day meeting in Santa Clara as part of TPAC (the yearly meeting of all W3C working groups) at the end of October. This is an occasion to meet in person with other groups who have some shared interests in your group’s work. I would like to thank the Inkscape board for partially funding my attendance and W3C for waiving the conference fee.

Here are some highlights of the meeting:

Day 1, Morning

Minutes

The morning session was divided into two parts: the first part was an SVG only meeting while the second part was a joint meeting with the Task Force for Accessibility.

  • SVG blending when embedded via <img>:

    This is probably not a really interesting topic to readers of this blog, other than that it can give one a flavor of the types of discussions that go on inside the SVG working group. We spent considerable time debating whether elements inside an SVG that is included in a web page via the HTML <img> tag should blend with elements outside the SVG (other than following the simple “painters model” where transparency is allowed). Recall that in SVG 2 (and CSS) it is possible to select blend modes using the ‘mix-blend-mode’ CSS property (see my blog post about blending). So the question becomes: should objects like a rectangle (inside the SVG referenced by an <img> element) with a ‘mix-blend-mode’ value of say ‘screen’ blend with an image in the HTML page behind? We finally concluded that an author would expect an external SVG to be isolated and not blend with other objects in the HTML page.

  • Accessibility:

    The Accessibility Task Force asked to meet with us to discuss accessibility issues in graphics. Work has begun on SVG2 Accessibility API Mappings. An example of how accessibility can work with graphics can be found in a Surfin’ Safari blog post.

Day 1, Afternoon

Minutes

The afternoon session was a joint meeting with the CSS working group.

  • Text Decoration

    CSS has expanded the possibilities of how text is decorated (underlines, over-lines, etc.) by adding three new properties in CSS Text Decorations Module Level 3. The new properties ‘text-decoration-line’ and ‘text-decoration-style’ are easy to adopt into SVG (and in fact are already read and rendered by Inkscape 0.91). The new property ‘text-decoration-color’ is more problematic. SVG has long supported separate ‘fill’ and ‘stroke’ properties on text, which also apply to text decoration. By careful nesting of <tspan>’s one can have a different underline color from the text color. Furthermore, SVG allows various paints to be applied to the text decoration, like a gradient or pattern fill. The ‘text-decoration-color’ property allows the color of the text decoration to be set directly, without the need for nested <tspan>’s, so it is a quite attractive idea, but how to support the richness found in SVG?

    I proposed a number of solutions (see my presentation). The CSS group agreed that my favorite solution, adding ‘text-decoration-fill’ and ‘text-decoration-stroke’, was the proper way to move forward. (BTW, the CSS working group would like to eventually allow fill and stroke on HTML text.)

  • Fitting Text in a Box

    We’ve had numerous requests for the ability to adjust the size of text to fit it inside a given box (note, this is not the same as wrapping text into a shape). SVG has the attribute ‘textLength’ which allows a renderer to adjust the spacing or glyph width to match text to a given length. It was intended to allow renderers to adjust the length of a given text string to account for differences in font metrics if the specified font wasn’t available; it was never intended to be an overall solution to fitting text inside a box, and in fact the SVG 2 spec currently warns against using it in this way. I received a proposal from another Inkscape developer on expanding ‘textLength’ to be more useful in fitting text in a box. It seems to me that finding a solution to this problem would be of more general interest than just for SVG so I added this topic to the SVG/CSS agenda. I prepared a presentation to provide a starting point for the discussion.

    We had quite a lengthy discussion. The consensus seemed to be that CSS could use a set of simple knobs to make small adjustments to text, mostly for the purpose of labels. This would satisfy most use cases. Large adjustments could (should?) be the domain of script libraries. It was decided to solicit more feedback from users.

  • Image Rendering

    CSS Images 3 has co-opted the SVG ‘image-rendering’ property and redefined it to specify what about an image is important to preserve when scaling, as compared to a speed/accuracy trade off as in SVG 1.1. I prepared a short report on a couple of issues I found. The first is that the specification does not describe very well the meaning of the new ‘crisp-edges’ value. Tab Atkins, one of the spec’s authors, has agreed to elaborate and add some figures to demonstrate what is intended. I found the Wikipedia section Pixel art scaling algorithms to be particularly enlightening on the subject.

    The second issue is that some browsers and Inkscape use the now deprecated ‘optimizeSpeed’ value to indicate that the nearest neighbor algorithm should be used for scaling. This is important when scaling line art. I asked, and Tab agreed, that ‘optimizeSpeed’ value should correspond to the new ‘pixelated’ value to not break existing content (and not ‘auto’ as is currently in the spec).

  • Connectors

    I’ve been working on a connectors proposal for SVG. There is renewed interest as being able to show relationships between elements would greatly aid accessibility. We even had a brief meeting with the HTML working group where it was suggested that connectors (possibly without visual links) may be of interest to aid accessibility of HTML. One problem I’ve had is how to reference ports inside a <symbol> element. I asked the CSS group for suggestions (this is obviously not a styling issue but the CSS group members are experts at syntax). Tab Atkins suggested: url(#AndGate1) Out, Mid1, Mid2, url(#AndGate2) InA, where, for example, Out is the point defined inside the symbol with the ‘id’ AndGate1.

Day 2

Minutes

The SVG working group met for an entire day covering a real hodge-podge of topics, some not well minuted. Here are a few highlights:

  • NVidia presentation.

    NVidia gave a quite impressive demonstration of their OpenGL extensions for rendering 2D vectors (think SVG), showing an entire HTML web page from the New York Times being rotated and scaled in real time on their Tegra-based Shield tablet with all the text rendered as vectors (they can render 700,000 paths per second). They are trying to get other vendors interested in the extensions but it doesn’t seem to be a high priority for them.

  • CTM Calculations

    For mapping applications, a precision of greater than single precision is necessary for calculating the Current Transformation Matrix (CTM) due to rounding errors. It was proposed and accepted that SVG dictate that such calculations be done as double precision (as Inkscape already does). (Note: single precision is sufficient for actual rendering.)

  • Going to Last Call Working Draft

    We discussed when we’ll get SVG 2 out the door. It is a very large specification with various parts in various stages of readiness. We decided to target the February face-to-face meeting in Sydney as the date we move to the next stage in the specification process… where no new features can be added and incomplete ones removed.

  • HTML in SVG

    There has been a desire by some for quite a while to allow HTML directly inside SVG (not wrapped by a <foreignObject> tag). I personally am quite hesitant to see this happen. SVG is at the moment a nice stand-alone graphics specification that doesn’t necessarily have to be rendered in a Web browser. Incorporating HTML would threaten this.

  • SVG in HTML

    This is the opposite of the previous topic, allowing SVG to be directly embedded in HTML without using a name space.

  • Non-scaling Patterns

    Just as it is often useful to have non-scaling stroke widths (especially for technical drawings), it would also be useful to have non-scaling patterns and hatches. We agreed that this should be added to the specification.

  • Minimum Stroke Width

    It would be useful to have a minimum stroke-width so that certain strokes do not disappear when a drawing is scaled down. It was claimed that this will be handled by vector-effect but I don’t see how.

  • SVG in Industry

    It was mentioned that Boeing is moving all their 787 docs to SVG so they can be viewed in browsers.

Unfortunately, we ran out of time before we could cover some of my other topics: stroke-miterlimit, text on a shape, and auto-path closing.

Fanart by Anastasia Majzhegisheva – 17

2014-11-20-proc

Morevna artwork by Anastasia Majzhegisheva.

January 07, 2015

Guy-In-Black: First sketch

2015-01-06-mib

Guy-In-Black is another new character, who will appear in the new episode. Artwork by Nikolai Mamashev.

January 06, 2015

Project Activity in Bug Reports

Sven Langkamp recently mentioned that Krita had crept up to second place in the list of projects with the most new bugs opened in bugzilla in a year. So I decided to play around a little, while Krita is building.

Bugzilla has a nice little report that can show the top X projects with open bugs for certain periods. Krita is never in the default top 20, because other KDE projects always have more open bugs. But let's take the top 100 KDE projects with open bugs, sort the data a bit, and then make top 10 lists from the other columns.

Note, there might be projects where more bugs were opened and closed in the past year, but I cannot get that information without going into SQL directly. But I think most active KDE projects are in the top 100.
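
For the curious, the "sort the data a bit" step is nothing fancy; something like the snippet below, assuming the top-100 report has been exported by hand to a CSV with columns named project, new, closed and open (those column names are mine, not bugzilla's).

# Sketch of the sorting step; assumes a hand-exported CSV of the top-100 report.
import csv

def top10(filename, column):
    with open(filename) as f:
        rows = list(csv.DictReader(f))
    rows.sort(key=lambda row: int(row[column]), reverse=True)
    for row in rows[:10]:
        print "%s: %s" % (row['project'], row[column])

top10('kde-top100.csv', 'new')      # most new bugs in the past year
top10('kde-top100.csv', 'closed')   # most bugs closed
top10('kde-top100.csv', 'open')     # most open bugs overall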

New bugs created. This is a pretty fair indication of userbase, actually. A project that has a lot of users will get a lot of bug reports. Some might quibble that there's a component of code quality involved, but honestly, pretty much all code is pretty much equal. If you just use an application, you'll mostly be fine, and if you start hacking on it, you'll be horrified. That's normal, it holds for all software.

  • plasmashell: 1012
  • krita: 748
  • plasma: 674
  • kwin: 482
  • digikam: 460
  • kmail2: 388
  • valgrind: 274
  • Akonadi: 270
  • kate: 267
  • kdevelop: 258

I have to admit to being a bit fuzzy about the difference between plasma and plasmashell. It looks like our own developers know how to find bugzilla without trouble, given that there are two, three developer-oriented projects in the top-ten. Of course, valgrind is also widely used outside the KDE community.

Now for bugs closed. This might say something about project activity, either development or triaging. It's a good statistic to be in the top-ten in!

  • plasmashell: -917
  • krita: -637
  • digikam: -615
  • plasma: -479
  • kwin: -391
  • okular: -346
  • dolphin: -263
  • amarok: -255
  • valgrind: -254
  • kate: -249

Not a hugely different list, but it's interesting to see that there are several projects that are in the top-ten for closing bugs, that aren't in the top-ten for receiving new bugs. Maybe that is an indication of code quality? Or maybe better bug triagers? If a project is in the first list, but not in the second list, it might be taken to mean that it's got users, but that development is lagging.

Open bugs. A project can go a long time and collect a huge amount of bugs over that period without having much activity. For instance, in this list, KMail has 880 bugs, but there were zero new bugs in 2014 and only seven bugs closed. I'd say that it's time to remove kmail from bugzilla entirely, or mark all remaining kmail bugs as "unmaintained". The same goes, I guess, for the kio component: 550 open bugs, 1 new, 1 closed in a year.

  • plasma: 1449
  • konqueror: 1432
  • kmail2: 1107
  • kopete: 942
  • kdelibs: 921
  • kmail: 880
  • Akonadi: 650
  • valgrind: 580
  • kio: 550
  • systemsettings: 495
  • kontact: 479

Krita has 237 open bugs, by the way, but since we're working the 2.9 release, that number fluctuates quite a bit.

Conclusions? Well, perhaps none. If bugs are any indication of a project's user base and activity, it's clear that KDE's desktop (plasma, kwin) has the biggest userbase, followed by Krita and Digikam. Maybe that comes as a surprise -- I know I was surprised when Sven noted it.

And there's one more twist -- everyone who uses the Plasma shell or kwin can easily report crashes to bugzilla, because they're on Linux. Most Krita (and I guess Digikam) users are actually not on Linux. Krita's Windows crashes right now still get reported to a server hosted by KO, which is something I need to work on to change...

January 05, 2015

GNOME MultiWriter and Large Hubs

Today I released the first version of GNOME MultiWriter, and built it for Rawhide and F21. It’s good enough for a first release, although there are still a few things left to do. The most important now is probably the self-profiling mode so that we know the best number of parallel threads to use for the read and the write. I want this to Just Work without any user interaction, but I’ll have to wait for my shipment of USB drives to arrive before I can add that functionality.

Also important to the UX is how we display failed devices. Most new USB devices accept the ISO image without a fuss, but the odd device will disconnect before completion or throw a write error. In this case it’s important to know which device is the one that belongs in the rubbish bin. This is harder than you think, as the electrical port number is not always what matches the decal on the plastic box.

For my test system I purchased a 10-port USB hub. I was interested to know how the vendor implemented this, as I don’t know of a SOIC chip that can drive more than 7 ports. It turns out, my 10-port hub is actually a 4-port hub, with a 7-port hub attached to the last port of the first hub. The second hub was also wired 1,2,3,4,7,6,5 rather than 1,2,3,4,5,6,7. This could cause my dad some issues when we tell him that device #5 needs removing.

I’ve merged some code into GNOME MultiWriter to work around this, but if you’ve got a hub with >7 ports I’d be interested to know if this works for you, or if we need to add some more VID/PID matches. If you do try this out you need libgusb from git master today. Helpfully gnome-multi-writer outputs quirk info to the command line if you use --verbose, so that makes debugging this stuff easier.
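
For a feel of what such a quirk looks like, here is a tiny sketch of a per-hub remapping table. The VID:PID and the port-to-decal mapping below are invented for illustration; they are not the real quirks shipped in GNOME MultiWriter or libgusb.

# Invented example of a per-hub quirk table -- not the real MultiWriter quirks.
QUIRKS = {
    # (vendor_id, product_id): {electrical port number: number on the decal}
    (0x1234, 0x5678): {1: 1, 2: 2, 3: 3, 4: 4, 5: 7, 6: 6, 7: 5},
}

def decal_for_port(vid, pid, electrical_port):
    """Return the number printed on the plastic box for an electrical port,
    falling back to the raw port number for hubs without a quirk."""
    return QUIRKS.get((vid, pid), {}).get(electrical_port, electrical_port)

print decal_for_port(0x1234, 0x5678, 7)   # prints 5 for the quirked hub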

January 04, 2015

Mechanic-Sister Concept

Concept of the youngest sister of Ivan (our main character). We haven’t defined her name yet – just calling her “Mechanic”, because this is who she is. Artwork by Anastasia Majzhegisheva.

2014-12-22-mechanic-2

2014-12-22-mechanic

A modular manual to guide new krita users

First post of the year, so I can start wishing you all an awesome and happy new year!

As part of my work with Activ Design, I need to prepare some new training material to teach digital painting with Krita. Part of this material will be some kind of software manual. As we don’t have an up-to-date manual and the booksprint project we started to discuss is on stand-by for now, I started to write a modular manual to guide new Krita users. I call it modular because I want to split each aspect of the software into its own chapter, in separate pdf files (for convenience I’ll also make a version with all chapters in one file at the end). I’m writing both English and French versions: English as it’s the best base for everyone to read and translate, and French because our current students at Activ Design are French ;)

To follow the good old rule “release early, release often”, the first files are already available. This is just the beginning, I’ll add the next chapters in the coming days. Next topics will be the layer stack, the brush editor, and one chapter for each group of tools.

English files:

French files:

If you want to add some translations, the sources are in a git repository: https://gitorious.org/krita-guide

 

A little side note on another topic: if you’ve followed our fundraising campaign for GCompris, you may have noticed we extended the deadline to February 1st. We really need more budget to be able to complete new illustrations for all the different activities, which is needed to reach a unified look. Please support this project if you can!

January 03, 2015

Launching the new production!

2015-01-03-4-new-year-v2

Hello, everyone! I am happy to announce that we have started preparations for production of a new episode of the “Morevna” series!

We will take a part of the screenplay and produce an animated short with a storyline and dialogue (the approximate planned duration is 7 minutes). As usual, the production will be done entirely with open-source software – the main tools are Synfig, Blender, and Krita.

Like in the previous production two years ago, we have Nikolai Mamashev on the core team. Anastasia Majzhegisheva will also join us (you are probably already familiar with her work). On the development side we can expect support from Ivan Mahonin, who is well known for his work on Synfig.

In the next few weeks we plan to push major updates to the website, and more production details will be announced soon.


Artwork by: Anastasia Majzhegisheva.


January 02, 2015

Introducing GNOME MultiWriter

I spent last night writing a GNOME application to duplicate a ton of USB devices. I looked at mdcp, Clonezilla and also just writing something loopy in bash, but I need something simple my dad could use for a couple of hours a week without help.

It’s going to be super useful for me when I start sending our LiveUSB disks in the ColorHug box (rather than LiveCDs), and possibly useful to other people who just want to copy a USB drive for QA testing with a small group of people, an XFCE live CD of Fedora Rawhide for a code sprint, and that kind of thing.

GNOME MultiWriter allows you to write and verify an ISO file to up to 20 USB devices at once.
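
Conceptually that is a write pass followed by a read-back pass for each device, with the devices handled in parallel. Here is a rough, self-contained C++ sketch of that concept only; it is not GNOME MultiWriter's actual code, and the ISO name and device paths are placeholders:

#include <algorithm>
#include <cstddef>
#include <fstream>
#include <iostream>
#include <string>
#include <thread>
#include <vector>

// Write the ISO to one device, then read it back and compare.
// A real tool would bypass the page cache (O_DIRECT, or dropping caches)
// so the verify pass actually hits the stick rather than RAM.
static bool write_and_verify(const std::string &iso, const std::string &dev) {
  const std::size_t chunk = 1 << 20;  // copy in 1 MiB blocks
  std::vector<char> buf(chunk), check(chunk);
  {  // write pass
    std::ifstream in(iso, std::ios::binary);
    std::ofstream out(dev, std::ios::binary);
    if (!in || !out) return false;
    while (in) {
      in.read(buf.data(), chunk);
      out.write(buf.data(), in.gcount());
    }
  }
  {  // verify pass
    std::ifstream in(iso, std::ios::binary);
    std::ifstream back(dev, std::ios::binary);
    if (!in || !back) return false;
    while (in) {
      in.read(buf.data(), chunk);
      back.read(check.data(), in.gcount());
      if (!std::equal(buf.begin(), buf.begin() + in.gcount(), check.begin()))
        return false;
    }
  }
  return true;
}

int main() {
  const std::string iso = "image.iso";         // placeholder
  const std::vector<std::string> devices = {   // placeholders
    "/dev/sdb", "/dev/sdc", "/dev/sdd"
  };
  std::vector<std::thread> workers;
  for (const auto &dev : devices)
    workers.emplace_back([&iso, dev] {
      std::cout << dev << (write_and_verify(iso, dev) ? ": OK" : ": FAILED")
                << std::endl;
    });
  for (auto &t : workers) t.join();
  return 0;
}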

Screenshot from 2015-01-02 16:24:35

Bugs (and especially pull requests) accepted on GitHub; if there’s sufficient interest I’ll move the project to git.gnome.org after a few releases.

December 31, 2014

18 anticipated Blender development projects of 2015

Here’s my randomly ordered list of 18 projects which are expected to make it into Blender next year. Nearly all of these have already started – but some are still in the design and ideas phase. Most of the projects are being worked on with support from the Blender Foundation Development Fund or by Blender Institute – thanks to Blender Cloud subscribers and open movie supporters.

 

It’s an impressive list; I can’t wait for all of this to happen in 2015! Please keep in mind that – as with any end-of-year listing – it’s a subjective and personal overview. I’m sure there are developers out there who have great surprises up their sleeves for us.

 

On behalf of everyone at blender.org I wish you a happy new year!
Ton Roosendaal – Amsterdam, 31-12-2014

 

Dependency Graph

There’s no doubt about the importance of this project, yet it’s a difficult one to present or promote.
Just think of it like this: the “Depsgraph” is the core Animation Engine of Blender, the engine that ensures all the updates work reliably, threaded, linkable, massively simulated, or in whatever other ways we predict artists will (ab)use Blender animation for the rest of the decade – without running into too much trouble!

Working on it: Sergey Sharybin, Joshua Leung
Likely to happen in 2015: 100%

Multi View

The real hype for stereo (multi-view) film is over now. But there’s no doubt that stereo-3D film has proven to be of sufficient added value to stay around for a while.

Getting this feature to work meant working on the viewport, UI drawing, imaging systems, movie I/O, render engines, compositor and sequence editor. A huge job that has now been completed and will get merged in 2.74.

Working on it: Dalai Felinto
Likely to happen in 2015: 100%

 

OpenSubdiv

OpenSubdiv is a library that enables GPU generating and drawing of Subdivision Surfaces. The Blender branch with OpenSubdiv was ready for release last summer – but for performance reasons we decided to wait for the new OpenSubdiv release by Pixar. The release was set for Q4 of 2014, but will now likely be Q1 or Q2 2015.

Working on it: Sergey Sharybin
Likely to happen in 2015: 90%

Alembic

We currently use many different cache file formats for animation or physics in Blender. The Alembic library (by ILM/SPI) has been designed to replace all of that with a single compact compression format. And even more! It’s the most popular format in film and animation pipelines currently (hey Game Industry, wake up and support something similar!).

Working on it: Lukas Toenne
Likely to happen in 2015: 90% (for Blender caching)

Custom manipulators

Blender suffers from an old disease – Button Panelitis… which is a contagious plague all the bigger 3D software suites suffer from. Attempts to menufy, tabbify, iconify, and shortcuttify this have only helped to keep the disease within (nearly) acceptable limits. But there’s a cure too!

The real challenge is to radically rethink adding buttons/panels and to bring the UI back to the viewports – to the regions of the UI where you actually do your work.
In Blender we are now testing a (Python-controlled) generic viewport widget system, which will first be tested and used for character rigs.

Working on it: Antony Ryakiotakis
Likely to happen in 2015: Still prototyping/designing

Workflow

As we all know, Blender’s screen real estate has too many buttons, everything gets too many options, and we can’t find keys for shortcuts anymore.

It’s time to admit that Blender has grown too large for a single configuration, that people are just too different, and that a good default just depends on a reference situation!

The solution is to go back to good old design efforts – creativity comes out of well defined restrictions, not out of unlimited choices. Exit: “default”, entrance: “Workflow”.

Working on it: Jonathan Williamson & the UI team, also related to Blender 101 code work.
Likely to happen in 2015: At least prototyping/designing

Blender 101

The “Blender 2.5 project” took off in 2009. It was a massive success for Blender, giving us serious attention and involvement from the media industry.

There are still unfinished code jobs for 2.5 – mostly related to enabling advanced configuration of editors and the whole UI (layouts, keymaps, etc). In Python!

As a proof-of-concept for completing this target, we’d like to see a “Blender 101″ prototype released, which is a fully release-compatible Blender version configured for teaching 3D to high school students.

Working on it: Campbell Barton, Bastien Montagne
Prototype likely to happen in 2015: 90%

Mesh editing Tools

Innovations in Mesh editing keep happening in many ways – for Blender this has even, surprisingly, taken off via the Python API and released add-ons.

When it comes to efficient modeling, for example for painting tools, sculpting, retopo and UV unwrapping – we still have enough work to do in Blender. Campbell Barton – the lead coder of the Mesh module – will reserve many months next year to work on Mesh tools.

Working on it: Campbell Barton
Likely to happen in 2015: 100% surprise

 

Viewport

Blender has been using OpenGL since the beginning, almost 20 years ago. The viewport – as we still have it today – was designed with the OpenGL 1.0 spec in mind. It’s really time to upgrade that!

This work is for the larger part very technical – adding APIs, redesigning core functions for drawing. For that reason it took a while to see the real benefits.

Feasible for 2015 is to get a shader-based viewport that can be configured to align with the Workflow project – to ensure that Sculptors, Modelers, Animators, Compositors, Game designers, etc. all have their own preferred and useful view of the 3D world.
Shaders can be manually edited GLSL scripts, but we should be making a decent Node editor for them as well.

Working on it: Jason Wilkins and Antony Ryakiotakis
Likely to happen in 2015: 95%

Hair and particles

The Gooseberry open movie project requires sophisticated control of hair and fur on animated characters. We’re still working on fixing and updating the old system, replacing parts of it with newer code that offers more control.

New is having much better control over physics simulation of hair, including proper collisions using Bullet. There will be a better hair edit mode and hair keyframing as well.

Ideally this was all meant to become a node system. For that, the outcome is still uncertain at this moment.

Working on it: Lukas Toenne
Likely to happen in 2015: 100% for working sims, 50% for nodes

 

Everything Nodes

So while we’re on Particle nodes anyway – we should check on having Modifier-Nodes, Constraint-Nodes, Transform-nodes, Generative modeling Nodes, and so on.

And – why not then try to unify it into one system? Here a lot of experimenting and design work is needed still. It’s surprising and inspiring to see how well Python scripters manage to do this (partially) already.

Working on it: Lukas Toenne, Campbell Barton, …
Likely to happen in 2015: Unknown.

Asset browsing

Blender already allows you to make complicated shots with a lot of data linked in from other .blend files. It’s still quite abstract and basic, though.

What we need is to expose this functionality much better to the user, including adding tools and new paradigms to manage this well.

An editor is going to be made to manage all of this linking, re-using, appending, allowing revisions or ‘level of detail’ versions and to manage preset assets and asset libraries in general.

Working on it: Bastien Montagne
Likely to happen in 2015: 98%

 

Blender Asset Manager + Cloud

With an asset browser working on the Blender side, we also need to have something working on the server side – either on an intranet or via the internet.

For this we have the Blender Cloud website running now – which is meant to be the Gooseberry Open Movie’s production platform as well – making it a true open production system.

Work is currently being completed on efficient checkouts of parts of a film SVN repository (a shot for the render farm, or a job for an animator). Logging progress and reviewing are short-term targets as well.

Working on it: Francesco Siddi, Campbell Barton, …
Likely to happen in 2015: 100% (we need it ourselves)

PTex support

PTex is a Disney library that supports image textures on meshes without the need for unwrapping in a UV space first. It’s a massive workflow optimizer, but it has yet to be verified that we can make it work for us.

Work will be done on editing (painting, baking) and rendering (in Cycles).

Working on it: Nicholas Bishop
Likely to happen in 2015: 90%

 

Compositor Canvas

Blender’s compositor recode project is still in progress – work is being done on better memory consumption and on replacing the last bad (non-tiled, non-threaded) nodes – which will make composites much more responsive.

The biggest project here would be to give the compositor “Canvas awareness” – its own 2D space into which inputs and outputs get flexibly mapped.

Working on it: Nobody has it assigned yet
Likely to happen in 2015: 50%

Cycles speedup

The Cycles render engine now has more or less the features we want (although baked rendering and a ‘shadow catcher’ material are still high on the list).

To survive as a real production render system we have to find ways to get render times under control. Work can still be done on optimizing BVH, more efficient sampling, coherence, noise reduction.

We don’t expect miracles here though – in the end it comes down to the artist – and giving artists sufficient tools to construct an efficient pipeline for fast renders of shots.

Working on it: Sergey Sharybin, Thomas Dinges
Likely to happen in 2015: 100%

 

Game Engine – revisited

We haven’t forgotten about BGE users and game modelers! They can expect a lot from the Viewport project – it should allow modeling and texturing using advanced shading and lighting models, as is currently common in Unreal and similar engines.

If that works, and our animation system is smooth and fast, and physics unified, then we only have one main target to upgrade: A decent logic editor and a way to have logic and playback work smoothly inside the animation system. (Check my proposal 18 months ago about the future of BGE).

Working on it: Not assigned yet
Likely to happen in 2015: 50%

Motion tracking

A couple of important updates are being scheduled for (video) motion tracking. This is especially to get decent automatic camera solving working – just one button press to get quick solves!

Working on it: Keir Mierle and Sergey Sharybin
Likely to happen in 2015: 90%

December 29, 2014

The International Obfuscated Clojure Coding Contest

While some people may argue that Clojure is already obfuscated by default, I think it is about time to organise an official International Obfuscated Clojure Coding Contest similar to the IOCCC. This idea was born out of my own attempts to fit my Clojure experiments in one single tweet, that is 140 characters.

Winning IOCCC entry: flight simulator

The plan

First, get some feedback from the Clojure community on this idea. You are invited to share your thoughts as comments to this blogpost. I will also tweet about this idea. If that particular tweet gets 100+ retweets, I will go ahead with the next step in the plan, which is establishing the rules for this contest.

These rules are also open to discussion. At the moment I’m considering for example a category ‘Code fits in a single tweet’ and another one like ‘Code size is limited to 1024 characters’.

After these preliminary steps I will set up a website, find a jury to judge the submissions and will continue from there.

Inspiration

Since Clojure is such a powerful language, there are also plenty of opportunities to make the code more challenging to read. Mike Anderson already created a GitHub project called clojure-golf with some tricks.

You are also invited to violate the first rule of the macro club: Don’t write macros. And obviously the second rule (write macros if that is the only way to encapsulate a pattern) should be ignored as well.

Also, extending datatypes in unexpected ways is always a good idea. See for example my answer to this StackOverflow question about ‘an idiomatic clojure way to repeat a string n times‘.

Generating code on the fly is of course a breeze in a ‘code-as-data’ language like Clojure.

Finally

So if you think you can create a fully functional chess engine in 1024 characters, a Java interpreter in a single tweet, or have managed to make the Clojure REPL self-conscious with your obfuscated code, leave a comment. Also, if you have suggestions for rules, want to help with setting up a website, want to be a judge or want to help in another way, I would love to hear from you.

And most importantly: have fun!


December 23, 2014

GCompris needs your help!

At the beginning of this month, we launched a crowdfunding campaign to support my project to work on the artwork and graphics redesign for GCompris. The goal of this project is to provide better, unified graphics for all the different activities inside GCompris. It is quite a big job, so I really need some financial support to work on it fast enough.

The fundraiser started very well, but then we had fewer contributions over the following days. We need to make more noise and reach more people, so please donate and keep spreading the word to your friends and family. I also want to thank all who have already donated and/or left some nice comments, as well as Mageia.org for their kind support; see their interesting blog post about it.

I count on your support, and hope the spirit of Christmas will help us.
And by the way, I wish you all some happy holidays!

GCompris crowdfunding

 

Descending into the bowels of Inkscape code

Introduction

This post is more geared to Inkscape developers than Inkscape users. I hope that by recording my trials and tribulations here it can help others in their coding efforts.

I have been working on adding support for ‘context-fill’ and ‘context-stroke’ to Inkscape. These magical ‘fill’ and ‘stroke’ property values will allow Inkscape to finally match marker fill (e.g. arrowhead color) to path stroke (e.g. arrow tail) in a simple way. These new property values are part of SVG 2.

Adding this support is a multi-step process. First one must be able to read the new values, and then one must be able to apply them. The former part is rather straightforward. It simply required modifying the SPIPaint class by adding a new ‘enum’ SPPaintOrigin with entries that keep track of where the paint originates. The previous ‘currentColor’ boolean has been incorporated into this new ‘enum’. The latter part proved to be much more of a challenge.
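
As a rough sketch of what that enum might look like (the exact identifiers in Inkscape's style code may differ; this is only to illustrate the idea of recording where the paint comes from):

// Hypothetical sketch, not the exact Inkscape declaration: instead of a
// plain 'currentColor' boolean, SPIPaint records the origin of the paint,
// so 'context-fill' and 'context-stroke' can be resolved later on.
enum SPPaintOrigin {
  SP_CSS_PAINT_ORIGIN_NORMAL,          // an explicit color, gradient, etc.
  SP_CSS_PAINT_ORIGIN_CURRENT_COLOR,   // the old 'currentColor' boolean
  SP_CSS_PAINT_ORIGIN_CONTEXT_FILL,    // take the referencing element's fill
  SP_CSS_PAINT_ORIGIN_CONTEXT_STROKE   // take the referencing element's stroke
};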

Where does the ‘apply’ code go?

The ‘context-fill’ and ‘context-stroke’ values are applicable to things that are cloned. The driving force for these values is certainly the <marker> element but <symbol> and <pattern> elements could also find the values useful as could anything cloned by the <use> element. For the moment, I concentrated on implementing the values in markers and things cloned by the <use> element.

The first question that comes to mind is: Where in the code is styling applied? This turns out not to be the best starting question for cloned objects. A better question is: How does the cloning take place? To answer this question I implemented three routines with the same name, recursivePrintTree(), but as member functions of different classes: SimpleNode, SPObject, and DisplayItem. These represent different stages in Inkscape’s processing of an SVG document. Here are the findings:

XML Tree (SimpleNode)
The XML Tree matches the structure of the XML file with some extra elements possibly added by Inkscape such as <metadata> and <sodipodi:namedview>. This is the tree that is shown in the XML editor dialog. No cloning is evident.
Object Tree (SPObject)
The object tree is similar to the XML tree. Its main purpose is to handle the CSS styling, which involves merging the various sources of styling (external style sheets (rect {fill:red;}), styling properties (style=”fill:red”), and presentation attributes (fill=”red”)), as well as handling all the necessary cascading of styles from parent to child. A few non-rendering elements are missing, such as <rdf:RDF>, and some new elements appear. Here is our first clue: the object tree includes the unnamed clones of elements created by the <use> element. It makes sense that they appear here: cloned objects descending from a <use> element participate in the style cascading process. Marker clones, however, are nowhere to be seen.
Display Tree (DisplayItem)
The display (or rendering) tree includes only elements that will be rendered. All the styling has been worked out; all the placement and sizing of graphical items has been calculated. The metadata, the Inkscape editing data, and the <defs> section are all gone. But now clones of the markers appear, one clone for each node where a marker is placed, each containing the geometric data of position, scale, and orientation. This is a quite reasonable way to handle markers as each marker clone is identical to its siblings (at least until ‘context-fill’ and ‘context-stroke’ came along).

The above “tree” analysis gives the structure of each tree after it has been built, but doesn’t explain how each tree is created. This has ramifications for how one can handle the ‘context-fill’ and ‘context-stroke’ values.

Creating the XML tree

There are a couple of ways to create the XML tree. The most common is via sp_repr_read_file(). This is used in SPDocument::createNewDoc() as well as routines to read in filters, templates, etc. Files opened by Inkscape from the command line use XSLT::open() which allows XSLT stylesheets to be applied to the SVG file (something I didn’t know we supported). Both functions call sp_repr_do_read() to read the actual files and both end up with an Inkscape::XML::Document.

Creating the object tree

Once the XML tree is created, it is passed to SPDocument::createDoc(), which builds the object tree top down via the recursive calling of SPObject::invoke_build(), starting with the document root <svg> element. SPObject::invoke_build() calls the virtual function SPObject::build(), which handles element-specific things such as reading in attributes. Attributes are read by calling SPObject::readAttr(), which performs a few checks before calling the virtual SPObject::set() function. The set() function for SPUse reads in ‘href’, the reference to the object to be cloned by the <use> element. Reading in the reference inserts a copy of the referenced object (the clone) into the object tree via SPUse::href_changed().

Markers are not cloned; only references to the marker elements are added at this step, in the SPShape::build() function.

Creating the display tree

The display tree is created by recursively calling SPItem::invoke_show() on the root object (SPItem, derived from SPObject, is for visible objects). This is done in SPDesktop::setDocument(). SPItem::invoke_show() immediately calls the virtual function SPItem::show(), which handles element-specific things. (SPRoot::show() calls SPGroup::show(), which calls SPGroup::_showChildren().) SPItem::show() creates an instance of an appropriate display class object that is derived from Inkscape::DrawingItem(). The virtual function DrawingItem::setStyle() is called to set the style information. One thing to note is that a child is constructed (with styling) before it is added to the tree, so a child’s style cannot be directly dependent on an ancestor in the tree. But ‘context-fill’ and ‘context-stroke’ need ancestor style information, so we need to supply that in a different way.

Markers are tracked by a map of vectors of pointers to Inkscape::DrawingItem instances. This map is stored in SPMarker. The map is indexed by a key for each type of marker (start, mid, end) on a particular path. The vector has an entry for each position along the path a marker is to be placed. The DrawingItem instances are created in a two step process from SPShape::show(). First a call to sp_marker_show_dimensions() ensures that the vector is of the correct size. Then a call to sp_shape_update_marker_view() calls sp_marker_show_instance() which creates a new Inkscape::DrawingItem instance if it doesn’t already exist.

Setting the style of a clone

As mentioned above, style is handled in the virtual function DrawingItem()::setStyle(). The DrawingShape and DrawingText setStyle() functions call NRStyle.set(), where the work of translating the style information from the object tree form to the display tree form takes place (in the display tree, the styling information is in a form used by the Cairo rendering library). To handle ‘context-fill’ and ‘context-stroke’ we would like to walk up the display tree to find the style of the element that references the clone, i.e. the <use> element or the <path> or <shape> elements in the case of markers. But at the point the setStyle() function is called on an element in the clone, the clone has not yet been inserted into the display tree, so one cannot walk up the display tree. What we can do is hand two sets of styling information from the object tree to the setStyle() function, the first being the styling information for the object at hand and the second being the styling information that should be used in the case of ‘context-fill’ and ‘context-stroke’. We know the latter for the <use> element as its clone is in the object tree. All that is required is setting a pointer to the context style information in the SPUse class and then passing it down to its children. This solution doesn’t work for markers as their clones are not in the object tree. The solution for markers is calling a function that walks down the tree setting the correct styling information after the cloned marker elements are added to the display tree.
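
To make the idea concrete, here is a small self-contained toy model (deliberately not Inkscape's real classes or function signatures) of what "hand two sets of styling information" means in practice: the clone's own style plus the style of whatever references it.

#include <iostream>
#include <string>

// Toy model only. 'origin' plays the role of the SPPaintOrigin idea above.
enum PaintOrigin { ORIGIN_NORMAL, ORIGIN_CONTEXT_FILL, ORIGIN_CONTEXT_STROKE };

struct Paint {
  PaintOrigin origin = ORIGIN_NORMAL;
  std::string color = "none";  // the resolved value when ORIGIN_NORMAL
};

struct Style {
  Paint fill, stroke;
};

// The clone's own style cannot answer 'context-fill' by itself; the style of
// the referencing element (<use>, or the path carrying the marker) is passed
// in as a second argument, mirroring the two-sets-of-styling idea above.
static std::string resolve_fill(const Style &own, const Style *context) {
  switch (own.fill.origin) {
    case ORIGIN_CONTEXT_FILL:   return context ? context->fill.color   : "none";
    case ORIGIN_CONTEXT_STROKE: return context ? context->stroke.color : "none";
    default:                    return own.fill.color;
  }
}

int main() {
  Style arrow_tail;                  // the path that carries the marker
  arrow_tail.stroke.color = "blue";

  Style marker_style;                // the marker contents say 'context-stroke'
  marker_style.fill.origin = ORIGIN_CONTEXT_STROKE;

  // The arrowhead fill now follows the tail's stroke color.
  std::cout << resolve_fill(marker_style, &arrow_tail) << std::endl;  // "blue"
  return 0;
}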

Future work

There are still quite a few things to be done. In particular, before we can switch to using ‘context-fill’ and ‘context-stroke’, as well as the ‘orient’ attribute value ‘auto-start-reverse’ (which allows us to handle both start and end arrows with just one marker) we’ll need a fallback solution for SVG 1.1 renderers. Here is a list of things to do:

  • Handle gradients and patterns.
  • Handle text (appendText()).
  • Export to SVG 1.1.
  • Redo markers.svg (source of markers in Inkscape).

Reflections

All the code is now in trunk. To implement ‘context-fill’ and ‘context-stroke’, I added about 200 lines of code. I don’t know the exact amount of time it took, but I would guess a minimum of 20 hours. That works out to about 10 lines per hour… not very productive. The first part, reading in ‘context-stroke’ and ‘context-fill’, took about an hour. I am quite familiar with this part of the code, having C++ified the SPStyle and related classes and having implemented other new properties and property values. It was the second part that took so long. Although I have worked with parts of this code before, it was often quite opaque as to which code is doing what (and why). I ended up going down many false paths. There is a serious lack of comments and often confusing variable and function names (what the heck is an ‘arenaitem’?). A few comments in the appropriate places could have shaved off up to 90% of the time it took. In this case, the different ways markers and clones are handled required solving the same problem twice.

When you begin a project, it is very hard to estimate the time it will take. I would have thought this project could be done in less than 5 hours, possibly in just a couple. It didn’t turn out that way. On the other hand, implementing the new CSS property ‘paint-order’, which I thought would take considerable time, took only a couple of hours.

December 22, 2014

Passwordless ssh with a key: the part most tutorials skip

I'm working on my Raspberry Pi crittercam again. I got a battery, so it can be a standalone box -- it was such a hassle to set it up with two power cords dangling from it at all times -- and set it up to run automatically at boot time.

But there was one aspect of the camera that wasn't automated: if it's close enough to the house to see the wi-fi router, I want it to mount a filesystem from our server and store its image files there. That makes it a lot easier to check on its progress, and also saves wear on the Pi's SD card.

Only one problem: I was using sshfs to mount the disk remotely, and ssh always prompts me for a password.

Now, there are a gazillion tutorials on how to set up an ssh key. Just do a web search for ssh key or passwordless ssh key. They vary a bit in their details, but they're all the same in the important aspects. They're all the same in one other detail: none of them work for me. I generate a new key (various types) with no pass phrase, I copy it to the server's authorized keys file (several different ways, two possible filenames), I try to ssh -- and I'm prompted for a password.

After much flailing I finally found out what was missing. In addition to those two steps, you need to modify your .ssh/config file to tell it which key to use. This is especially critical if you have multiple keys on the client machine, or if you've named the file anything but the default id_dsa or id_rsa.

So here are the real steps for making an ssh key. Assume the server, the machine to which you want to ssh, is named "myserver". But these steps are all run on the client machine, the one from which you want to run ssh.

ssh-keygen -t rsa -C "Comment"
When it prompts you for a filename, give it a full pathname, e.g. ~/.ssh/id_rsa_myserver. Type in a pass phrase, or hit return twice if you want to be able to ssh without a password.
ssh-copy-id -i .ssh/id_rsa_myserver user@myserver
You can omit the user@ if you're using the same username on both machines. You'll have to type in your password on myserver.

Then edit ~/.ssh/config, and add an entry like this:

Host myserver
  User my_username
  IdentityFile ~/.ssh/id_rsa_myserver
The User line is optional, and refers to your username on myserver if it's different from the one on the client. For instance, on the Raspberry Pi, everything has to run as root because most of the hardware and camera libraries can't work any other way. But I want it using my user ID on the server side, not root.

Eliminating strict host key checking

Of course, you can use this to go the other way too, and ssh to your Pi without needing to type a password every time. If you do that, and if you have several Pis, Beaglebones, plug computers or other little Linux gizmos which sometimes share the same IP address, you may run into the annoying whine ssh is prone to:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
The only way to get around this once it happens is by editing ~/.ssh/known_hosts, finding the line corresponding to the pi, and removing it (or just removing the whole file).

You're supposed to be able to turn off this check with StrictHostKeyChecking no, but it doesn't work. Fortunately, there's a trick I discovered several years ago and discussed in Three SSH tips. Here's how the Pi entry ends up looking in my desktop's ~/.ssh/config:

Host pipi
  HostName pi
  User pi
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  IdentityFile ~/.ssh/id_pi

OpenHardware: Ambient Light Sensor

My OpenHardware post about an entropy source got loads of high quality comments, huge thanks to all who chimed in. There look to be a few existing projects producing OpenHardware, and the various comments have links to the better ones. I’ll put this idea back on the shelf for another holiday-hacking session. I’ve still not given up on the SD card interface, although it looks like emulating a storage device might be the easiest and quickest route for any future project.

So, on to the next idea. An OpenHardware USB ambient light sensor. A lot of hardware doesn’t have a way of testing the ambient light level. Webcams don’t count; they use up waaaay too much power and the auto white balance is normally hardcoded in hardware. So I was thinking of producing a very low cost mini-dongle to measure the ambient light level so that lower-spec laptops could be saving tons of power. With smartphones, people are now acutely aware that up to 60% of their battery power goes into just making the screen light up, and I’m sure we could be smarter about what we do in GNOME. The problem, traditionally, has been the lack of hardware with this support.

Anyone interested?

December 20, 2014

Switching between Stable, Nearly Stable and Unstable Krita -- a guide for artists.

The first thing to do is to open the Cat's Guide To Building Krita, because that's the base for what I'm going to try to explain here. It's what I use for developing Krita; I usually have every version from 2.4 up ready for testing.

Read more ...

December 19, 2014

Programming with LLDB

Some notes on poking around with LLDB programmatically.

Build

Building on OSX was a bit of a pain - use `./configure` then `make` as suggested here, but to build the debugserver you need to open and build debugserver from tools/debugserver. You'll need to set up code signing like this.

LLDB Text API

Start with  `lldb`

Has a complete API - complex to parse responses programmatically, and no way to tie a response to the command that triggered it.

A ruby example invoking the lldb text api https://gist.github.com/jorj1988/e6c3df64199b6932f39d

LLDB Machine Interface

Start with `lldb-mi --interpreter`

Incomplete API (seemed to be missing a reasonable number of commands?), but easier to tie responses to commands. Still requires parsing.


C++ Interface

Include `include/lldb/API`, link to `liblldb.dylib`

Has a stable API here. Example here.
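
For a rough feel of the C++ API, a minimal sketch might look like this (untested; "a.out" is just a placeholder binary, and you build against liblldb as described above):

#include <cstdio>
#include "lldb/API/LLDB.h"  // umbrella header from include/lldb/API

using namespace lldb;

int main() {
  SBDebugger::Initialize();
  SBDebugger debugger = SBDebugger::Create();
  debugger.SetAsync(false);  // block until process events are handled

  // "a.out" is a placeholder for whatever binary you want to debug.
  SBTarget target = debugger.CreateTarget("a.out");
  if (!target.IsValid()) {
    std::fprintf(stderr, "could not create target\n");
    return 1;
  }

  // Stop in main(), then launch with no arguments or environment.
  target.BreakpointCreateByName("main");
  SBProcess process = target.LaunchSimple(nullptr, nullptr, ".");

  if (process.IsValid() && process.GetState() == eStateStopped) {
    SBFrame frame = process.GetSelectedThread().GetSelectedFrame();
    const char *fn = frame.GetFunctionName();
    std::printf("stopped in: %s\n", fn ? fn : "?");
    process.Continue();  // run to completion
  }

  SBDebugger::Terminate();
  return 0;
}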


Python Interface

Mirrors C++ API. Example of usage here.


Summary

The Python API seems like the most sensible option for interacting with LLDB... But I don't think I want to program a large project in Python - maybe C++?

OpenHardware Random Number Generator

Before I spend another night reading datasheets: would anyone be interested in an OpenHardware random number generator in a full-size SD card format? The idea being you insert the RNG into the SD slot of your laptop, leave it there, and the kernel module just slurps trusted entropy when required.

Why SD? It’s a port that a lot of laptops already have empty, and on server-class hardware you can just install a PCIe addon card. I think I can build such a thing for less than $50, but at the moment I’m just waiting for parts for a prototype, so that’s really just a finger-in-the-air estimate. Are there enough free software people who care about entropy-related stuff?

December 18, 2014

Firefox deprecates flash. How to get it back (on Debian).

Recently Firefox started refusing to run flash, including youtube videos (about the only flash I run). A bar would appear at the top of the page saying "This plug-in is vulnerable and should be upgraded". Apparently Adobe had another security bug. There's an "Update now" button in the Firefox bar, but it's a chimera: Firefox has never known how to install plug-ins for Linux (there are longstanding bugs filed on why it claims to be able to but can't), and it certainly doesn't know how to update a Debian package.

I use a Firefox downloaded from Mozilla.org, but flash from Debian's flashplugin-nonfree package. So I figured updating Debian -- apt-get update; apt-get dist-upgrade -- would fix it. Nope. I still got the same message.

A little googling found several pages recommending update-flashplugin-nonfree --install; I tried that but it didn't help either. It seemed to download a tarball, but as far as I could tell it never unpacked or installed the tarball it downloaded.

What finally did the trick was

apt-get install --reinstall flashplugin-nonfree
That downloaded a new tarball, AND unpacked and installed it. After restarting Firefox, I was able to view the video I'd been trying to watch.

Are you awesome? Would you like to work with me …

Are you awesome? Would you like to work with me? Every day? Silverorange, the web development company at which I enjoy spending most of my days, is considering hiring a designer / front-end developer.

Wikipedia #Edit2014 Video

About two months ago I was approached by Victor Grigas, a video producer for the Wikimedia Foundation (the non-profit that supports Wikipedia), about using some of the techniques I had previously discussed to create 2.5D parallax video images from single photographs. The intention was to use these 2.5D videos as part of their first ever "Year in Review" video:



For reference, this was my previous result using F/OSS to create the 2.5D parallax effect with still images:



For the Wikipedia video, Victor asked if I could use some images from Wiki Loves Monuments (apparently the world's largest photo competition, according to Guinness World Records). How could I say no? (Disclaimer: I donate every year during their funding drives.)

So I agreed, and after a short wait for the finalists from the competition to be chosen, I was sent these two awesome images to turn into 2.5D parallax videos:



After a bit of slicing and dicing, I produced these short segments that ended up in the final video. As before, I did the main plane separations manually in GIMP. I divided the planes to best accommodate the anticipated camera movement through the scene (simple dolly pans). Once I had the planes separated, it was a simple process to bring them into Blender and offset the planes as the camera tracked across the scene:





This was a fun project to work on, and I want to thank the Wikimedia Foundation for giving me a chance to play with some gorgeous images and hopefully to help out in my own small way with the final outcome!

Also, Victor does a nice interview with the Wikimedia blog about producing the overall video. Great work everyone!

Leaving KO

Inge, Tobias and I founded KO GmbH in 2007 in Magdeburg. We named it KOfficeSource, because we believed that KOffice, which is Calligra these days, was getting ready for the big time, especially on mobile. Nokia was beginning to invest heavily in open source, Intel was joining in with Moblin; the times were heady and exciting! After a bit of rough-and-tumble about the name, we renamed KOfficeSource GmbH to KO GmbH, and from 2010 on we were in business!

For a couple of years we had a great time. We ported Calligra to Maemo, Meego, Sailfish and Windows. We created half a dozen mobile versions of the core Calligra applications: viewers, editors. Along the way, we found some other customers, next to Nokia and Intel: NLNet helped with the port to Windows, SKF used Calligra in their Orpheus ball-bearing modeling tool as the report-writing component, ROSA was getting interested in the WebODF technology we had developed together with NLNet.

Our customers were happy: we really delivered amazing technology, applications with a great user experience, were good at working together with other teams and, well, basically, we always delivered. Whether it was C++, Python or Javascript, Qt, QML or HTML5.

Then things began to go awry. Even after dropping Meego, Nokia was still a customer of ours for some time, but we were doing prototype stuff in J2ME for Asha phones. Not really exciting! ROSA went broke. We lost SKF as a customer when they had to reorganize to turn their development process around. Other customers had to cut down -- and we were also basically a bunch of tech nerds with no idea about doing sales: until then we had never had to do sales.

Which meant that we failed to build enough of a business to sustain ourselves. We tried to expand, with Krita being an obvious choice for a mature product. But that still needed sales, and we failed at that, too.

So, from January on, I'll be no longer with KO GmbH. The Krita Foundation has taken over Krita on Steam and the support for the Krita Studio customers. We'll first release Krita 2.9, which is going to be awesome! And then, I'll be looking for work again, as a project lead or developer, freelance or with a company, on Krita or something else altogether.

December 17, 2014

Actually shipping AppStream metadata in the repodata

For the last couple of releases Fedora has been shipping the appstream metadata in a package. First it was the gnome-software package, but this wasn’t an awesome dep for KDE applications like Apper and was a pain to keep updated. We then moved the data to an appstream-data package, but this was just as much of a hack that was slightly more palatable for KDE. What I’ve wanted for a long time is to actually ship the metadata as metadata, i.e. next to the other files like primary.xml.gz on the mirrors.

I’ve just pushed the final patches to libhif, PackageKit and appstream-glib, which means that if you ship metadata of type appstream and appstream-icons in repomd.xml then they get downloaded automatically and decompressed into the right place so that gnome-software and apper can use the data magically.

I had not worked on this much before, as appstream-builder (which actually produces the two AppStream files) wasn’t suitable for the Fedora builders for two reasons:

  • Even just processing the changed packages, it took a lot of CPU, memory, and thus time.
  • Downloading screenshots from random websites all over the internet wasn’t something that a build server can do.

So, createrepo_c and modifyrepo_c to the rescue. This is what I’m currently doing for the Utopia repo.

createrepo_c --no-database x86_64/
createrepo_c --no-database SRPMS/
modifyrepo_c					\
	--no-compress				\
	/tmp/asb-md/appstream.xml.gz		\
	x86_64/repodata/
modifyrepo_c					\
	--no-compress				\
	/tmp/asb-md/appstream-icons.tar.gz	\
	x86_64/repodata/

If you actually do want to create the metadata on the build server, this is what I use for Utopia:

appstream-builder			\
	--api-version=0.8		\
	--origin=utopia			\
	--cache-dir=/tmp/asb-cache	\
	--enable-hidpi			\
	--max-threads=4			\
	--min-icon-size=48		\
	--output-dir=/tmp/asb-md	\
	--packages-dir=x86_64/		\
	--temp-dir=/tmp/asb-icons	\
	--screenshot-uri=http://people.freedesktop.org/~hughsient/fedora/21/screenshots/

For Fedora, I’m going to suggest getting the data files from alt.fedoraproject.org during compose. It’s not ideal as it still needs a separate server to build them on (currently sitting in the corner of my office) but gets us a step closer to what we want. Comments, as always, welcome.

December 12, 2014

OpenRaster and OpenDocument

OpenRaster is a file format for layered images. The OpenRaster specification is small and relatively easy to understand: essentially each layer is represented by a PNG image, other information is written in XML, and it is all contained in a Zip archive. OpenRaster is inspired by OpenDocument.
OpenDocument is a group of different file formats, including word processing, spreadsheets, and vector drawings. The specification is huge and continues to grow. It cleverly reuses many existing standards, avoiding repeating old mistakes, and building on existing knowledge.

OpenRaster can and should reuse more from OpenDocument.



It is easy to say, but putting it into practice is harder. OpenDocument is a huge standard, so where to begin? I am not even talking about OpenDocument Graphics (.odg) specifically, but more generally than that. It is best to show it with an example. So I created an example OpenRaster image with some fractal designs. You can unzip this file and see that, like a standard OpenRaster file, it contains:


fractal.ora  
 ├ mimetype
 ├ stack.xml
 ├ data/
 │  ├ layer0.png
 │  ├ layer1.png
 │  ├ layer2.png
 │  ├ layer3.png
 │  ├ layer4.png
 │  └ layer5.png
 ├ Thumbnails/
 │  └ thumbnail.png
 └ mergedimage.png

It also, unusually, contains two other files: manifest.xml and content.xml. Despite the fact that OpenDocument is a huge standard, the minimum requirements for a valid OpenDocument file come down to just a few files. The manifest is a list of all the files contained in the archive, and content.xml is the main body of the file, and does some of the things that stack.xml does in OpenRaster (for the purposes of this example; it does many other things too). The result of these two extra files, a few kilobytes of extra XML, is that the image is both OpenRaster AND OpenDocument "compatible" too. Admittedly it is an extremely tiny subset of OpenDocument, but it allows a small intersection between the two formats. You can test it for yourself: rename the file from .ora to .odg and LibreOffice can open the image.

To better demonstrate the point, I wanted to "show it with code!" I decided to modify Pinta (a Paint program written in GTK and C#) and my changes are on GitHub. The relevant file is Pinta/Pinta.Core/ImageFormats/OraFormat.cs which is the OpenRaster importer and exporter.

This is a proof of concept, it is limited and not useful to ordinary users. The point is only to show that OpenRaster could borrow more from OpenDocument. It is a small bit of compatibility that is not important by itself but being part of the larger group could be useful.

December 10, 2014

Not exponential after all

We're saved! From the embarrassing slogan "Live exponentially", that is.

Last night the Los Alamos city council voted to bow to public opinion and reconsider the contract to spend $50,000 on a logo and brand strategy based around the slogan "Live Exponentially." Though nearly all the councilors (besides Pete Sheehey) said they still liked the slogan, and made it clear that the slogan isn't for residents but for people in distant states who might consider visiting as tourists, they now felt that basing a campaign around a theme nearly all of the residents revile was not the best idea.

There were quite a few public comments (mine included); everyone was civil and sensible and stuck well under the recommended 3-minute time limit.

Instead, the plan is to go ahead with the contract, but ask the ad agency (Atlas Services) to choose two of the alternate straplines from the initial list of eight that North Star Research had originally provided.

Wait -- eight options? How come none of the previous press or the previous meeting mentioned that there were options? Even in the 364 page Agenda Packets PDF provided for this meeting, there was no hint of that report or of any alternate straplines.

But when they displayed the list of eight on the board, it became a little clearer why they didn't want to make the report public: they were embarrassed to have paid for work of this quality. Check out the list:

  • Where Everything is Elevated
  • High Intelligence in the High Desert
  • Think Bigger. Live Brighter.
  • Great. Beyond.
  • Live Exponentially
  • Absolutely Brilliant
  • Get to a Higher Plane
  • Never Stop Questioning What's Possible

I mean, really. Great Beyond? Are we all dead? High Intelligence in the High Desert? That'll certainly help with people who think this might be a bunch of snobbish intellectuals.

It was also revealed that at no point during the plan was there ever any sort of focus group study or other tests to see how anyone reacted to any of these slogans.

Anyway, after a complex series of motions and amendments and counter-motions and amendments and amendments to the amendments, they finally decided to ask Atlas to take the above list, minus "Live Exponentially"; add the slogan currently displayed on the rocks as you drive into town, "Where Discoveries are Made" (which came out of a community contest years ago and is very popular among residents); and ask Atlas to choose two from the list to make logos, plus one logo that has no slogan at all attached to it.

If we're lucky, Atlas will pick Discoveries as one of the slogans, or maybe even come up with something decent of their own.

The chicken ordinance discussion went well, too. They amended the ordinance to allow ten chickens (instead of six) and to try to allow people in duplexes and quads to keep chickens if there's enough space between the chickens and their neighbors. One commenter asked for the "non-commercial" clause to be struck because his kids sell eggs from a stand, like lemonade, which sounded like a very reasonable request (nobody's going to run a large commercial egg ranch with ten chickens); but it turned out there's a state law requiring permits and inspections to sell eggs.

So, folks can have chickens, and we won't have to live exponentially. I'm sure everyone's breathing a little more easily now.

Revamp of Volumetric Shell

Hi
These days I have to go back to Volumetric Shell (the non-intersecting extrusion/offset tool), and along with many tweaks I've improved the core algorithm, so now non-manifold cases should never happen; border quality is improved too.

These are some random dev screenshots I would like to share, because I love to watch other devs' random screens! LOL



December 08, 2014

A look at new developer features

As the development window for GNOME 3.16 advances, I've been adding a few new developer features, selfishly, so I could use them in my own programs.

Connectivity support for applications

Picking up from where Dan Winship left off, we've merged support for applications to detect network availability, especially the "connected to a network but not to the Internet" case.

In glib/gio now, watch the value of the "connectivity" property in GNetworkMonitor.
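
For example, a small sketch along these lines (assuming a GLib/gio new enough to have the connectivity API; the file name and build command are just examples) reads the connectivity state and re-reads it whenever the network changes:

// Build with something like:
//   g++ connectivity.cpp $(pkg-config --cflags --libs gio-2.0)
#include <gio/gio.h>
#include <cstdio>

// Called whenever the network situation changes; re-read the connectivity
// to distinguish "on a LAN" from "actually reaches the Internet".
static void on_network_changed(GNetworkMonitor *monitor, gboolean available, gpointer)
{
  GNetworkConnectivity c = g_network_monitor_get_connectivity(monitor);
  const char *desc =
    c == G_NETWORK_CONNECTIVITY_FULL    ? "full Internet access" :
    c == G_NETWORK_CONNECTIVITY_PORTAL  ? "behind a captive portal" :
    c == G_NETWORK_CONNECTIVITY_LIMITED ? "connected, but no full Internet" :
                                          "local network only";
  std::printf("available=%d, connectivity: %s\n", available, desc);
}

int main()
{
  GNetworkMonitor *monitor = g_network_monitor_get_default();
  g_signal_connect(monitor, "network-changed",
                   G_CALLBACK(on_network_changed), NULL);

  GMainLoop *loop = g_main_loop_new(NULL, FALSE);
  g_main_loop_run(loop);  // wait for change notifications
  return 0;
}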

Grilo automatic network awareness

This glib/gio feature allows us to show/hide Grilo sources from applications' view if they require Internet and LAN access to work. This should be landing very soon, once we've made the new feature optional based on the presence of the new GLib.

Totem

And finally, this means we'll soon be able to show a nice placeholder when no network connection is available, and there are no channels left.

Grilo Lua resources support

A long-standing request, GResources support has landed for Grilo Lua plugins. When a script is loaded, we'll look for a separate GResource file with ".gresource" as the suffix, and automatically load it. This means you can use a local icon for sources with the URL "resource:///org/gnome/grilo/foo.png". Your favourite Lua sources will soon have icons!

Grilo Opensubtitles plugin

The developers affected by this new feature may be a group of one, but if the group is ever to expand, it's the right place to do it. This new Grilo plugin will fetch the list of available text subtitles for specific videos, given their "hashes", which are now exported by Tracker.

GDK-Pixbuf enhancements

I can point you to the NEWS file for the latest version, but the main gains are that GIF animations won't eat all your memory, there is DPI metadata support in the JPEG, PNG and TIFF formats, and, for image viewers, you can tell whether a TIFF file is multi-page so you can open it in a more capable viewer.

Batched inserts, and better filters in GOM

Does what it says on the tin. This is useful for populating the database more quickly than through piecemeal inserts; it also means you don't need to chain inserts when inserting multiple items.

Mathieu also worked on fixing the priority of filters when building complex queries, as well as supporting more than 2 items in a filter ("foo OR bar OR baz" for example).

My Letter to the Editor: Make Your Voice Heard On 'Live Exponentially'

More on the Los Alamos "Live Exponentially" slogan saga: There's been a flurry of letters, all opposed to the proposed slogan, in the Los Alamos Daily Post these last few weeks.

And now the issue is back on the council agenda; apparently they're willing to reconsider the October vote to spend another $50,000 on the slogan.

But considering that only two people showed up to that October meeting, I wrote a letter to the Post urging people to speak before the council: Letter to the Editor: Attend Tuesday's Council Meeting To Make Your Voice Heard On 'Live Exponentially'.

I'll be there. I've never actually spoken at a council meeting before, but hey, confidence in public speaking situations is what Toastmasters is all about, right?

(Even though it means I'll have to miss an interesting sounding talk on bats that conflicts with the council meeting. Darn it!)

A few followup details that I had no easy way to put into the Post letter:

The page with the links to Council meeting agendas and packets is here: Los Alamos County Calendar.

There, you can get the short Agenda for Tuesday's meeting, or the full 364 page Agenda Packets PDF.

The branding section covers pages 93 - 287. But the graphics the council apparently found so compelling, which swayed several of them from initially not liking the slogan to deciding to spend a quarter million dollars on it, are in the final presentation from the marketing company, starting on page 221 of the PDF.

In particular, a series of images like this one, with the snappy slogan:

Breathtaking raised to the power of you
LIVE EXPONENTIALLY

That's right: the advertising graphics that were so compelling they swayed most of the council are even dumber than the slogan by itself. Love the superscript on the you that makes it into an exponent. Get it ... exponentially? Oh, now it all makes sense!

There's also a sadly funny "Written Concept" section just before the graphics (pages 242- in the PDF) where they bend over backward to work in scientific-sounding words, in bold each time.

But there you go. Hopefully some of those Post letter writers will come to the meeting and let the council know what they think.

The council will also be discussing the much debated proposed chicken ordinance; that discussion runs from page 57 to 92 of the PDF. It's a non-issue for Dave and me since we're in a rural zone that already allows chickens, but I hope they vote to allow them everywhere.

December 07, 2014

How to code a nice user-guided foreground extraction algorithm? (Addendum)

After writing my last article on the Easy (user-guided) foreground extraction algorithm, I realized that you could maybe think I was exaggerating when arguing that the whole algorithm can be re-coded very quickly from scratch. After all, I’ve just illustrated how things work by using G’MIC command lines, but there are already a lot of image processing algorithms implemented in G’MIC, so it is somehow a biased demonstration. So let me just give you the corresponding C++ code for the algorithm. Here again, you may find I cheat a little bit, because I use some functions of a C++ image processing library (CImg) that actually does most of the hard implementation work for me.

But:

  1. You can use this library too in your own code. The CImg Library I use here works on multiple platforms and has a very permissive license, so you probably don’t have to re-code the key image processing algorithms by yourself. And even if you want to do so, you can still look at the source code of CImg. It is quite clearly organized and function codes are easy to find (the whole library is defined in a single header file CImg.h).
  2. I never said the whole algorithm could be done in few lines, only that it was easy to implement :)

So, dear C++ programmer, here is the simple prototype you need to make the foreground extraction algorithm work:

#include "CImg.h"
using namespace cimg_library;

int main() {

  // Load input color image.
  const CImg<float> image("image.png");

  // Load input label image.
  CImg<float> labels("labels.png");
  labels.resize(image.width(),image.height(),1,1,0);  // Be sure labels has the correct size.

  // Compute gradients.
  const float sigma = 0.002f*cimg::max(image.width(),image.height());
  const CImg<float> blurred = image.get_blur(sigma);
  CImgList<float> gradient = blurred.get_gradient("xy");
    // gradient[0] and gradient[1] are two CImg images which contain
    // respectively the X and Y derivatives of the blurred RGB image.

  // Compute the potential map P.
  CImg<float> P(labels);
  cimg_forXY(P,x,y)
    P(x,y) = 1/(1 +
                cimg::sqr(gradient(0,x,y,0)) + // Rx^2
                cimg::sqr(gradient(0,x,y,1)) + // Gx^2
                cimg::sqr(gradient(0,x,y,2)) + // Bx^2
                cimg::sqr(gradient(1,x,y,0)) + // Ry^2
                cimg::sqr(gradient(1,x,y,1)) + // Gy^2
                cimg::sqr(gradient(1,x,y,2))); // By^2

  // Run the label propagation algorithm.
  labels.watershed(P,true);

  // Display the result and exit.
  (image,P,labels).display("Image - Potential - Labels");

  return 0;
}

To compile it, using g++ on Linux for instance, you have to type something like this in a shell (I assume you have all the necessary headers and libraries installed):

$ g++ -o foo foo.cpp -Dcimg_use_png -lX11 -lpthread -lpng -lz

Now, execute it:

$ ./foo

and you get this kind of result:

cimg_ef

Once you have the resulting image of propagated labels, it is up to you to use it the way you want: split the original image into foreground/background layers, or keep it as a mask associated with the color image.

The point is that even if you add the code of the important CImg methods I’ve used here, i.e. CImg<T>::get_blur(), CImg<T>::get_gradient() and CImg<T>::watershed(), you probably won’t exceed about 500 lines of C++ code. Yes, instead of a single line of G’MIC code. Now you see why I’m also a big fan of coding image processing stuff directly in G’MIC instead of C++ ;).

A last remark before ending this addendum:

Unfortunately, the label propagation algorithm itself is hardly parallelizable. It is based on a priority queue and basically, the choice of a new label to set depends on how the latest label has been set. This is a sequential algorithm by nature. So, when processing large images, a good idea is to do the computations and preview the foreground extraction result only on a smaller preview of the original image. Then compute the full-res mask only once, after all key points have been placed by the user.
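
For instance, reusing the CImg calls from the program above, a small helper along these lines (a sketch only; the 25% factor and interpolation modes are just reasonable choices, not the only possible ones) could compute the interactive preview at reduced resolution and scale the resulting mask back up for display:

#include "CImg.h"
using namespace cimg_library;

// Run the label propagation on a downscaled copy for a fast interactive
// preview, then bring the label mask back to full resolution for display.
CImg<float> preview_labels(const CImg<float>& image,
                           const CImg<float>& labels,
                           const CImg<float>& P) {
  // 25% of the original size: linear interpolation for the potential map,
  // nearest-neighbor for the labels so the label values stay exact.
  CImg<float> small_P      = P.get_resize(-25,-25,-100,-100,3);
  CImg<float> small_labels = labels.get_resize(-25,-25,-100,-100,1);

  small_labels.watershed(small_P,true);  // fast enough for interaction

  // Upscale (nearest-neighbor) back to the original resolution.
  return small_labels.get_resize(image.width(),image.height(),1,1,1);
}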

Well that’s it, happy coding!

December 06, 2014

How to code a nice user-guided foreground extraction algorithm?


Principle of the algorithm:

About three months ago, I added a new filter to the G’MIC plug-in for GIMP which lets you interactively extract a foreground object present in a color image from its background (considering a single-layered bitmap image as the input). But instead of doing this the “usual” way, e.g. by letting the user crop the object manually with any kind of selection tool, this filter relies on a semi-automatic (user-guided) algorithm to do the job. Basically, the only thing the user needs to do is place key points which tell about the nature of the underlying pixels: either they belong to the foreground (symbolized by green dots) or to the background (red dots). The algorithm then tries to propagate those labels all over the image, so that every image pixel gets a binary label (either foreground or background). The needed interaction is quite limited, and the user doesn’t need to be an image processing guru to control what they are doing: left/right mouse buttons to add a new foreground/background key point (or move an existing one), middle button and mouse wheel to zoom in/out and pan. Nothing really complicated, easy mode.

gmic_extract

Fig.1. Principle of the user-guided foreground extraction algorithm: the user puts some foreground/background key points over a color image (left), and the algorithm propagates those labels for all pixels, then splits the initial image into two foreground/background layers (middle and right).

Of course, the algorithm which performs the propagation of these binary labels is quite dumb (as most image processing algorithms are, actually), despite the fact that it tries to follow the contours detected in the image as much as possible. So, sometimes it just assigns the wrong label to a pixel. But from the user’s point of view this is not so annoying, because it is really easy to locally correct the algorithm’s estimation by putting additional key points in the wrongly labelled regions and running a new iteration of the algorithm.

Here is a quick video that shows how foreground extraction can be done when using this algorithm. Basically we start by placing one or two key points, run the algorithm (i.e. press the Space bar), then correct the erroneous areas by adding or moving key points, run the algorithm again, etc., until the contours of the desired object to extract are precise enough. As the label propagation algorithm is reasonably fast (hence dumb), the corresponding workflow runs smoothly.

At this stage, two observations have to be made:

1. As a user, you can easily experiment with this manipulation in GIMP, if you have the G’MIC plug-in installed (and if this is not the case, do not wait one second longer and go install it! ;) ). This foreground extraction filter is located in Contours / Extract foreground [interactive]. When clicking on the Apply button, it opens a new interactive window where you can start adding your foreground/background key points. Press Space to update the extracted mask (i.e. run a new iteration of the algorithm) until it fits your requirements, then quit the window and your foreground (and optionally background) selection will be transferred back to GIMP. I must say I have been quite surprised by the good feedback I got for this particular filter. I’m aware it can probably be a time-saver in some cases, but I can’t get the overall simplicity of the underlying algorithm out of my head.

gmic_extract_gimp

Fig.2. Location of the “Extract foreground” filter in the G’MIC plug-in for GIMP.

2. As a developer, you need to convince yourself that this label propagation algorithm is trivial to code. Dear reader, if by chance you are a developer of an open-source software for image processing or photo retouching, and you think this kind of foreground extraction tool could be a cool feature to have, then be aware that it can be re-coded from scratch in a very short amount of time. I’m not lying: the hardest part is actually coding the GUI. I illustrate this fact in what follows, and give some details about how this propagation algorithm is currently implemented in G’MIC.


Details of the implementation:

Now I assume that the hardest work has already been done: the design of the GUI that lets the user place foreground/background key points. We then have two known images as the input of our algorithm.

framed_colander

Fig.3.a. The input color image (who’s that guy ?)

framed_colander_labels

Fig.3.b. An image of labels, containing the FG/BG key points

At this stage, the image of user-defined labels (in Fig.3.b.) contains pixels whose value can only be 0 (unlabeled pixels = black), 1 (labeled as background = gray) or 2 (labeled as foreground = white). The goal of the label propagation algorithm is then to set a value of 1 or 2 to each pixel that currently has a label of 0. This label propagation is performed by a so-called watershed algorithm, which is in fact nothing more than the usual Dijkstra’s algorithm applied on a regular grid (the support of the image) instead of a generic graph. The toughest parts of the label propagation algorithm are:

  1. Managing a priority queue (look at the pseudo-code on the Wikipedia page of Dijkstra’s algorithm to see what I mean). This is a very classical algorithmic structure, so nothing terrible for a programmer (a minimal sketch is given just after this list).
  2. Defining a scalar field P of potentials (or priorities) to drive the label propagation. We have to define a potential/priority value for each image pixel, which tells whether the current pixel should be labeled in priority or not (hence the need for a priority queue). So P has the same size as the input image, but has only one channel (see it as a matrix of floating-point values if you prefer).
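
To make point 1 more concrete, here is a minimal, self-contained C++ sketch of such a priority-driven label propagation. It only illustrates the principle and is not G’MIC’s actual watershed code (the 4-neighbor flooding and the flat array layout are simplifications of my own):

#include <queue>
#include <tuple>
#include <vector>

// Propagate labels (0 = unknown, 1 = background, 2 = foreground) over a
// width x height grid, flooding high-priority (flat) pixels first.
void propagate_labels(std::vector<int>& labels, const std::vector<float>& P,
                      const int width, const int height) {
  typedef std::tuple<float,int,int> Item;    // (priority, x, y)
  std::priority_queue<Item> queue;           // max-priority queue

  // Seed the queue with the user-defined key points.
  for (int y = 0; y<height; ++y)
    for (int x = 0; x<width; ++x)
      if (labels[x + y*width]) queue.push(Item(P[x + y*width],x,y));

  const int dx[] = { -1,1,0,0 }, dy[] = { 0,0,-1,1 };
  while (!queue.empty()) {
    const Item item = queue.top(); queue.pop();
    const int x = std::get<1>(item), y = std::get<2>(item);
    const int label = labels[x + y*width];
    for (int k = 0; k<4; ++k) {              // look at the 4 neighbors
      const int nx = x + dx[k], ny = y + dy[k];
      if (nx<0 || nx>=width || ny<0 || ny>=height) continue;
      int &nlabel = labels[nx + ny*width];
      if (!nlabel) {                         // unlabeled neighbor?
        nlabel = label;                      // it takes the label of the pixel that reached it first
        queue.push(Item(P[nx + ny*width],nx,ny));
      }
    }
  }
}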

Suppose that you tell the propagation algorithm that all pixels have the same priority (i.e. you feed it with a constant potential image P(x,y)=1). Then your labels will propagate such that each reconstructed label takes the value of its nearest known (user-defined) label. This does not take the image data into account, so there is no chance of getting a satisfactory result. Instead, the key idea is to define the potential image P such that it depends on the contour variations of the color image you want to segment. More precisely, you want pixels with low variations to have a high priority, while pixels on image structures and contours should be labeled last. In G’MIC, I’ve personally defined the potential image P from the input RGB image with the following (heuristic) formula:

P(x,y) = 1 / ( 1 + ||∇R_σ(x,y)||² + ||∇G_σ(x,y)||² + ||∇B_σ(x,y)||² )

where

∇I_σ(x,y) = ( ∂I_σ/∂x(x,y) , ∂I_σ/∂y(x,y) )

is the usual definition of the image gradient, here applied to each image channel I = R, G and B appearing in the potential map P. To be more precise, we estimate those gradients from a slightly blurred version of the input image (hence the σ added to R, G and B, which stands for the standard deviation of the Gaussian blur applied to the RGB color image). Defined like this, the values of the potential map P are low for pixels on image contours, and high (with a maximum of 1) for pixels located in flat regions.

If you have the command-line interface gmic of G’MIC installed on your machine, you can easily compute and visualize such a propagation map P for your own image. Just type this simple command line in a shell:

$ gmic image.jpg -blur 0.2% -gradient_norm -fill '1/(1+i^2)'

For our sample image above, the priority map P looks like this (white corresponds to the value 1, and black to something close to 0):

framed_colander_potential

Fig.4. Estimated potential/priority map P.

Now, you have all the ingredients to extract your foreground. Run the label propagation algorithm on your image of user-defined labels, with the priority map P estimated from the input color image. It will output the desired image of propagated labels:

framed_colander

Fig.5.a. Input color image.

framed_colander_mask

Fig.5.b. Result of the labels propagation algorithm.

And you won’t believe it, but you can actually do all these things in a single command line with gmic:

$ gmic image.jpg labels.png --blur[0] 0.2% -gradient_norm[-1] -fill[-1] '1/(1+i^2)' -watershed[1] [2] -reverse[-2,-1]

 

In Fig.5.b., I let the foreground/background frontier pixels keep their value 0, to better show the estimated background (in grey) and foreground (in white). Those pixels are of course also labeled in the final version of the algorithm. And once you have computed this image of propagated labels, the extraction work is almost done. Indeed, this label image is a binary mask that you can use to easily split the input color image into two foreground/background layers with alpha channels:

framed_colander_foreground

Fig.6.a. Estimated foreground layer, after label propagation.

framed_colander_background

Fig.6.b. Estimated background layer, after label propagation.
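
For reference, using CImg as in the C++ addendum, a minimal sketch of this last splitting step could look as follows (the output file names and the 0/255 alpha convention are arbitrary choices for illustration):

  // Split 'image' into two RGBA layers, according to the propagated 'labels'
  // (2 = foreground, 1 = background). Sketch only.
  CImg<unsigned char> foreground(image.width(),image.height(),1,4,0),
                      background(image.width(),image.height(),1,4,0);
  cimg_forXY(image,x,y) {
    const bool is_fg = (labels(x,y)==2);
    cimg_forC(image,c) foreground(x,y,0,c) = background(x,y,0,c) = (unsigned char)image(x,y,0,c);
    foreground(x,y,0,3) = is_fg?255:0;  // alpha channel of the foreground layer
    background(x,y,0,3) = is_fg?0:255;  // alpha channel of the background layer
  }
  foreground.save("foreground.png");
  background.save("background.png");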

So, now you have a better idea of how all this works… Isn’t that a completely stupid algorithm? Apologies if I broke the magic of the thing! :D

Of course, you can run the full G’MIC foreground extraction filter, not only from the plug-in, but also from the command line, by typing:

$ gmic image.jpg -x_segment , -output output.png

The input file should be a regular RGB image. This command generates an output file in RGBA mode, where only a binary alpha channel has been added to the image (the RGB colors of each pixel remain unchanged).

And now ?

You know the secret of this nice (but simple) foreground extraction algorithm. So you have two options:

  1. If you are not a developer, harass the developers of your favorite graphics software to include this “so-simple-to-code” algorithm in it!
  2. If you are a developer interested in the algorithm, just try to code a prototype of it. You will be surprised by how quickly it can actually be done!

In any case, please don’t hate me if neither of these suggestions works ;). See you in a future post!

See also the Addendum I wrote, which describes how this can be implemented in C++.

December 05, 2014

David Tschumperlé and OpenSource.graphics

Some of you may be familiar with G'MIC, the rather extensive image processing language created by David Tschumperlé that has a very popular plug-in for GIMP.

If you're a fan, here's a nice little treat for you. David has started a blog about image processing with open source software:

http://opensource.graphics




If you'd like a front seat to some of the more technically interesting things going on behind the scenes at G'MIC, this would be a good blog to follow I think. He's already come out of the gate with a neat 3D colorcube investigation of some images (seen above, Mairi).

December 04, 2014

Visualizing the 3D point cloud of RGB colors


Preliminary note:

This is the very first post of my blog on Open Source Graphics. So, dear reader, welcome! I hope you will find some useful (or at least distracting) articles here. Please feel free to comment and share your advice and experience on graphics creation, photo retouching and image processing.


Understanding how the image colors are distributed:

According to Wikipedia, the histogram of an image is “a graphical representation of the tonal distribution in a digital image. It plots the number of pixels for each tonal value. By looking at the histogram for a specific image a viewer will be able to judge the entire tonal distribution at a glance.”

This is a very common way to better understand the global statistics of the pixel values in an image. Often, when we look at an image histogram (a feature that most image editors and photo retouching programs provide), we see something like this:

histogram_luminance1

Fig.1. Lena image (left), and histogram of the luminances (right).

Here, the histogram (on the right) shows the distribution of the image luminances, for each possible value of the luminance from 0 to 255 (assuming an 8-bit per channel color image as input). Sometimes, the displayed histogram instead contains the distribution of the pixel values for the 3 image channels R, G and B simultaneously, as in the figure below:

histogram_colors

Fig.2. Lena image (left), and its RGB histogram (right).

For instance, this RGB histogram clearly shows that the image contains a lot of pixels with high values of Red, which is in fact not so surprising for this particular image of Lena. But what we still miss is the relationship between the three color components: do the pixels with a lot of Red cover a wide range of different colors, or is it just the same reddish color repeated all over the image? Look at the image below: it has exactly the same RGB histogram as the image above. I’ve just mirrored the Green and Blue channels along the x and y-axes respectively. But the perception of the image colors is in fact very different: we see a lot more greens in it.
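
If you want to check this for yourself, here is a minimal C++/CImg sketch of that manipulation (the input file name is just a placeholder); the per-channel histograms of both displayed images are identical, while the perceived colors clearly are not:

#include "CImg.h"
using namespace cimg_library;

int main() {
  const CImg<unsigned char> image("lena.png");
  CImg<unsigned char> modified(image);
  modified.get_shared_channel(1).mirror('x');  // mirror the Green channel along the x-axis
  modified.get_shared_channel(2).mirror('y');  // mirror the Blue channel along the y-axis
  (image,modified).display("Original vs. same histogram, different colors");
  return 0;
}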

histogram_colors2

Fig.3. Partially mirrored Lena image (left), and its RGB histogram.

So clearly, it could be interesting to get more clues about the correlation between the different color components of the pixels in an image. This requires visualizing something different: the 3D distribution (or histogram) of the colors in RGB space. Each pixel (x,y) of a color image can indeed be viewed as a point with 3D coordinates (R(x,y),G(x,y),B(x,y)) located inside a cube of size 256x256x256, in the 3D RGB color space. The whole set of image pixels then forms a 3D point cloud in this space. To visualize this point cloud, each displayed point takes a color that can be either its actual RGB value (to get the 3D color distribution), or a color expressing the number of occurrences of this RGB color in the initial image (to get the 3D color histogram). For the Lena image, it looks like this; a wireframe representation of the RGB cube boundaries has also been added to better locate the image colors:

coldis_lena

Fig.4. Lena image (left), colors distribution in 3D RGB space (middle) and colors histogram in 3D RGB space (right).
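
As a side note for programmers, the pixel-to-point mapping itself is trivial; a minimal C++/CImg sketch could look like this (the flat Nx3 layout is just one possible choice, and feeding the points to an actual 3D viewer is left out):

#include "CImg.h"
using namespace cimg_library;

int main() {
  const CImg<unsigned char> image("lena.png");
  // One 3D point (and one RGB color) per pixel of the input image.
  CImg<float> points(image.width()*image.height(),3);
  CImg<unsigned char> colors(image.width()*image.height(),3);
  cimg_forXY(image,x,y) {
    const unsigned int i = x + y*image.width();
    points(i,0) = colors(i,0) = image(x,y,0,0);  // X coordinate = Red value
    points(i,1) = colors(i,1) = image(x,y,0,1);  // Y coordinate = Green value
    points(i,2) = colors(i,2) = image(x,y,0,2);  // Z coordinate = Blue value
  }
  // 'points' and 'colors' can now be passed to any 3D point cloud viewer.
  return 0;
}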

With this way of representing colors distributions, we get a better feeling about:

  1. The global variety of the colors in an image.
  2. Locally, the amount of dispersion of color tones and shades around a given color.

I personally prefer the 3D color distribution view (Fig.4, middle), as it is easier to see which color corresponds to which point, even if we lose the information about the number of occurrences of each color (which is hardly visible in the 3D color histogram anyway). Now, if we compare the 3D color distributions of the Lena image and its partially-mirrored version (remember, they have the same RGB histogram), we can see a big difference!

coldis_lenaflip

Fig.5. Comparisons of the colors distributions in the 3D RGB space, for the Lena image, and its partially-mirrored version.

Let me show you some other examples, to illustrate that visualizing 3D colors distributions is a nice additional way to get more global information about how the colors of an image are arranged:

coldis_vegas

Fig.6. Las Vegas image (left) and its colors distribution in the 3D RGB space (right).

The image above was shot by Patrick David in Las Vegas. We can see in the 3D color distribution view which colors are present in the input image (which is already quite colorful), and which are not (purples are almost non-existent). Below is another image (a portrait of Mairi) shot by Patrick:

coldis_mary

Fig.7. Portrait of Mairi (left) and its colors distribution in the 3D RGB space (right).

Here, the 3D colors distribution reflects the two dominant colors of the image (greys and skin/hair colors) and the continuity of their shading into the black. Note that this is not something we can see from the RGB histogram of the same input image, so this kind of visualization really says something new about the global statistics of the image colors.

mary_histo

Fig.8. Portrait of Mairi (left) and its RGB histogram (right).

Let me show you some more extreme cases:

coldis_bottles

Fig.9. Bottles image (left) and its colors distribution in the 3D RGB space (right).

We can clearly see three main color branches, corresponding to each of the three main colors of the bottles (blue, pink and yellow-green), with all the shades (to white) the corresponding pixels take. Now, let us take the same image which has been desaturated:

coldis_bottlesbw

Fig.10. Desaturated bottles image (left) and its colors distribution in 3D RGB space (right).

No surprise: the only 3D color points you get belong to the diagonal of the RGB cube, i.e. the points where R = G = B. This last example shows the opposite case, with a (synthetic) image containing a lot of different colors:

coldis_plasma

Fig.11. Synthetic plasma image (left) and its colors distribution in 3D RGB space (right).

I think all these examples illustrate quite well that we can often learn a lot about the color dispersion of an image by looking at its 3D color distribution, in addition to the usual histograms. This is something I miss in many image editors and image retouching programs.

So, my next step will be to show how I’ve created those views, and how you can do the same with your own images. And of course, only with open-source imaging tools ;)  I chose to use the command-line interface of G’MIC for this particular task, because it is quite flexible for managing 3D vector objects (and also because I know it quite well!).


Generating 3D colors distribution views with G’MIC:

I assume you already have the command-line interface of G’MIC (which is called gmic) installed on your system. The commands I show below should be typed in a shell (personally, I use bash on a machine running Linux). The first thing to do after installing gmic is to make sure we have the latest update of the commands proposed in the G’MIC framework:

$ gmic -update

[gmic]-0./ Start G'MIC interpreter.
[gmic]-0./ Update commands from the latest definition file on the G'MIC server.
[gmic]-0./ End G'MIC interpreter.

Now, the main trick consists in using the specific G’MIC command -distribution3d, which converts an input color image into a 3D vector object corresponding to the color distribution we want to visualize. I will also add two things to improve it a little bit:

  1. If the image is too big, I will reduce it to a reasonable size, because the number of points in the generated 3D object is always equal to the number of pixels in the image, and we don’t want it to be unnecessarily large, for performance reasons.
  2. I will also merge the generated 3D pointcloud with the boundaries of the RGB color cube, in wireframe mode, as in the examples you’ve seen in the previous section.

The long (but comprehensive) command line below does the job:

$ gmic input_image.jpg -resize2dy "{min(h,512)}" -distribution3d -colorcube3d -primitives3d[-1] 1 -+3d

[gmic]-0./ Start G'MIC interpreter.
[gmic]-0./ Input file 'input_image.jpg' at position 0 (1 image 3283x4861x1x3).
[gmic]-1./ Resize 2d image [0] to 512 pixels along the y-axis, preserving 2d ratio.
[gmic]-1./ Get 3d color distribution of image [0].
[gmic]-1./ Input 3d RGB-color cube.
[gmic]-2./ Convert primitives of 3d object [1] to segments.
[gmic]-2./ Merge 3d objects [0,1].
[gmic]-1./ Display 3d object [0] = '[3d distribution]*' (177160 vertices, 177176 primitives).

It opens an interactive visualization window with the 3D colors distribution that you can rotate with your mouse:

gmic_view

Nice! But let me go one step further.

Now, I want to generate an animation (basically a video file) showing the 3D color distribution object rotating, right beside the input image. For this, I can use the command -snapshot3d inside a -do…-while loop in order to get all the frames of my animation. I won’t go into much technical detail because this post is not about the G’MIC programming language (for that, I refer the interested reader to these nicely written tutorial pages), but basically we can do this task easily by defining our own custom command for G’MIC. Just copy/paste the code snippet below into your $HOME/.gmic configuration file (or %APPDATA%/gmic for Windows users) in order to create a new command named -animate_color_distribution3d, which will be recognized by the G’MIC interpreter next time you invoke it:

animate_color_distribution3d :
    
  angle=0
  delta_angle=2

  # Create 3d object from input image.
  --local
    -resize2dy {min(h,512)} -distribution3d
    -colorcube3d -primitives3d[-1] 1
    -+3d -center3d -normalize3d -*3d 180
    -rotate3d 1,1,0,60
  -endl

  # Resize input image so that its height is exactly 300.
  -resize2dy[0] 300

  # Render animated frames (with height 300).
  -do
    --rotate3d[1] 0,1,0,$angle
    -snapshot3d[-1] 300,0,0,0,0,0 --a[0,-1] x
    -w[-1] -rm[-2]
    angle={$angle+$delta_angle}
  -while {$angle<360}
  -rm[0,1] -resize2dx 640

Once you’ve done that, the command is ready for use. It takes no arguments, so you can simply call it like this:

$ gmic input_image.jpg -animate_color_distribution3d -output output_video.avi

And after a while, your video file should be saved in your current path. That’s it !

Below, I show some examples which have been generated by this custom G’MIC command. The first picture is a beautiful (and colorful) drawing from David Revoy. I find its 3D color distribution particularly interesting.

The landscape picture below is also interesting, because of the almost bi-tonal nature of the image:

With the Joconde painting, we can also see that the number of effective color tones is quite reduced:

And if you apply the command -animate_color_distribution3d on a synthetic plasma image, as done below, you will get a really nice 3D pointcloud, just like the ones we can see in some sci-fi movies !


And now ?

Well, that’s it! Nothing more to say for this post :). Surprisingly, it is already much longer than I expected when I started writing it. Next time you see a color image, maybe you could try to visualize its color distribution in the 3D RGB space along with the classical histograms. Maybe it will help you better understand the dynamics of the image colors, who knows? At least we know that this is something we can do easily with FLOSS imaging tools. See you next time!

December 02, 2014

Ripping a whole CD on Linux

I recently discovered that my ancient stereo turntable didn't survive our move. So all those LPs I brought along, intending to rip to mp3 when I had more time, will never see bits.

So I need to buy new versions of some of that old music. In particular, I'd lately been wanting to listen to my old Flanders and Swann albums. Flanders and Swann were a terrific comedy music duo (think Tom Lehrer only less scientifically oriented) from the 1960s.

So I ordered a CD of The Complete Flanders & Swann, which contains all three of the albums I inherited from my parents. Woohoo! I ran a little script I have that rips a whole CD to a directory of separate MP3 songs, and I was all set.

Until I listened to it. It turns out that when the LP album was turned into a CD, they put the track breaks in the wrong place. These albums are recordings of live performances. Each song has a spoken intro, giving a little context for the song that follows. On the CD, each track starts with a song, and ends with the spoken intro for the next song. That's no problem if you always listen to whole albums in order. But I like to play individual tracks, or listen to music on random play. So this wasn't going to work at all.

I tried using audacity to copy the intro from the end of one track and paste it onto the beginning of another. That worked, but it was tedious and fiddly. A little research showed me a much better way.

First: Rip the whole CD

First I needed to rip the whole CD as one gigantic track. My script had been running cdparanoia tracknumber filename.wav. But it took some study of the cdparanoia manual before I finally found the way to rip a whole CD to one track: you can specify a range of tracks, starting at 0 and omitting the end track.

cdparanoia 0- outfile.wav

Use Audacity to split and save the tracks

Now what's the best way to split a recording into separate tracks? Fortunately the Audacity manual has a nice page on that very subject: Splitting a recording into separate tracks.

Mostly, the issue is setting labels -- with Tracks->Add Label at Selection or Tracks->Add Label at Playback Position. Use Ctrl-1 to zoom as much as you need to see where the short pauses are. Then listen to the audio, pausing or clicking and setting labels appropriately.

It's a bit fiddly. For instance, if you pause your listening to set a label, you might want to save the audacity project so you don't lose the label positions you've set so far. But you can't save unless you Stop the playback; and that loses the current playback position which you may not yet have set a label for. Even if you have set a label for it, you'll need to click to set the selection to the label you just made if you want to continue playing from where you left off. It all seems a little silly and unintuitive ... but after a few tries you'll find a routine that works for you.

When all your labels are set, then File->Export Multiple.... You will have to go through a bunch of dialogs involving metadata for each track; just hit return, since audacity ignores any metadata you type in and won't actually write it to the MP3 file. I have no idea why it always prompts for metadata then doesn't use it, but you can use a program like id3tool later to add proper metadata to the tracks.

So, no, the tools aren't perfect. On the other hand, I now have a nice set of Flanders and Swann tracks, and can listen to Misalliance, Ill Wind and The GNU Song complete with their proper introductions.

November 26, 2014

The 2015 Libre Calendar

So Jehan Pages contacted me a little while ago about participating in a project to produce a “Libre Calendar”. Once he described the idea, it was an easy choice to join up and help out!


Through his non-profit LILA in France, he has assembled 6 artists to produce works specifically for this calendar (Disclaimer: I'm one of the artists):


Aryeom Han


Henri Hebeisen


Gustavo Deveze


Brian Beck



The proceeds from the calendar will be split evenly between the artists, the LILA non-profit, and various F/OSS projects that the artists used (GIMP, Blender, Inkscape, etc...). The full list is on the site. (Second disclaimer: I'm deferring any of my proceeds to the projects).

This is a really nice way to donate a bit to the various projects and get a neat gift for it.

Head over to the site to see some sample images from the artists, and consider buying a calendar! Jehan is looking to meet a minimum order before moving forward (around 100 I believe).

New algorithm for bone distortion

As part of our recent developments we have changed the algorithm for the upcoming bone distortion feature. Check out the demonstration video!...

Yam-Apple Casserole

Yams. I love 'em. (Actually, technically I mean sweet potatoes, since what we call "yams" here in the US aren't actual yams, but the root from a South American plant, Ipomoea batatas, related to the morning glory. I'm not sure I've ever had an actual yam, a tuber from an African plant of the genus Dioscorea).

But what's up with the way people cook them? You take something that's inherently sweet and yummy -- and then you cover them with brown sugar and marshmallows and maple syrup and who knows what else. Do you sprinkle sugar on apples before you eat them?

Normally, I bake a yam for about an hour in the oven, or, if time is short (which it usually is), microwave it for about four and a half minutes, then finish up with 20-40 minutes in a toaster oven at 350°. The oven part seems to be necessary: it brings out the sweetness and the nice crumbly texture in a way that the microwave doesn't. You can read about some of the science behind this at this Serious Eats discussion of cooking sweet potatoes: it's because sweet potatoes have an odd enzyme, beta amylase, that breaks down carbohydrates into sugars, thus bringing out the vegetable's sweetness, but that enzyme only works in a limited temperature range, so if you heat up a sweet potato too fast the enzyme doesn't have time to work.

But Thanksgiving is coming up, and for a friend's dinner party, I wanted to make something a little more festive (and more easily parceled out) than whole baked yams.

A web search wasn't much help: nearly everything I found involved either brown sugar or syrup. The most interesting casserole recipes I saw fell into two categories: sweet and spicy yams with chile powder and cayenne pepper (and brown sugar), and yam-apple casserole (with brown sugar and lemon juice). As far as I can tell it has never occurred to anyone, before me, to try either of these without added sugar. So I bravely volunteered myself as test subject.

I was very pleased with the results. The combination of the tart apples, the sweet yams and the various spices made a lovely combination. And it's a lot healthier than the casseroles with all the sugary stuff piled on top.

Yam-Apple Casserole without added sugar

Ingredients:

  • Yams, as many as needed.
  • Apples: 1-2 apples per yam. Use a tart variety, like granny smith.
  • chile powder
  • sage
  • rosemary or thyme
  • cumin
  • nutmeg
  • ginger powder
  • salt
(Your choice whether to use all of these spices, just some, or different ones.)

Peel and dice yams and apples into bite-sized pieces, inch or half-inch cubes. (Peeling the yams is optional.)

Drizzle a little olive oil over the yam and apple pieces, then sprinkle spices. Your call as to which spices and how much. Toss it all together until the pieces are all evenly coated with oil and the spices look evenly distributed.

Lay out in a casserole dish or cake pan and bake at 350°F until the yam pieces are soft. This takes at least an hour, two if you made big pieces or layered the pieces thickly in the pan. The apples will mostly disintegrate into little mushy bits between the pieces of yam, but that's fine -- they're there for flavor, not consistency.

Note: After reading about beta-amylase and its temperature range, I had the bright idea that it would be even better to do this in a crockpot. Long cooking at low temps, right? Wrong! The result was terrible, almost completely tasteless. Stick to using the oven.

I'm going to try adding some parsnips, too, though parsnips seem to need to cook longer than sweet potatoes, so it might help to pre-cook the parsnips a few minutes in the microwave before tossing them in with the yams and apples.

November 25, 2014

Two weeks in Siberia with Morevna Project, open-source animation tools and anime

Translated by:

Media researcher Julia Velkova took the trouble to visit our small studio at Gorno-Altaysk and met the core team members. With this guest-post she is sharing her first impressions and revealing some “behind-the-scenes” of Morevna Project.

This post is about my almost two-week-long stay in the city of Gorno-Altaysk in Siberia, Russia. As part of my research on the production of open-source animation films – made with open-source tools and released as commons with sources – I have been trying to figure out the webs of connections between this place in Siberia, free software graphics communities, and Morevna Project. I was curious to see the context of producing an open-source animation film in a place like Gorno-Altaysk, and how this context matters. I also wanted to meet more of the people involved in the production of Morevna Project.

Before coming to Gorno-Altaysk I had been talking a lot to Konstantin Dmitriev online, and I had been following his Morevna anime film project and Synfig developments. I knew that the place he works from is somewhere far away in Russia, and that there were some other people involved. It was, however, very difficult to get any better idea of who these people were and what their roles were only through Konstantin's accounts – which were surely helpful, but not enough for me or my research.

So, in the same way as I went to Amsterdam and the Blender Institute earlier this year, I went to Gorno-Altaysk in the beginning of November to learn more.

Gorno-Altaysk is not Amsterdam. Until I actually tried to get there, I did not realize how far away it was. The trip took 3 days of travel. And, hey, we live in the 21st century and everything is fast – how was this possible? Well, I flew relatively quickly from Stockholm to Novosibirsk – the capital of Siberia – leaving on Monday and arriving on Tuesday morning. Konstantin met me there, and the next day in the evening we boarded the night train to a city called Biysk (Бийск). From there it took another hour and a half by bus to Gorno-Altaysk, on a scenic road built by prisoners deported to Siberia by the Soviet regime. We arrived on Thursday morning, at 8 am.

map-GA1

Frosty and sunny, and squeezed between mountains, the city looked cozier than industrial, humid and highly polluted Novosibirsk. High buildings co-exist with traditional Russian log houses; remnants of the Soviet past sit side by side with a 4D cinema and a mall. The frosty air was saturated with the smell of winter – smoke from the chimneys and stoves running on coal.

log-house lenin 4d

A barking dog met us in the log house where Konstantin lives and where we went for breakfast. Konstantin works mostly from home. Entering his workplace, the first thing I noticed was this hand-made stereoscopic screen.

Stereoscopic screen (without polarization glass)

Stereoscopic screen (without polarization glass)

The screen appeared to be fully functional and we got to test it by watching a 20-min stereoscopic 2D/3D animation short made with Synfig, Blender and Krita, which Konstantin has been directing and working on in the past few months for a client from Novosibirsk.

A piece of 2D/3D animated film made for the client:

The project has been an arena for testing and developing open-source-software-based stereoscopic animation pipelines combining Blender, Synfig, and Krita. It has also helped develop new tools to simplify and speed up the animation process, for example RenderChan. Not least, it is currently paying the bills for the three people involved in the project – Konstantin Dmitriev, Nikolai Mamashev and Anastasia Majzhegisheva (Nastya).

RenderChan mascot by Nastya Majzhegisheva

RenderChan mascot by Nastya Majzhegisheva

The image of the screen sharing a desk with a Lenovo Thinkpad x220T and a workstation with an 8-core CPU, all running Linux, together with the view of Konstantin cutting and moving sound and animation with a Wacom pen on his screen recalled in my mind the image of David Revoy’s techie sculpture made of Cintiq and Linux. I get reminded of the connections between tools, open-source software and graphics that integrate into people’s whole lifestyle.

Konstantin and his Lenovo tablet during the stay in Novosibirsk

Konstantin and his Lenovo tablet during the stay in Novosibirsk

I soon notice several things. Besides the stereoscopic animation for a client, Konstantin also works on a new website for Morevna Project; coordinates Synfig's development with developer Ivan Mahonin; and twice a week teaches free animation classes to teenagers on the premises of a small local extracurricular art school. The teaching is in fact shared with Nikolai Mamashev, the art director of the Morevna film demo. I start wondering how this all relates to Morevna Project and its production – the object of my initial interest and reason to come here. It also brings up an even bigger question – what exactly is Morevna Project now? I mean – after all, it completed its first goal to make a demo film in 2012, but since then no new animation has been produced. Instead, there have appeared fan artworks drawn by Nastya; a Synfig training course; and improvements to Synfig's code, primarily done by Ivan Mahonin, who has been working on and off on coding (depending on how the Synfig donations were developing).

I meet Nastya. She is 15, and she is local. In fact, everyone is local, and I am one of the very few foreigners and non-locals currently in town. Nastya tells me about her passion for drawing, animé, and falling in love with Krita: "It was magic – to draw with a pen on a tablet. And later, when I tried Krita, we became soulmates". This friendship has recently led her to move from Windows to Linux for the sake of stability and better functionality of her drawing tools, which she seems to use intensively. She names four different animation short projects in which she is involved as an artist, among them Morevna and 'Neighbour from Hell', a short on which she works as an artist with two more girls in the animation classes led by Konstantin and Nikolai.

Forest background – work in progress, Nastya Majzhegisheva

Forest background – work in progress, Nastya Majzhegisheva

Nastya drawing on the old studio Cintiq

Nastya drawing on the old studio Cintiq

Below are two scenes from ‘Neighbour from Hell’ – the short for which Nastya draws the tree background.

Artwork: Anastasia (Nastya) Majzhegisheva.
Animation: Tamara (Toma) Hudyakova, Anastasia Popova.

I get to visit the animation classes twice during my stay. They take place in an ad-hoc studio on the premises of the local art school Adamant. In a room that has to be reconfigured every time, and where any equipment of value is kept in a safe, the first thing I see is a first-generation Wacom Cintiq, which is Nastya's working place. 'Chinese animation studios sell off old equipment, so we managed to get it for $300', explains Konstantin. This is one of the two drawing tablets of this type that the students have, and Nastya's attempts to draw on one of them in Krita at high resolution quickly run up against the low amount of memory available on the connected workstation. At the moment the art school and Konstantin have no resources to fix this, and at the same time nobody in the area seems to understand the importance of helping out with improving things. The art school lacks Internet too – another underprioritised and underfunded thing. This is of course sad, considering that Konstantin's classes are the only animation school in town and in the whole nearby region. They are also probably the only ones in Siberia that teach purely open-source software based pipelines.

Adamant art school where the animation classes take place

Adamant art school where the animation classes take place

Gradually, six students, all teenagers, start arriving with their own laptops of all sorts of budget brands, most of them assembled and produced in Russia. Some also have drawing tablets – anything from a 2001 Wacom Graphire to relatively new Wacom Bamboo pads. I overhear the following conversation:

A student, Tamara, shows her new drawing – a horse with a rider.

Vika: Did you draw him in Krita?
T: No, in GIMP.
V: I try to draw in Photoshop but I find it very complicated.
T: Well, this is why I draw in Gimp. I did not manage either well with Photoshop.
V: Can I see some more of your work?

Tamara shows her more drawings explaining:

I did this in Photoshop, this in Gimp, this in MyPaint.

In class

In class

Many of the students use Krita, Gimp and MyPaint for drawing in various combinations depending on the tasks. The students animate in Synfig which helps Konstantin and Nikolay to test new functions and discover bugs. Here is a little preview of the different projects they currently work on:

Sample scene by Anna Erogova:
Artwork and animation by Anna Erogova (16 years old).
Made in Synfig and MyPaint.

Poet and Robber (sample scenes):
Artwork and animation by Igor Sidorov (13 years old).
Made in Synfig, MyPaint and Gimp.

Dolls and Rain (animation sample):
Artwork and animation by Vika Popova (16 years old).
Made in Synfig, and Krita.

Neighbour from Hell (sample scene 1):
Artwork: Anastasia (Nastya) Majzhegisheva (15 years old).
Animation: Tamara Hudyakova (19 years old).
Made in Synfig, Krita, Gimp and MyPaint.

Neighbour from Hell (sample scene 2):
Artwork: Anastasia (Nastya) Majzhegisheva (15 years old).
Animation: Anastasia Popova (19 years old).
FX: Anastasia Majzhegisheva.
Made in Synfig, Krita, Gimp and MyPaint.

At some moment I realize that everyone in the studio, including Konstantin and Nikolai, is a passionate animé fan. And while the start of this passion has been different for everyone, in the end they have all been attracted by the specificity and peculiarity of the genre. As Nikolai describes it, ‘It is very different. It is perceived very differently. It is like food. Imagine that you usually eat one thing, but one day you get to try a totally different food that you can not comprehend at all – Chinese, Japanese, something spicy, specific that you can not understand at all. Then you are – wow, what is this? It was like that with animé for me. I was very impressed.’

The students in class are so obsessed with anime that they draw it, breathe it, live it every minute of their lives. They tell me that it is their way to experience life and learn about life, and at the same time it is their life. They say: it is unconventional. It has psychology, and pedagogy. They compete to tell me stories of uncontrollable inspiration, which can come while writing a school exam, when they start drawing on the exam sheet which they cannot bring home. It is the animé passion that has brought them all to animation and to Konstantin's studio and classes.

Anime/open-source tools gang

Anime/open-source tools gang

This suddenly helps me connect the pieces and see better what Morevna Project is. It is a fruit born of this passion for animé, which has sprawled beyond mere consumption and pushed Konstantin into making something more. Morevna Project is the dream and the project of making a particular animé film, a project which fills many local people's lives with meaning. It is the dream of making a feature animation film which integrates the strong fandom of the animé genre with local Russian culture, through a script based on a folk tale many people in Russia know.

Inspired by the example set by the Blender Institute, Konstantin has been trying to establish a similar environment, but focused on 2D animation/anime film development. Open-source graphics tools – primarily Synfig, but also Blender, Krita, Gimp, MyPaint and Pencil – have therefore provided the logical solution for how to realize this idea in a reality where many people cannot afford to buy high-class drawing tablets, branded computers or expensive mobile phones. At the same time, open-source tools have given the freedom to adapt the tools, to develop them for the specifics of 2D-animation production pipelines, and the freedom for creative expression.

The animation classes transfer the knowledge of working with open-source graphics tools locally and help create some of the future contributors to the project. This knowledge, and the daily work with Synfig, drives the development of new features and expands the community of Synfig users. What I realized during my two-week stay in Gorno-Altaysk is that Morevna Project is a framework – it is a film project which acts as a driving force for creating an environment and pipelines for 2D open-source animation, and which has driven the substantial development of Synfig in recent years. It is also a channel that transforms consumption and fandom into a culture of making, and a place for experimenting with models of sharing in which tools, artwork, and knowledge get created. And similarly to the spirit in which things are done at the Blender Institute, what keeps ideas and projects developing is the wish to make things, and the fascination with the magic of animation – a will of such strength that it slowly pushes things through despite the (still) smaller scale and the limitations of the local and national context in which they are made.

During my stay I met many people, and had the opportunity to record many hours of interviews and personal histories about animé, animation and open-source tools. This has been an invaluable experience to understand better the spectrum of similarities and differences of the different environments and specifics of open-source based animation production, and the nature of the graphics communities wrapped around these projects. In conclusion I want to say a big ‘thank you!’ to Konstantin, Nikolay, Nastya Majzhegisheva, Igor Dmitriev, Ivan Mahonin, Toma Hudyakova, Nastya Popova, Vika Popova and Anna Erogova for the opportunity to meet you and get to know a little piece of your world.

I would also like to share here one of the interviews (in Russian) – with Morevna Project’s artist Nikolai Mamashev who tells about his daily work with animation, Morevna project, anime, open source software, Blender, and Synfig. Enjoy listening!

Click here for the audio file (OGG)

November 20, 2014

Unbound RGB with littleCMS slow

Over the last few days I played with lcms’ unbound mode. In unbound mode the CMM can convert colours with negative numbers. That allows one to use, for instance, the LMS colour space, a very basic colour space for the human visual system. Unbound RGB – linear gamma with sRGB primaries – has also circulated for a long time as the new one-covers-all colour space, a kind of replacement for ICC or WCS style colour management. There are some reservations about that statement, as linear RGB is most often understood as “no additional info needed”, which is not easy to build a flexible CMS upon. During the last days I hacked lcms to write the mpet tag in its device link profiles in order to work inside the Oyranos CMS. The multi processing elements tag type (mpet) contains the internal state of lcms’ transform as a rendering pipeline. This pipeline is able to do unbound colour transforms, provided no table-based elements are included. The tested device link contained single gamma values and matrices in its D2B0 mpet tag. The Oyranos image-display application rendered my LMS test pictures correctly, in contrast to the 16-bit integer version. However, the speed decreased by a factor of ~3 with lcms compared to the usual integer-math transforms. The most time-consuming part might be the pow() call in the equation. It is possible that GPU conversions are much faster, only I am not aware of an implementation of mpet transforms on the GPU.
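
For readers unfamiliar with the term, here is a tiny stand-alone C++ sketch of what an unbound transform boils down to: a floating-point matrix applied without any clamping, so negative or larger-than-one channel values pass straight through. The matrix coefficients below are placeholders for illustration only, not the actual RGB-to-LMS matrix:

#include <cstdio>

// Apply a 3x3 matrix to an RGB triple without clamping (sketch only).
static void unbound_transform(const float in[3], float out[3]) {
  static const float M[3][3] = {   // placeholder coefficients
    { 0.31f, 0.64f, 0.05f },
    { 0.15f, 0.75f, 0.10f },
    { 0.02f, 0.11f, 0.87f }
  };
  for (int i = 0; i < 3; ++i)
    out[i] = M[i][0]*in[0] + M[i][1]*in[1] + M[i][2]*in[2];  // out-of-range results are kept as-is
}

int main() {
  const float rgb[3] = { -0.2f, 0.5f, 1.3f };  // out-of-gamut input is perfectly legal here
  float lms[3];
  unbound_transform(rgb, lms);
  std::printf("%g %g %g\n", lms[0], lms[1], lms[2]);
  return 0;
}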

November 19, 2014

Synfig Training Package in Portuguese

Synfig Training Package is available in Portuguese language now!...

GIMP Magazine Issue #6 released

The newly released issue #6 of GIMP Magazine features a "Using GIMP for portrait and fashion photography" master class by Aaron Tyree who uses GIMP professionally, and a gallery of other artworks and photos made or processed with GIMP.

The team is planning to switch to monthly releases, however they need your support to cover the costs of publishing a free magazine. You can sponsor the project at Patreon or visit the magazine's gift shop to make a donation.

November 18, 2014

Unix "remind" file for US holidays

Am I the only one who's always confused about when holidays happen?

Partly it's software, I guess. In these days of everybody keeping their schedules on Google's or Apple's servers, maybe most people keep up on these things.

But being the dinosaur I am, I'm still resistant to keeping my schedule in the cloud on a public server. What if I need to check for upcoming events while I'm on a trip out in the remote desert somewhere? (Not to mention the obvious privacy considerations.) For years I used PalmOS PDAs, but when I switched to Android and discovered how poor the offline calendar options are, I decided that I should learn how to use the old Unix standby.

It's been pretty handy. I run remind ~/[remind-file-name] when I log in in the morning, and it gives me a nice summary of upcoming events:

DPU Solar surcharge meeting, 5:30-8:30 tomorrow
NMGLUG meeting in 2 days' time

Of course, I can also have it email me with reminders, or pop up a window, but so far I haven't felt the need.

I can also display a nice calendar showing upcoming events for this month or the next several months. I made a couple of aliases:

mycal () {
        months=$1 
        if [[ x$months = x ]]
        then
                months=1 
        fi
        remind -c$months ~/Docs/Lists/remind
}

mycalp () {
        months=$1 
        if [[ x$months = x ]]
        then
                months=2 
        fi
        remind -p$months ~/Docs/Lists/remind | rem2ps -e -l > /tmp/mycal.ps
        gv /tmp/mycal.ps &
}

The first prints an ascii calendar; the second displays a nice postscript calendar complete with little icons for phases of the moon.

But what about those holidays?

Okay, that gives me a good way of storing reminders about appointments. But I still don't know when holidays are. (I had that problem with the PalmOS scheduling program, too -- it never knew about holidays either.)

Web searching didn't help much. Unfortunately, "remind" is a terrible name in this age of search engines. If someone has already solved this problem, I sure wasn't able to find any evidence of it. So instead, I went to Wikipedia's list of US holidays, with the remind man page in another tab, and wrote remind stanzas for each one -- except Easter, which is much more complicated.

But wait -- it turns out that remind already has code to calculate Easter! It just needs a slightly more complicated stanza: instead of the standard form of

REM  1 Apr +1 MSG April Fool's Day %b
I need to use this form:
REM  [trigger(easterdate(today()))] +1 MSG Easter %b

The %b in each case is what gives you the notice of when the event is in your reminders, e.g. "Easter tomorrow" or "Easter in two days' time". The +1 is how far beforehand you want to be reminded of each event.

So here's my remind file for US holidays. I make no guarantees that every one is right, though I did check them for the next 12 months and they all seem to be working.

#
# US Holidays
#
REM      1 Jan    +3 MSG New Year's Day %b
REM Mon 15 Jan    +2 MSG MLK Day %b
REM      2 Feb       MSG Groundhog Day %b
REM     14 Feb    +2 MSG Valentine's Day %b
REM Mon 15 Feb    +2 MSG President's Day %b
REM     17 Mar    +2 MSG St Patrick's Day %b
REM      1 Apr    +9 MSG April Fool's Day %b
REM  [trigger(easterdate(today()))] +1 MSG Easter %b
REM     22 Apr    +2 MSG Earth Day %b
REM Fri  1 May -7 +2 MSG Arbor Day %b
REM Sun  8 May    +2 MSG Mother's Day %b
REM Mon  1 Jun -7 +2 MSG Memorial Day %b
REM Sun 15 Jun       MSG Father's Day
REM      4 Jul    +2 MSG 4th of July %b
REM Mon  1 Sep    +2 MSG Labor Day %b
REM Mon  8 Oct    +2 MSG Columbus Day %b
REM     31 Oct    +2 MSG Halloween %b
REM Tue  2 Nov    +4 MSG Election Day %b
REM     11 Nov    +2 MSG Veteran's Day %b
REM Thu 22 Nov    +3 MSG Thanksgiving %b
REM     25 Dec    +3 MSG Christmas %b

November 16, 2014

Arnab Goswami is changing the way election results are delivered! From aggressive and dramatic delivery to statistics and even a scorecard – in spite of the fact that there is actually nothing happening right now other than counting – it looks like Twenty20, without breaks and cheer girls!

November 14, 2014

Tracking Usage

One of the long-standing goals of Unity has been to provide an application-focused presentation of the desktop. Under X11 this proves tricky, as anyone can connect to X and doesn't necessarily have to give information on which application they're associated with. So we wrote BAMF, which does a pretty good job of matching windows to applications, but it could never be perfect because there simply wasn't enough information available. When we started to rethink the world assuming a non-X11 display server, we knew there was one thing we really wanted: to never ever have something like BAMF again.

This meant designing, from startup to shutdown, complete tracking of an application before it started creating windows in the display server. We were then able to use the same mechanisms to create a consistent and secure environment for the applications. This is good for both developers and users, as their applications start in a predictable way each and every time they're started. And we also set up the per-application AppArmor confinement that the application lives in.

Enough backstory. What's really important to this blog post is that we also get a reliable event when an application starts and stops. So I wrote a little tool that takes those events out of the log and presents them as usage data. It is cleverly called:

$ ubuntu-app-usage

And it presents a list of all the applications that you've used on the system along with how long you've used them. How long do you spend messing around on the web? Now you know. You're welcome.

It's not perfect in that it uses all the time that you've used the device; it'd be nice to query just the last week or the last year to see that data as well. Perhaps even a percentage of time. I might add those little things in the future; if you're interested, you can beat me to it.

Some postcard illustrations to help KDE through the winter…

KDE winter fundraiser

If you’re following KDE community news, you probably already know that we’re running a donation campaign to help fund KDE community costs for next year.
Everyone giving at least 30€ will receive a cool postcard featuring Konqi, choosing one of the three models available.

So here are the three illustrations I made for these cards:

-Konqi Gift
Konqi Gift

-Konqi Freedom
Konqi Freedom

-Konqi Party
Konqi Party

So please consider giving something to this fundraiser, and enjoy the postcards! :)

On a side note, this week-end I’ll be at the Capitole du Libre and Akademy-Fr in Toulouse.
I’ll give a talk about contributing to KDE as a user, another one about the latest Krita news, and I’ll spend some time at the KDE booth to talk to people and show some cool pieces of software.
If you’re in the area, come and say hi ;)

November 13, 2014

Crockpot Green Chile Posole Stew

Posole is a traditional New Mexican dish made with pork, hominy and chile. Most often it's made with red chile, but Dave and I are both green chile fans so that's how I make it. I make no claims as to the resemblance between my posole and anything traditional; but it sure is good after a cold, windy day like we had today.

Dave is leery of anything called "posole" -- I think the hominy reminds him visually of garbanzo beans, which he dislikes -- but he admits that they taste fine in this stew. I call it "green chile stew" rather than "posole" when talking to him, and then he gets enthusiastic.

Ingredients (all quantities very approximate):

  • pork, about a pound; tenderloin works well but cheaper cuts are okay too
  • about 10 medium-sized roasted green chiles, whatever heat you prefer (or 1 large or 2 medium cans diced green chile)
  • 1 can hominy
  • 1 large or two medium russet potatoes (or equivalent amount of other type)
  • 1 can chicken broth
  • 1 tsp salt
  • 1 tsp red chile powder
  • 1/2 tsp cumin
  • fresh garlic to taste
  • black pepper and hot sauce (I use Tapatio) to taste

Start the crockpot heating: I start it on high then turn it down later. Add broth.

Dice potato. At least half the potato should be in small pieces, say 1/4" cubes, or even shredded; the other half can be larger chunks. I leave the skin on.

Pre-cook diced potato in the microwave for 7 minutes or until nearly soft enough to eat, in a loosely covered bowl with maybe 1" of water in the bottom. (This will get messy and the water gets all over and you have to clean the microwave afterward. I haven't found a solution to that yet.) Dump cooked potato into crockpot.

Dice pork into stew-sized pieces, trimming fat as desired. Add to crockpot.

De-skin and de-seed the green chiles and cut into short strips. (Or use canned or frozen.) Add to crockpot.

Add spices: salt, chile powder, cumin, and hot sauce (if your chiles aren't hot enough -- we have a bulk order of mild chiles this year so I sprinkled liberally with Tapatio).

Cover, reduce heat to low.

Cook 6-7 hours, occasionally stirring, tasting and correcting the seasoning. (I always add more of everything after I taste it, but that's me.)

Serve with bread, tortillas, sopaipillas or similar. French bread baked from the refrigerated dough in the supermarket works well if you aren't brave enough to make sopaipillas (I'm not, yet).

November 07, 2014

Working in Macromedia Flash 8


Here’s a time-lapse screen-capture of me working in Macromedia Flash 8. This little bit of animation is unlikely to make it into the finished movie, as I later decided on a different approach here. This just shows one aspect of how I use Flash; other work videos to come. Thanks to the Blender Institute for posting this and thinking about maybe possibly developing FLOSS vector animation tools.


November 06, 2014

New GIMP Save/Export plug-in: Saver

The split between Save and Export that GIMP introduced in version 2.8 has been a matter of much controversy. It's been over two years now, and people are still complaining on the gimp-users list.

Early on, I wrote a simple Python plug-in called Save-Export Clean, which saved over an image's current save or export filename regardless of whether the filename was XCF (save) or a different format (export). The idea was that you could bind Ctrl-S to the plug-in and not be pestered by needing to remember whether it was XCF, JPG or what.

Save-Export Clean has been widely cited, and I hope it's helped some people who were bothered by the Save/Export split. But personally I didn't like it very much. It wasn't very flexible -- there was no way to change the filename, for one thing, and it was awfully easy to overwrite an original image without knowing that you'd done it. I went back to using GIMP's separate Save and Export, but in the back of my mind I was turning over ideas, trying to understand my workflow and what I really wanted out of a GIMP Save plug-in.

[Screenshot: GIMP Saver-as... plug-in] The result of that was a new Python plug-in called Saver. I first wrote it a year ago, but I've been tweaking it and using it since then, with Ctrl-S bound to Saver and Ctrl-Shift-S bound to Saver as.... I wanted to make sure that it was useful and working reliably ... and somehow I never got around to writing it up and announcing it formally ... until now.

Saver, like Save/Export Clean, will overwrite your chosen filename, whether XCF or another format, and will mark the image as saved so GIMP won't pester you when you exit.

What's different? Mainly, three things:

  1. A Saver as... option so you can change the filename or file type.
  2. Merges multiple layers so they'll show up properly in your JPG or PNG image.
  3. An option to save as .xcf or .xcf.gz and, at the same time, export a copy in another format, possibly scaled down. So you can maintain your multi-layer XCF image but also update the JPG copy that you're going to put on the web.
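
To give a flavour of the core idea in GIMP's Python-Fu terms, here is a minimal sketch, assuming GIMP 2.8's Python API; it is not the real Saver plug-in (which handles far more, including the scaled-down export copy), just the basic overwrite-and-mark-clean behaviour:

#!/usr/bin/env python
# Minimal sketch only -- not the real Saver plug-in.
from gimpfu import *

def simple_saver(image, drawable):
    filename = image.filename              # whatever the image was loaded from or last saved as
    if not filename:
        return                             # a real plug-in would pop up a "Saver as..." dialog here
    if filename.lower().endswith((".xcf", ".xcf.gz", ".xcf.bz2")):
        # Native format: save with all layers intact.
        pdb.gimp_file_save(image, drawable, filename, filename)
    else:
        # Exporting: flatten a duplicate so every layer shows up in the JPG/PNG.
        copy = pdb.gimp_image_duplicate(image)
        flat = pdb.gimp_image_flatten(copy)
        pdb.gimp_file_save(copy, flat, filename, filename)
        pdb.gimp_image_delete(copy)
    # Mark the image as clean so GIMP won't pester you on exit.
    pdb.gimp_image_clean_all(image)

register("python_fu_simple_saver",
         "Save or export over the image's current filename",
         "Sketch of a combined save/export action",
         "Example", "Example", "2014",
         "<Image>/File/Simple Saver",
         "*", [], [],
         simple_saver)

main()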

I've been using Saver for nearly all my saving for the past year. If I'm just making a quick edit of a JPEG camera image, Ctrl-S overwrites it without questioning me. If I'm editing an elaborate multi-layer GIMP project, Ctrl-S overwrites the .xcf.gz. If I'm planning to export that image for the web, I Ctrl-Shift-S to bring up the Saver As... dialog, make sure the main filename is .xcf.gz, set a name (ending in .jpg) for the exported copy; and from then on, Ctrl-S will save both the XCF and the JPG copy.

Saver is available on my github page, with installation instructions here: GIMP Saver and Save/Export Clean Plug-ins. I hope you find it useful.

November 04, 2014

Angry Birds maker Rovio’s ‘Plunder Pirates’ featured on App Store

PlunderPirates_Render_03

Midoki studio’s latest release ‘Plunder Pirates’, a strategy game melding 4X exploration with tower defense, set in the Caribbean, was picked as ‘Editor’s Choice’ on Apple’s iTunes App Store. Art director Daniel Martinez-Normand has been instrumental in bringing the team to use Blender alongside their more traditional Maya workflow and is keen to share his and his studio’s experience of this transition.


First Blood

In November 2013, looming deadlines for their Crazy Taxi project and frustration with aspects of UV handling with the texture paint tools in Maya sent Daniel on a hunt for an alternative tool for the job. He quickly found Blender tutorials that drew him in, leading to a swift download and install of the package. The speed of this process impressed him further, especially when compared to the lengthy procedure to get Maya onto a machine, and while he continued to have some initial trouble with naming conventions in Blender as well as the unique ‘right click’ methodology, he was enamored enough to implement techniques learned from video tutorials into the team’s workflow.

Saviour

Chief among these was the use of the ‘Ocean’ modifier, used to generate wave surfaces for pre-rendered action sequences. They had initially abandoned plans for these sorts of shots, believing that while they could achieve the look they required in-game, the setup and render time needed to produce an equivalent set of shots was too much for their four-week schedule. However, Daniel managed to get a working ocean scene up and running in Blender within a few days, so they re-upped their expectations and went ahead with the sequence.

Workflow

  • Most models are made and exported to the game from Maya LT using FBX. Plunder Pirates uses Midoki’s own engine, and the model converter was designed to read FBX files with the 2012 specification.
  • Midoki do a lot of marketing images for social media, so once a model is finished they import it into Blender, re-apply Cycles materials, and subdivide the mesh with extra details if needed. Setting up a scene doesn’t take long and they can easily produce a couple of renders per week while carrying on work on the game.
  • Any UI assets that require pre-rendered images (such as buttons) are also rendered in Blender.
  • Finally, all the latest characters are fully modelled and baked in Blender, and only exported to Maya for rigging and FBX export. Blender is much faster than Maya at baking the textures and the quality of the bakes is stunning; adding Cycles baking to the mix has improved that further.

Testimony

“Blender is a solid and powerful professional tool, with an incredibly fast update cycle. And more importantly: in Blender you feel that every new tool and feature has been designed by someone who actually needs it. These are not tick boxes on a sales brochure as we are sadly used to. These are tools designed for a real purpose, and they work. And that’s something any studio, big or small, can benefit from.”   Daniel Martinez-Normand
He also says that if he had been as aware of the benefits of Blender three years ago, Midoki would likely have a Blender-only pipeline; their use of Maya stems from their previous roles in the industry and its integration in their pipeline. Their long term goal is to improve the model converter so it can read FBX with different specifications, from either Maya or from Blender.

Studio

Midoki are a small games studio based in Leamington Spa in England, a small town in the middle of the UK long associated with video game production, being home to Pitbull, Radiant Worlds, Sega Hardlight and Codemasters among others. The company was created around three years ago with a dream-team line-up of staff and management: from its chairman Ian Livingstone (Games Workshop / Eidos) to company director Ian Hetherington (Psygnosis / SCEE), the company has brought together some of the industry’s leading talents. In their short history they have collaborated with Sega on a ‘Crazy Taxi’ title, produced a 3D exploration app (Recce) of three major world cities (London, San Francisco and New York) and subsequently used that technology to gamify those urban environments in their title ‘Go Deliver’. Plunder Pirates, their first global release, has already racked up more than 4 million downloads since launch.


November 01, 2014

Chinese version of Training Package is available as Pay-What-You-Want!

Pay any amount you want and get the Chinese version of the Synfig Training Package...

Hardware support news

Trackballs

I dusted off (literally) my Logitech Marble trackball to replace the Intuos tablet + mouse combination that I was using to cut down on the lateral movement of my right arm which led to back pains.

Not that you care about that one bit, but that meant that I needed a way to get a scroll wheel working with this scroll-wheel-less trackball. That's now implemented in gnome-settings-daemon for GNOME 3.16. You'd run:


gsettings set org.gnome.settings-daemon.peripherals.trackball scroll-wheel-emulation-button 8

With "8" being the number of the mouse button to use to turn the trackball's ball into a scroll wheel. We plan to add an interface to configure this in the Settings.

Touchscreens

Touchscreens are now switched off when the screensaver is on. This means you'll usually need to use one of the hardware buttons on tablets, or a mouse or keyboard on laptops to turn the screen back on.

Note that you'll need a kernel patch to avoid surprises when the touchscreen is re-enabled.

More touchscreens

The driver for the Goodix touchscreen found in the Onda v975w is now upstream as well.

October 31, 2014

Simulating a web page timeout

Today dinner was a bit delayed because I got caught up dealing with an RSS feed that wasn't feeding. The website was down, and Python's urllib2, which I use in my "feedme" RSS fetcher, has an inordinately long timeout.

That certainly isn't the first time that's happened, but I'd like it to be the last. So I started to write code to set a shorter timeout, and realized: how does one test that? Of course, the offending site was working again by the time I finished eating dinner, went for a little walk then sat down to code.

I did a lot of web searching, hoping maybe someone had already set up a web service somewhere that times out for testing timeout code. No such luck. And discussions of how to set up such a site always seemed to center around installing elaborate heavyweight Java server-side packages. Surely there must be an easier way!

How about PHP? A web search for that wasn't helpful either. But I decided to try the simplest possible approach ... and it worked!

Just put something like this at the beginning of your HTML page (assuming, of course, your server has PHP enabled):

<?php sleep(500); ?>

Of course, you can adjust that 500 to be any delay you like.

Or you can even make the timeout adjustable, with a few more lines of code:

<?php
 if (isset($_GET['timeout']))
     sleep($_GET['timeout']);
 else
     sleep(500);
?>

Then surf to yourpage.php?timeout=6 and watch the page load after six seconds.

Simple once I thought of it, but it's still surprising no one had written it up as a cookbook formula. It certainly is handy. Now I just need to get some Python timeout-handling code working.
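
For what it's worth, the Python side might end up looking roughly like this (a sketch assuming Python 2's urllib2, as used by feedme; the function name is mine, not feedme's):

import socket
import urllib2

def fetch_with_timeout(url, timeout=15):
    # Give up after `timeout` seconds instead of urllib2's very long default.
    try:
        return urllib2.urlopen(url, timeout=timeout).read()
    except socket.timeout:
        print("Timed out fetching " + url)
    except urllib2.URLError as e:
        # A timeout can also surface as a URLError wrapping socket.timeout.
        print("Error fetching %s: %s" % (url, e.reason))
    return None

# Point it at the PHP page above to test:
# fetch_with_timeout("http://example.com/yourpage.php?timeout=30", timeout=5)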

October 30, 2014

appdata-tools is dead

PSA: If you’re using appdata-validate, please switch to appstream-util validate from the appstream-glib project. If you’re also using the M4 macro, just replace APPDATA_XML with APPSTREAM_XML. I’ll ship both the old binary and the old m4 file in appstream-glib for a little bit, but I’ll probably remove them again the next time we bump ABI. That is all. :)

October 27, 2014

Development Builds (with Sound Layer)

The new builds of development version with Sound Layer functionality are available for download....

October 24, 2014

Partial solar eclipse, with amazing sunspots

[Partial solar eclipse, with sunspots] We had perfect weather for the partial solar eclipse yesterday. I invited some friends over for an eclipse party -- we set up a couple of scopes with solar filters, put out food and drink and had an enjoyable afternoon.

And what views! The sunspot group right on the center of the sun's disk was the largest and most complex I'd ever seen, and there were some much smaller, more subtle spots in the path of the eclipse. Meanwhile, the moon's limb gave us a nice show of mountains and crater rims silhouetted against the sun.

I didn't do much photography, but I did hold the point-and-shoot up to the eyepiece for a few shots about twenty minutes before maximum eclipse, and was quite pleased with the result.

An excellent afternoon. And I made too much blueberry bread and far too many oatmeal cookies ... so I'll have sweet eclipse memories for quite some time.

October 23, 2014

perf.gnome.org – introduction

My talk at GUADEC this year was titled Continuous Performance Testing on Actual Hardware, and covered a project that I’ve been spending some time on for the last 6 months or so. I tackled this project because of accumulated frustration that we weren’t making consistent progress on performance with GNOME. For one thing, the same problems seemed to recur. For another thing, we would get anecdotal reports of performance problems that were very hard to put a finger on. Was the problem specific to some particular piece of hardware? Was it a new problem? Was it a problem that we had already addressed? I wrote some performance tests for gnome-shell a few years ago – but running them sporadically wasn’t that useful. Running a test once doesn’t tell you how fast something should be, just how fast it is at the moment. And if you run the tests again in 6 months, even if you remember what numbers you got last time, even if you still have the same development hardware, how can you possibly figure out what change is responsible? There will have been thousands of changes to dozens of different software modules.

Continuous testing is the goal here – every time we make a change, to run the same tests on the same set of hardware, and then to make the results available with graphs so that everybody can see them. If something gets slower, we can then immediately figure out what commit is responsible.

We already have a continuous build server for GNOME, GNOME Continuous, which is hosted on build.gnome.org. GNOME Continuous is a creation of Colin Walters, and internally uses Colin’s ostree to store the results. ostree, for those not familiar with it is a bit like Git for trees of binary files, and in particular for operating systems. Because ostree can efficiently share common files and represent the difference between two trees, it is a great way to both store lots of build results and distribute them over the network.

I wanted to start with the GNOME Continuous build server – for one thing so I wouldn’t have to babysit a separate build server. There are many ways that the build can break, and we’ll never get away from having to keep an eye on them. Colin and, more recently, Vadim Rutkovsky were already doing that for GNOME Continuous.

But actually putting performance tests into the set of tests that are run by build.gnome.org doesn’t work well. GNOME Continuous runs its tests on virtual machines, and a performance test on a virtual machine doesn’t give the numbers we want. For one thing, server hardware is different from desktop hardware – it generally has very limited graphics acceleration, it has completely different storage, and so forth. For a second thing, a virtual machine is not an isolated environment – other processes and unpredictable caching will affect the numbers we get – and any sort of noise makes it harder to see the signal we are looking for.

Instead, what I wanted was to have a system where we could run the performance tests on standard desktop hardware – not requiring any special management features.

Another architectural requirement was that the tests would keep on running, no matter what. If a test machine locked up because of a kernel problem, I wanted to be able to continue on, update the machine to the next operating system image, and try again.

The overall architecture is shown in the following diagram:

HWTest Architecture

The most interesting thing to note in the diagram is that the test machines don’t directly connect to build.gnome.org to download builds or to perf.gnome.org to upload the results. Instead, test machines are connected over a private network to a controller machine which supervises the process of updating to the next build and actually running the tests. The controller has two forms of control over the process – first, it controls the power to the test machines, so at any point it can power cycle a test machine and force it to reboot. Second, the test machines are set up to network boot from the controller, so that after power cycling, the controller machine can determine what to boot – a special image to do an update or the software being tested. The systemd journal from the test machine is exported over the network to the controller machine so that the controller can see when the update is done, and collect test results for publishing to perf.gnome.org.
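
To make the sequencing concrete, here is a very rough sketch of what one cycle of the controller's job amounts to; every function here is a placeholder standing in for the real machinery, not actual perf.gnome.org code:

import time

# Placeholder helpers -- in reality these would talk to a controllable power
# supply, the network-boot configuration, and the exported systemd journal.
def power_cycle(machine):
    print("power cycling " + machine)

def set_netboot_target(machine, image):
    print(machine + " will netboot " + image)

def journal_says(machine, message):
    return True                      # would really watch the test machine's journal

def run_one_build(machine, build_id):
    # 1. Boot the special updater image and deploy the new OS build.
    set_netboot_target(machine, "updater")
    power_cycle(machine)
    while not journal_says(machine, "update complete"):
        time.sleep(10)

    # 2. Boot the freshly deployed build and let the performance tests run.
    set_netboot_target(machine, build_id)
    power_cycle(machine)
    while not journal_says(machine, "tests finished"):
        time.sleep(10)

    # 3. Collect the results from the journal and publish them.
    print("uploading results for " + build_id + " to perf.gnome.org")

run_one_build("testmachine1", "20141023.1")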

perf.gnome.org is live now, and tests have been running for the last three months. In that period, the tests have run thousands of times, and I haven’t had to intervene once to deal with a problem. Here’s perf.gnome.org catching a regression (fix):

perf.gnome.org regression

I’ll cover more of the details of how the hardware testing setup works and how performance tests are written in future posts – for now you can find some more information at https://wiki.gnome.org/Projects/HardwareTesting.


October 22, 2014

Monkaa, Open Movie by Weybec

Monkaa was made by the new Mumbai studio Weybec. It is a 5-minute animated short, made entirely with Blender, GIMP and other Free/Open Source programs. It has been released as an Open Movie, including all production files and tutorials, under Creative Commons Attribution.

Although this short film has been produced independently, it was made possible thanks to support by the Blender Institute. Monkaa is a great educational example of design, animation and film making. Combined with all the extras and tutorials people will be enjoying the collection a lot – either as a DVD purchased in the blender.org e-store, or in Blender Cloud for the open production supporters.

(Or watch on youtube here)

Monkaa is a blue furred, pink faced monkey who consumes a crystallized meteorite, making Monkaa invincibly strong and too hot to handle. Exploring his superpower Monkaa zooms into an unexplored universe.

Produced by: Weybec – www.weybec.com
Released by Blender Institute, in Blender Cloud and the blender.org e-store

-Ton-

A surprise in the mousetrap

I went out this morning to check the traps, and found the mousetrap full ... of something large and not at all mouse-like.

[young bullsnake] It was a young bullsnake. Now slender and maybe a bit over two feet long, it will eventually grow into a larger relative of the gopher snakes that I used to see back in California. (I had a gopher snake as a pet when I was in high school -- they're harmless, non-poisonous and quite docile.)

The snake watched me alertly as I peered in, but it didn't seem especially perturbed to be trapped. In fact, it was so non-perturbed that when I opened the trap, the snake stayed right where it was. It had found a nice comfortable resting place, and it wasn't very interested in moving on a cold morning.

I had to poke it gently through the bars, hold the trap vertically and shake for a while before the snake grudgingly let go and slithered out onto the ground.

I wondered if it had found its way into the trap by chasing a mouse, but I didn't see any swellings that looked like it had eaten recently. I'm fairly sure it wasn't interested in the peanut butter bait.

I released the snake in a spot near the shed where the mousetrap is set up. There are certainly plenty of mice there for it to eat, and gophers when it gets a little larger, and there are lots of nice black basalt boulders to use for warming up in the morning, and gopher holes to hide in. I hope it sticks around -- gopher/bullsnakes are good neighbors.

[young bullsnake caught in mousetrap]

October 21, 2014

A GNOME Kernel wishlist

GNOME has long had relationships with Linux kernel development, in that we would have some developers do our bidding, helping us solve hard problems. Features like inotify, memfd and kdbus were all originally driven by the desktop.

I've posted a wishlist of kernel features we'd like to see implemented on the GNOME Wiki, and referenced it on the kernel mailing-list.

I hope it sparks healthy discussions about alternative (and possibly existing) features, allowing us to make instant progress.

October 20, 2014

KMZ Zorki 4 (Soviet Rangefinder)

Leica rangefinders

Rangefinder-type cameras predate modern single lens reflex cameras. People still use them; it’s just a different way of shooting. Since they’re no longer a mainstream type of camera, most manufacturers stopped making them a long time ago. Except Leica: Leica still makes digital and film rangefinders, and as you might guess, they come at significant cost. Even old Leica film rangefinders easily cost upwards of 1000 EUR. While Leica wasn’t the only brand to manufacture rangefinders through photographic history, it was (and still is) certainly the most iconic brand.

Zorki rangefinders

Now the Soviets essentially tried to copy Leica’s cameras; the result, the Zorki camera, was produced at KMZ. Many different versions exist, and with nearly 2 million cameras produced across more than 15 years, the Zorki 4 was without a doubt its most popular incarnation. Many consider the Zorki 4 to be the camera where the Soviets got it right.

That said, the Zorki 4 more or less looks like a Leica M with its single coupled viewfinder/rangefinder window. In most other ways it’s more like a pre-M Leica, with its m39 lens screw mount. Earlier Zorki 4s have a body finished with vulcanite, which is tough as nails, but if damaged is nearly impossible to fix/replace. Later Zorkis have a body finished with relatively cheap leatherette, which is much more easily damaged and commonly starts to peel off, but it should be relatively easy to make better than new. Most Zorkis come with either a Jupiter-8 50mm f/2.0 lens (a Zeiss Sonnar-inspired design), or an Industar-50 50mm f/3.5 (a Zeiss Tessar-inspired design). I’d highly recommend getting a Jupiter-8 if you can find one.

Buying a Zorki with a Jupiter

If you’re looking to buy a Zorki there are a few things to be aware of. Zorkis were produced during the fifties, the sixties and the seventies in Soviet Russia, often favoring quantity over quality, presumably to be able to meet quotas. The same is likely true for most Soviet lenses as well. So they are both old and may not have met high quality standards to begin with. So when buying a Zorki you need to keep in mind it might need repairs and a CLA (clean / lube / adjust). My particular Zorki had a dim viewfinder because of dirt both inside and out, the shutter speed dial was completely stuck at 1/60sec and the film takeup spool was missing. I sent my Zorki and Jupiter-8 to Oleg Khalyavin for repairs, shutter curtain replacement and CLA. Oleg was also able to provide me with a replacement film takeup spool or two. All in all, having work done on your Zorki will easily set you back about 100 EUR including shipping expenses. Keep this in mind before buying. And even if you get your Zorki in a usable state, you’ll probably have to have it serviced at some point. You may very well want to consider having it serviced sooner rather than later, allowing yourself the benefit of enjoying a newly serviced camera.

Zorkis come without a lens hood, and the Jupiter-8’s glass elements are typically only single coated, so a hood isn’t exactly a luxury. A suitable aftermarket lens hood isn’t hard to find though.

Choosing a film stock

So now you have a nice Zorki 4, waiting for film to be loaded into it. As of this writing (2014) there is a smörgåsbord of film available. I like shooting black & white, and I often shoot Ilford XP2 Super 400. Ilford’s XP2 is the only B&W film left that’s meant to be processed along with regular color negative film in regular C41 chemicals (so it can be processed by a one-hour-photo-service). Like most color negative film, XP2 has a big exposure latitude, remaining usable between ISO 50 — 800, which isn’t a luxury since the Zorki does not come with a lightmeter. While Ilford recommends shooting it at ISO 400, I’d suggest shooting it as if it’s ISO 200 film, giving you two stops of both underexposure and overexposure leeway.

I haven’t shot any real color negative film yet in the Zorki, but Kodak New Portra 400 quickly comes to mind. An inexpensive alternative could possibly be Fuji Superia X-TRA 400, which can be found very cheaply as most store brand 400 speed film.

Shooting a Zorki

Once you have a Zorki, there are still some caveats you need to be aware of… Most importantly, don’t change the shutter speed while the shutter isn’t cocked (cocking the shutter is done by advancing the film); not heeding this warning may result in internal damage to the camera mechanism. Other notable issues of lesser importance are minding the viewfinder’s parallax error (particularly when shooting at short distances) and making sure you load the film straight.

As I’ve already mentioned, the Zorki 4 does not come with a lightmeter, which means the camera won’t be helping you get the exposure right; you are on your own. You could use a pricy dedicated light meter (or a less pricy smartphone app), either of which is fairly cumbersome. Considering XP2’s exposure latitude, an educated guesswork approach becomes feasible. There’s a rule of thumb system called Sunny 16 for making educated guesstimates of exposure. Sunny 16 states that if you set your shutter speed to the closest reciprocal of your film speed, bright sunny daylight requires an aperture of f/16 to get a decent exposure. Other weather conditions require opening up the aperture according to this table:

Sunny f/16
Slightly Overcast f/11
Overcast f/8
Heavy Overcast f/5.6
Open Shade f/4

If you have doubts when classifying shooting conditions, you may want to err on the side of overexposure as color negative film tends to prefer overexposure over underexposure. If you’re shooting slide film you should probably avoid using Sunny 16 altogether, as slide film can be very unforgiving if improperly exposed.

Quick example: when shooting XP2 on an overcast day, assuming an alternate base ISO of 200 (as suggested earlier), the shutter speed should be set at 1/250th of a second and the aperture at f/8, giving a fairly large depth of field. Now if we want to reduce our depth of field, we can trade +2 stops of aperture for -2 stops of shutter speed, where we end up shooting at 1/1000th of a second at f/4.
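
If you prefer to let a few lines of code do the guesstimating, here is a small sketch of the same rule of thumb (the table and the ISO 200 starting point follow the reasoning above; the numbers are approximations, just like the rule itself):

# Sunny 16 guesstimator: pick a starting exposure from film speed and weather,
# then optionally trade depth of field for shutter speed.
SUNNY16 = {
    "sunny": 16.0,
    "slightly overcast": 11.0,
    "overcast": 8.0,
    "heavy overcast": 5.6,
    "open shade": 4.0,
}

def sunny16(iso, conditions):
    # Shutter speed is the closest reciprocal of the film speed.
    return 1.0 / iso, SUNNY16[conditions]

def open_up(shutter, fstop, stops):
    # Open the aperture by `stops` stops and shorten the shutter to compensate.
    return shutter / (2 ** stops), fstop / (2 ** (stops / 2.0))

shutter, fstop = sunny16(200, "overcast")    # XP2 rated at ISO 200: 1/200 s (use 1/250) at f/8
shutter, fstop = open_up(shutter, fstop, 2)  # -> 1/800 s (use 1/1000) at f/4
print("%.4f s at f/%.1f" % (shutter, fstop))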

Having film processed

After shooting a roll of XP2 (or any roll of color negative film) you need to take it to a local photo shop, chemist or supermarket to have it processed, scanned and printed. Usually you’ll be able to have your film processed in C41 chemicals, scanned to CD and get a set of prints for about 15 EUR or so. Keep in mind that most shops cut your film roll into strips of 4, 5 or 6 negatives depending on the sleeves they use. Also, some shops might not offer scanning services without ordering prints, since scanning is an integral part of the printmaking process. Resulting JPEG scans are usually about 2 megapixel (1800×1200) equivalent, or sometimes slightly lower (1536×1024).

A particular note when using XP2: since it’s processed as if it’s color negative film, it’s usually also scanned as if it’s color negative film, so the resulting should-be-monochrome scans (and prints for that matter) can often have a slight color cast. This color cast varies; my particular lab usually does a fairly decent job, where the scans have a subtle warm color cast, which isn’t unpleasant at all. But I’ve heard about nasty purplish color casts as well. Regardless, keep in mind that you might need to convert the scans to proper monochrome manually, which can easily be done with any random photo editing software in a heartbeat. The same goes for rotating the images: aside from the usual 90 degree turns, occasionally I get my images scanned upside down, where they need either 180 degree or 270 degree turns, and you’ll need to do that yourself as well.

Post-processing the images

First remove all useless data from the source JPEG and, particularly for XP2, remove the JPEG’s chroma (UV) channels, to get rid of any color cast:

$ jpegtran -copy none -grayscale -optimize -perfect 0001.JPG > ZRK_0001.JPG

Then add basic EXIF metadata:

$ exiv2 \
   -M"set Exif.Image.Artist Pascal de Bruijn" \
   -M"set Exif.Image.Make KMZ" \
   -M"set Exif.Image.Model Zorki 4" \
   -M"set Exif.Image.ImageNumber $(echo 0001.JPG | tr -cd '0-9' | sed 's#^0*##g')" \
   -M"set Exif.Image.Orientation 3" \
   -M"set Exif.Image.XResolution 300/1" \
   -M"set Exif.Image.YResolution 300/1" \
   -M"set Exif.Image.ResolutionUnit 2" \
   -M"set Exif.Image.YCbCrPositioning 1" \
   -M"set Exif.Photo.ComponentsConfiguration 1 2 3 0" \
   -M"set Exif.Photo.FlashpixVersion 48 49 48 48" \
   -M"set Exif.Photo.ExifVersion 48 50 51 48" \
   -M"set Exif.Photo.DateTimeDigitized $(stat --format="%y" 0001.JPG | awk -F '.' '{print $1}' | tr '-' ':')" \
   -M"set Exif.Photo.UserComment Ilford XP2 Super" \
   -M"set Exif.Photo.ExposureProgram 1" \
   -M"set Exif.Photo.ISOSpeedRatings 400" \
   -M"set Exif.Photo.FocalLength 50/1" \
   -M"set Exif.Photo.LensMake KMZ" \
   -M"set Exif.Photo.LensModel Jupiter-8 50/2" \
   -M"set Exif.Photo.FileSource 1" \
   -M"set Exif.Photo.ColorSpace 1" \
   ZRK_0001.JPG

Finally

Moar

If you want to read more about film photography you may want to consider adding Film Is Not Dead to your shelf.

October 19, 2014

What's next

Some thoughts on further steps in Synfig development....

TNT Drama Series ‘Legends’ Teaser

Loica_Legends_face_CU

Loica, a production studio based in Santiago, Chile, has used a novel combination of photography, 3D scanning and Blender to produce a stunning promo for TNT’s recent drama series ‘Legends’, starring British actor Sean Bean.

Working with their office in Santa Monica, Hollywood, they collaborated with the Turner Creative team for a month, taking still photography and 3D scans to the next level and developing post-production techniques to create a sequence that showcases a hyper-real cinematic feel, fitting with the high production values of the show and expressing its psychological mystery.

First steps

Starting with detailed photographs and 3D scans of the cast and props, they refined the models with sculpting tools, later adding layers such as hair, shaders and lighting to emphasize the realism and life of the scenes. They built Sean Bean as a low-poly mesh object and, using a multires modifier, sculpted fine detail. The model was rigged to enable the team to easily pose and adapt the character into multiple positions to match the basis photographs, which, combined with the scans, gave them the ability to fully control the look and feel of each shot.

Shading

Shading was added in the form of several passes including diffuse layers enhanced with cloning and stencil work while reflection layers were used to add life to the eyes and a feeling of texture to the various materials of character’s clothes and props. Movement of light sources in the scenes was used to bring motion to the otherwise completely still characters giving a sense of time frozen in a moment.

Details

Blender’s hair system was used in order to produce simulations of Sean’s beard hair and the team even went as far as adding smaller details such as skin hair on the nose and eyelashes. In motion, these touches bring a sense of true depth and realism to the shots.

‘Memory Loss’

The team made good use of the Cycles renderer by using a modified glass shader to produce a heavy depth of field effect that they have named ‘Memory Loss’. They explored this route after finding that simple post-process depth of field effects weren’t producing enough of what they envisioned. Passing a 3D plane with this shader through the character allowed them to finely control the spatial and focal elements of the shot.

Overall

The studio has reported that the team had a great experience with Blender especially in terms of the single program workflow. To be able to model, texture, sculpt, preview and render in the same package helped streamline the workflow which accelerated production.

Studio

Loica’s show reel exhibits a broad range of styles without watering down the quality and beauty of the work they do. From the artsy UNICEF promo and cheeky use of visual effects for Volkswagen to the refined quality of ABC’s ‘Once Upon a Time’ promo, they have proved themselves adept in multiple fields of graphic production. The promo can be viewed here and their website is http://loica.tv/

 

Stellarium 0.13.1 has been released!

The Stellarium development team, after 3 months of development, is proud to announce the first corrective release of Stellarium in the 0.13.x series - version 0.13.1.

This release brings a few new features and fixes:
- Added: Light layer for old_style landscapes
- Added: Auto-detect location via network lookup.
- Added: Seasonal rules for displaying constellations
- Added: Coordinates can be displayed as decimal degrees (LP: #1106743)
- Added: Support of multi-touch gestures on Windows 8 (LP: #1165754)
- Added: FOV on bottom bar can be displayed in DMS rather than fractional degrees (LP: #1361582)
- Added: Oculars plugins support eyepieces with permanent crosshairs (LP: #1364139)
- Added: Pointer Coordinates Plugin can display coordinates in systems other than RA/Dec (J2000.0) (LP: #1365784, #1377995)
- Added: Angle Measure Plugin can measure positional angles to the horizon now (LP: #1208143)
- Added: Search tool can search positions in coordinate systems other than RA/Dec (J2000.0) (LP: #1358706)
- Fixed: Galactic plane renamed to correct: Galactic equator (LP: #1367744)
- Fixed: Speed issues when computing lots of comets (LP: #1350418)
- Fixed: Spherical mirror distortion work correctly now (LP: #676260, #1338252)
- Fixed: Location coordinates on the bottom bar displayed correctly now (LP: #1357799)
- Fixed: Ecliptic coordinates for J2000.0 and grids displayed correctly now (LP: #1366567, #1369166)
- Fixed: Rule for selecting celestial objects (LP: #1357917)
- Fixed: Loading extra star catalogs (LP: #1329500, #1379241)
- Fixed: Creates spurious directory on startup (LP: #1357758)
- Fixed: Various GUI/rendering improvements (LP: #1380502, #1320065, #1338252, #1096050, #1376550, #1382689)
- Fixed: "missing disk in drive <whatever>" (LP: #1371183)

A huge thanks to our community whose contributions help to make Stellarium better!

October 18, 2014

Synfig Studio 0.64.2

The new stable version of Synfig Studio is released!...

October 16, 2014

Aspens are turning the mountains gold

Last week both of the local mountain ranges turned gold simultaneously as the aspens turned. Here are the Sangre de Cristos on a stormy day:

[Sangre de Cristos gold with aspens]

And then over the weekend, a windstorm blew a lot of those leaves away, and a lot of the gold is gone now. But the aspen groves are still beautiful up close ... here's one from Pajarito Mountain yesterday.

[Sangre de Cristos gold with aspens]

October 15, 2014

Quick update

Hi all

Just today I realized how long it's been since my last post; sadly, being in Cuba prevents me from posting regular updates.
For the last few months I've been working on polishing the FillHoles tools A LOT, for purposes way beyond regular sculpting, to the point of making them a very powerful tool in their own right for mesh healing. I will probably make a short video soon featuring a complete workflow for those tools, but honestly, this offline situation, which has now dragged on for more than a year, is demotivating me a lot.
I've started working on a quadrangulation tool that is proving challenging and interesting enough to carry me through this situation.
I hope in my next post I will be a little happier :)

Cheers to all


GNOME Software and Fonts

A few people have asked me now “How do I make my font show up in GNOME Software” and until today my answer has been something along the lines of “mrrr, it’s complicated“.

What we used to do is treat each font file in a package as an application, and then try to merge them together using some metrics found in the font and 444 semi-automatically generated AppData files from a manually updated .csv file. This wasn’t ideal as fonts were being renamed, added and removed, which quickly made the .csv file obsolete. The summary and descriptions were not translated and hard to modify. We used the pre-0.6 format AppData files as the MetaInfo specification had not existed when this stuff was hacked up just in time for Fedora 20.

I’ve spent the better part of today making this a lot more sane, but in the process I’m going to need a bit of help from packagers in Fedora, and maybe even helpful upstreams. These are the notes of what I’ve got so far:

Font components are supersets of font faces, so we’d include fonts together that make a cohesive set, for instance, “SourceCode” would consist of “SourceCodePro”, “SourceSansPro-Regular” and “SourceSansPro-ExtraLight”. This is so the user can press one button and get a set of fonts, rather than having to install something new when they’re in the application designing something. Font components need a one-line summary for GNOME Software and optionally a long description. The icon and screenshots are automatically generated.

So, what do you need to do if you maintain a package with a single font, or where all the fonts are shipped in the same (sub)package? Simply ship a file like this as /usr/share/appdata/Liberation.metainfo.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name <you@domain> -->
<component type="font">
  <id>Liberation</id>
  <metadata_license>CC0-1.0</metadata_license>
  <name>Liberation</name>
  <summary>Open source versions of several commercial fonts</summary>
  <description>
    <p>
      The Liberation Fonts are intended to be replacements for Times New Roman,
      Arial, and Courier New.
    </p>
  </description>
  <update_contact>richard_at_hughsie_dot_com</update_contact>
  <url type="homepage">http://fedorahosted.org/liberation-fonts/</url>
</component>

There can be up to 3 paragraphs of description, and the summary has to be just one line. Try to avoid too much technical content here; this is designed to be shown to end users who probably don’t know what TTF means or what MSCoreFonts are.

It’s a little more tricky when there are multiple source tarballs for a font component, or when the font is split up into subpackages by a packager. In this case, each subpackage needs to ship something like this into /usr/share/appdata/LiberationSerif.metainfo.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name <you@domain> -->
<component type="font">
  <id>LiberationSerif</id>
  <metadata_license>CC0-1.0</metadata_license>
  <extends>Liberation</extends>
</component>

This won’t end up in the final metadata (or be visible) in the software center, but it will tell the metadata extractor that LiberationSerif should be merged into the Liberation component. All the automatically generated screenshots will be moved to the right place too.

Moving the metadata to font packages makes the process much more transparent, letting packagers write their own descriptions and actually influence how things show up in the software center. I’m happy to push some of my existing content from the .csv file upstream.

These MetaInfo files are not supposed to replace the existing fontconfig files, nor do I think they should be merged into one file or format. If your package just contains one font used internally, or where there is only partial coverage of the alphabet, I don’t think we want to show this in GNOME Software, and thus it doesn’t need any new MetaInfo files.

October 14, 2014

Blenderart Mag Issue #45 now available

Welcome to Issue #45, “Cycles Circus”

Come jump on the Cycles Circus merry-go-round with us as we not only explore some fun features of Cycles, but also play on a comet and meet the geniuses of Ray and Clovis.

So grab your copy today. Also be sure to check out our gallery of wonderful images submitted by very talented members of our community.

Table of Contents: 

  • Quick Comet Animation
  • Book Review: Cycles Materials and Textures
  • Baby Elephant
  • Ray and Clovis

And Lots More…

October 12, 2014

Synfig Studio 0.64.2 - Release Candidate #2

The second release candidate of upcoming Synfig Studio 0.64.2 is available for download now....

October 11, 2014

Railroading exponentially

or: Smart communities can still be stupid

I attended my first Los Alamos County Council meeting yesterday. What a railroad job!

The controversial issue of the day was the town's "branding". Currently, as you drive into Los Alamos on highway 502, you pass a tasteful rock sign proclaiming "LOS ALAMOS: WHERE DISCOVERIES ARE MADE". But back in May, the county council announced the unanimous approval of a new slogan, for which they'd paid an ad agency some $55,000: "LIVE EXPONENTIALLY".

As you might expect in a town full of scientists, the announcement was greeted with much dismay. What is it supposed to mean, anyway? Is it a reference to exponential population growth? Malignant tumor growth? Gaining lots of weight as we age?

The local online daily, tired of printing the flood of letters protesting the stupid new slogan, ran a survey about the "Live Exponentially" slogan. The results were that 8.24% liked it, 72.61% didn't, and 19.16% didn't like it and offered alternatives or comments. My favorites were Dave's suggestion of "It's Da Bomb!", and a suggestion from another reader, "Discover Our Secrets"; but many of the alternate suggestions were excellent, or hilarious, or both -- follow the link to read them all.

For further giggles, try a web search on the term. If you search without quotes, Ebola tops the list. With quotes, you get mostly religious tracts and motivational speakers.

The Council Meeting

(The rest of this is probably only of interest to Los Alamos folk.)

Dave read somewhere -- it wasn't widely announced -- that Friday's council meeting included an agenda item to approve spending $225,000 -- yes, nearly a quarter of a million dollars -- on "brand implementation". Of course, we had to go.

In the council discussion leading up to the call for public comment, everyone spoke vaguely of "branding" without mentioning the slogan. Maybe they hoped no one would realize what they were really voting for. But in the call for public comment, Dave raised the issue and urged them to reconsider the slogan.

Kristin Henderson seemed to have quite a speech prepared. She acknowledged that "people who work with math" universally thought the slogan was stupid, but she said that people from a liberal arts background, like herself, use the term to mean hiking, living close to nature, listening to great music, having smart friends and all the other things that make this such a great place to live. (I confess to being skeptical -- I can't say I've ever heard "exponential" used in that way.)

Henderson also stressed the research and effort that had already gone into choosing the current slogan, and dismissed the idea that spending another $50,000 on top of the $55k already spent would be "throwing money after bad." She added that showing the community some images to go with the slogan might change people's minds.

David Izraelevitz admitted that being an engineer, he initially didn't like "Live Exponentially". But he compared it to Apple's "Think Different": though some might think it ungrammatical, it turned out to be a highly successful brand because it was coupled with pictures of Gandhi and Einstein. (Hmm, maybe that slogan should be "Live Exponential".)

Izraelevitz described how he convinced a local business owner by showing him the ad agency's full presentation, with pictures as well as the slogan, and said that we wouldn't know how effective the slogan was until we'd spent the $50k for logo design and an implementation plan. If the council didn't like the results they could choose not to go forward with the remaining $175,000 for "brand implementation". (Councilor Fran Berting had previously gotten clarification that those two parts of the proposal were separate.)

Rick Reiss said that what really mattered was getting business owners to approve the new branding -- "the people who would have to use it." It wasn't so important what people in the community thought, since they didn't have logos or ads that might incorporate the new branding.

Pete Sheehey spoke up as the sole dissenter. He pointed out that most of the community input on the slogan has been negative, and that should be taken into account. The proposed slogan might have a positive impact on some people but it would have a negative impact on others, and he couldn't support the proposal.

Fran Berting said she was "not all that taken" with the slogan, but agreed with Izraelevitz that we wouldn't know if it was any good without spending the $50k. She echoed the "so much work has already gone into it" argument. Reiss also echoed "so much work", and that he liked the slogan because he saw it in print with a picture.

But further discussion was cut off. It was 1:30, the fixed end time for the meeting, and chairman Geoff Rodgers (who had pretty much stayed out of the discussion to this point) called for a vote. When the roll call got to Sheehey, he objected to the forced vote while they were still in the middle of a discussion. But after a brief consultation on Robert's Rules of Order, chairman Rogers declared the discussion over and said the vote would continue. The motion was approved 5-1.

The Exponential Railroad

Quite a railroading. One could almost think it had been planned that way.

First, the item was listed as one of two in the "Consent Agenda" -- items which were expected to be approved all together in one vote with no discussion or public comment. It was moved at the last minute into "Business"; but that put it last on the agenda.

Normally that wouldn't have mattered. But although the council more often meets in the evenings and goes as long as it needs to, Friday's meeting had a fixed time of noon to 1:30. Even I could see that wasn't much time for all the items on the agenda.

And that mid-day timing meant that working folk weren't likely to be able to listen or comment. Further, the branding issue didn't come up until 1 pm, after some of the audience had already left to go back to work. As a result, there were only two public comments.

Logic deficit

I heard three main arguments repeated by every council member who spoke in favor:

  1. the slogan makes much more sense when viewed with pictures -- they all voted for it because they'd seen it presented with visuals;
  2. a lot of time, effort and money has already gone into this slogan, so it didn't make sense to drop it now; and
  3. if they didn't like the logo after spending the first $50k, they didn't have to approve the other $175k.

The first argument doesn't make any sense. If the pictures the council saw were so convincing, why weren't they showing those images to the public? Why spend an additional $50,000 for different pictures? I guess $50k is just pocket change, and anyone who thinks it's a lot of money is just being silly.

As for the second and third, they contradict each other. If most of the board thinks now that the initial $50k contract was so much work that we have to go forward with the next $50k, what are the chances that they'll decide not to continue after they've already invested $100k?

Exponentially low, I'd say.

I was glad of one thing, though. As a newcomer to the area faced with a ballot next month, it was good to see the council members in action, seeing their attitudes toward spending and how much they care about community input. That will be helpful come ballot time.

If you're in the same boat but couldn't make the meeting, catch the October 10, 2014 County Council Meeting video.

And now for some hardware (Onda v975w)

Prodded by Adam Williamson's fedlet work, and by my inability to get an Android phone to display anything, I bought an x86 tablet.

At first, I was more interested in buying a brand-name one, such as the Dell Venue 8 Pro Adam has, or the Lenovo Miix 2 that Benjamin Tissoires doesn't seem to get enough time to hack on. But all those tablets are around 300€ at most retailers, and have a smaller 7- or 8-inch screen.

So I bought a "not exported out of China" tablet, the 10" Onda v975w. The prospect of getting a no-name tablet scared me a little. Would it be as "good" (read bad) as a PadMini or an Action Pad?


Vrrrroooom.


Well, the hardware's pretty decent, and feels rather solid. There's a small amount of light leakage on the side of the touchscreen, but not something too noticeable. I wish it had a button on the bezel to mimic the Windows button on some other tablets, but the edge gestures should replace it nicely.

The screen is pretty gorgeous and its high DPI triggers the eponymous mode in GNOME.

With help of various folks (Larry Finger, and the aforementioned Benjamin and Adam), I got the tablet to a state where I could use it to replace my force-obsoleted iPad 1 to read comic books.

I've put up a wiki page with the status of hardware/kernel support. It doesn't contain all my notes just yet (sound is working, touchscreen will work very very soon, and various "basic" features are being worked on).

I'll be putting up the fixed-up Wi-Fi driver and more instructions about installation on the Wiki page.

And if you want to make the jump, the tablets are available at $150 plus postage from Aliexpress.

Update: On Google+ and in comments of this blog, it was pointed out that the seller on Aliexpress was trying to scam people. All my apologies, I just selected the cheapest from this website. I personally bought it on Amazon.fr using NewTec24 FR as the vendor.

October 09, 2014

A bit about taking pictures

Though I like going out and taking pictures at the places I visit, I haven’t actually blogged about taking pictures before. I thought I should share some tips and experiences.

This is not a “What’s in my bag” kind of post. I won’t, and can’t, tell you what the best cameras or lenses are. I simply don’t know. These are some things I’ve learnt and that have worked for me and my style of taking pictures, and wish I knew earlier on.

Pack

Keep gear light and compact, and focus on what you have. You will often bring more than you need. If you get the basics sorted out, you don’t need much to take a good picture. Identify a couple of lenses you like using and get to know their qualities and limits.

Your big lenses aren’t going to do you any good if you’re reluctant to take them with you. Accept that your stuff is going to take a beating. I used to obsess over scratches on my gear, I don’t anymore.

I don’t keep a special bag. I wrap my camera in a hat or hoody and lenses in thick socks and toss them into my rucksack. (Actually, this is one tip you might want to ignore.)

Watch out for gear creep. It’s tempting to wait until that new lens comes out and get it. Ask yourself: will this make me go out and shoot more? The answer usually is probably not, and the money is often better spent on that trip to take those nice shots with the stuff you already have.

Learn

Try some old manual lenses to learn with. Not only are these cheap and able to produce excellent image quality, it’s a great way to learn how aperture, shutter speed, and sensitivity affect exposure. Essential for getting the results you want.

I only started understanding this after having inherited some old lenses and started playing around with them. The fact they’re all manual makes you realise quicker how things physically change inside the camera when you modify a setting, compared to looking at abstract numbers on the back of the screen. I find them much more engaging and fun to use compared to full automatic lenses.

You can get M42 lens adapters for almost any camera type, but they work specially well with mirrorless cameras. Here’s a list of the Asahi Takumar (old Pentax) series of lenses, which has some gems. You can pick them up off eBay for just a few tenners.

My favourites are the SMC 55mm f/1.8 and SMC 50mm f/1.4. They produce lovely creamy bokeh and, at the same time, great sharpness where the image is in focus.

See

A nice side effect of having a camera on you is that you look at the world differently. Crouch. Climb on things. Lean against walls. Get unique points of view (but be careful!). Annoy your friends because you need to take a bit more time photographing that beetle.

Some shots you take might be considered dumb luck. However, it’s up to you to increase your chances of “being lucky”. You might get lucky wandering around through that park, but you know you certainly won’t be when you just sit at home reading the web about camera performance.

Don’t worry about the execution too much. The important bit is that your picture conveys a feeling. Some things can be fixed in post-production. You can’t fix things like focus or motion blur afterwards, but even these are details and not getting them exactly right won’t mean your picture will be bad.

Don’t compare

Even professional photographers take bad pictures. You never see the shots that didn’t make it. Being a good photographer is as much about being a good editor. The very best still take crappy shots sometimes, and alright shots most of the time. You just don’t see the bad ones.

Ask people you think are great photographers to point out something they’re unhappy about in that amazing picture they took. Chances are they will point out several flaws that you weren’t even aware about.

Share

Don’t forget to have a place to actually post your images. Flickr or Instagram are fine for this. We want to see your work! Even if it’s not perfect in your eyes. Do your own thing. You have your own style.

Go

I hope that was helpful. Now stop reading and don’t worry too much. Get out there and have fun. Shoot!

What's nesting in our truck's engine?

We park the Rav4 outside, under an overhang. A few weeks ago, we raised the hood to check the oil before heading out on an adventure, and discovered a nest of sticks and grass wedged in above the valve cover. (Sorry, no photos -- we were in a hurry to be off and I didn't think to grab the camera.)

Pack rats were the obvious culprits, of course. There are lots of them around, and we've caught quite a few pack rats in our live traps. Knowing that rodents can be a problem since they like to chew through hoses and wiring, we decided we'd better keep an eye on the Rav and maybe investigate some sort of rodent-repelling technology.

Sunday, we got back from another adventure, parked the Rav in its usual place, went inside to unload before heading out for an evening walk, and when we came back out, there was a small flock of birds hanging around under the Rav. Towhees! Not only hanging around under the still-warm engine, but several times we actually saw one fly between the tires and disappear.

Could towhees really be our engine nest builders? And why would they be nesting in fall, with the days getting shorter and colder?

I'm keeping an eye on that engine compartment now, checking every few days. There are still a few sticks and juniper sprigs in there, but no real nest has reappeared so far. If it does, I'll post a photo.

October 08, 2014

Wed 2014/Oct/08

  • Growstuff's Crowdfunding Campaign for an API for Open Food Data

    During GUADEC 2012, Alex Skud Bailey gave a keynote titled What's Next? From Open Source to Open Everything. It was about how principles like de-centralization, piecemeal growth, and shared knowledge are being applied in many areas, not just software development. I was delighted to listen to such a keynote, which validated my own talk from that year, GNOME and the Systems of Free Infrastructure.

    During the hallway track I had the chance to talk to Skud. She is an avid knitter and was telling me about Ravelry, a web site for people who knit/crochet. They have an excellent database of knitting patterns, a yarn database, and all sorts of deep knowledge on the craft gathered over the years.

    At that time I was starting my vegetable garden at home. It turned out that Skud is also an avid gardener. We ended up talking about how it would be nice to have a site like Ravelry, but for small-scale food gardeners. You would be able to track your own crops, but also consult about the best times to plant and harvest certain species. You would be able to say how well a certain variety did in your location and climate. Over time, by aggregating people's data, we would be able to compile a free database of crop data, local varieties, and climate information.

    Growstuff begins

    Growstuff

    Skud started coding Growstuff from scratch. I had never seen a project start from zero-lines-of-code, and be run in an agile fashion, for absolutely everything, and I must say: I am very impressed!

    Every single feature runs through the same process: definition of a story, pair programming, integration. Newbies are encouraged to participate. They pair up with a more experienced developer, and they get mentored.

    They did that even for the very basic skeleton of the web site: in the beginning there were stories for "the web site should display a footer with links to About and the FAQ", and "the web site should have a login form". I used to think that in order to have a collaboratively-developed project, one had to start with at least a basic skeleton, or a working prototype — Growstuff proved me wrong. By having a friendly, mentoring environment with a well-defined process, you can start from zero-lines-of-code and get excellent results quickly. The site has been fully operational for a couple of years now, and it is a great place to be.

    Growstuff is about the friendliest project I have seen.

    Local crop data

    Tomato heirloom        varieties

    I learned the basics of gardening from a couple of "classic" books: the 1970s books by John Seymour which my mom had kept around, and How to Grow More Vegetables, by John Jeavons. These are nominally excellent — they teach you how to double-dig to loosen the soil and keep the topsoil, how to transplant fragile seedlings so you don't damage them, how to do crop rotation.

    However, their recommendations on garden layouts or crop rotations are biased towards the author's location. John Seymour's books are beautifully illustrated, but are about the United Kingdom, where apples and rhubarb may do well, but would be scorched where I live in Mexico. Jeavons's book is biased towards California, which is somewhat closer climate-wise to where I live, but some of the species/varieties he mentions are practically impossible to get here — and, of course, species which are everyday fare here are completely missing in his book. Pity the people outside the tropics, for whom mangoes are a legend from faraway lands.

    The problem is that the books lack knowledge of good crops for wherever you may live. This is the kind of thing that is easily crowdsourced, where "easily" means a Simple Matter Of Programming.

    An API for Open Food Data

    Growstuff has been gathering crop data from people's use of the site. Someone plants spinach. Someone harvests tomatoes. Someone puts out seeds for trade. The next steps are to populate the site with fine-grained varieties of major crops (e.g. the zillions of varieties of peppers or tomatoes), and to provide an API to access planting information in a convenient way for analysis.

    Right now, Growstuff is running a fundraising campaign to implement this API — allowing developers to work on this full-time, instead of scraping from their "free time" otherwise.

    I encourage you to give money to Growstuff's campaign. These are good people.

    To give you a taste of the non-trivialness of implementing this, I invite you to read Skud's post on interop and unique IDs for food data. This campaign is not just about adding some features to Growstuff; it is about making it possible for open food projects to interoperate. Right now there are various free-culture projects around food production, but little communication between them. This fundraising campaign attempts to solve part of that problem.

    I hope you can contribute to Growstuff's campaign. If you are into local food production, local economies, crowdsourced databases, and that sort of thing — these are your people; help them out.

    Resources for more in-depth awesomeness